00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 108 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3286 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.002 Started by timer 00:00:00.106 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.107 The recommended git tool is: git 00:00:00.107 using credential 00000000-0000-0000-0000-000000000002 00:00:00.124 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.153 Fetching changes from the remote Git repository 00:00:00.160 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.213 > git --version # 'git version 2.39.2' 00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.229 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.229 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.356 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.366 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.377 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:05.377 > git config core.sparsecheckout # timeout=10 00:00:05.387 > git read-tree -mu HEAD # timeout=10 00:00:05.402 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:05.421 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:05.421 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:05.505 [Pipeline] Start of Pipeline 00:00:05.520 [Pipeline] library 00:00:05.521 Loading library shm_lib@master 00:00:05.521 Library shm_lib@master is cached. Copying from home. 00:00:05.535 [Pipeline] node 00:00:05.541 Running on VM-host-SM0 in /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:05.544 [Pipeline] { 00:00:05.553 [Pipeline] catchError 00:00:05.554 [Pipeline] { 00:00:05.563 [Pipeline] wrap 00:00:05.571 [Pipeline] { 00:00:05.580 [Pipeline] stage 00:00:05.583 [Pipeline] { (Prologue) 00:00:05.604 [Pipeline] echo 00:00:05.605 Node: VM-host-SM0 00:00:05.612 [Pipeline] cleanWs 00:00:05.621 [WS-CLEANUP] Deleting project workspace... 00:00:05.621 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.626 [WS-CLEANUP] done 00:00:05.815 [Pipeline] setCustomBuildProperty 00:00:05.877 [Pipeline] httpRequest 00:00:05.896 [Pipeline] echo 00:00:05.897 Sorcerer 10.211.164.101 is alive 00:00:05.904 [Pipeline] httpRequest 00:00:05.907 HttpMethod: GET 00:00:05.908 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.908 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:05.909 Response Code: HTTP/1.1 200 OK 00:00:05.909 Success: Status code 200 is in the accepted range: 200,404 00:00:05.910 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.641 [Pipeline] sh 00:00:06.920 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.935 [Pipeline] httpRequest 00:00:06.958 [Pipeline] echo 00:00:06.959 Sorcerer 10.211.164.101 is alive 00:00:06.966 [Pipeline] httpRequest 00:00:06.970 HttpMethod: GET 00:00:06.970 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:06.971 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:06.972 Response Code: HTTP/1.1 200 OK 00:00:06.973 Success: Status code 200 is in the accepted range: 200,404 00:00:06.973 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:30.083 [Pipeline] sh 00:00:30.358 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:32.916 [Pipeline] sh 00:00:33.190 + git -C spdk log --oneline -n5 00:00:33.190 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:33.190 330a4f94d nvme: check pthread_mutex_destroy() return value 00:00:33.190 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:33.190 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:00:33.190 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:00:33.210 [Pipeline] withCredentials 00:00:33.219 > git --version # timeout=10 00:00:33.228 > git --version # 'git version 2.39.2' 00:00:33.243 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:33.246 [Pipeline] { 00:00:33.252 [Pipeline] retry 00:00:33.254 [Pipeline] { 00:00:33.266 [Pipeline] sh 00:00:33.544 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:33.811 [Pipeline] } 00:00:33.833 [Pipeline] // retry 00:00:33.838 [Pipeline] } 00:00:33.858 [Pipeline] // withCredentials 00:00:33.868 [Pipeline] httpRequest 00:00:33.894 [Pipeline] echo 00:00:33.896 Sorcerer 10.211.164.101 is alive 00:00:33.906 [Pipeline] httpRequest 00:00:33.911 HttpMethod: GET 00:00:33.911 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:33.912 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:33.925 Response Code: HTTP/1.1 200 OK 00:00:33.926 Success: Status code 200 is in the accepted range: 200,404 00:00:33.926 Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:53.250 [Pipeline] sh 00:00:53.532 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:54.957 [Pipeline] sh 00:00:55.232 + git -C dpdk log --oneline -n5 00:00:55.233 eeb0605f11 version: 23.11.0 00:00:55.233 238778122a doc: update release notes for 23.11 00:00:55.233 46aa6b3cfc doc: fix description of RSS features 
00:00:55.233 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:00:55.233 7e421ae345 devtools: support skipping forbid rule check 00:00:55.250 [Pipeline] writeFile 00:00:55.265 [Pipeline] sh 00:00:55.545 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:55.557 [Pipeline] sh 00:00:55.835 + cat autorun-spdk.conf 00:00:55.835 SPDK_TEST_UNITTEST=1 00:00:55.835 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:55.835 SPDK_TEST_NVME=1 00:00:55.835 SPDK_TEST_BLOCKDEV=1 00:00:55.835 SPDK_RUN_ASAN=1 00:00:55.835 SPDK_RUN_UBSAN=1 00:00:55.835 SPDK_TEST_RAID5=1 00:00:55.835 SPDK_TEST_NATIVE_DPDK=v23.11 00:00:55.835 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:55.835 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:55.842 RUN_NIGHTLY=1 00:00:55.844 [Pipeline] } 00:00:55.860 [Pipeline] // stage 00:00:55.894 [Pipeline] stage 00:00:55.896 [Pipeline] { (Run VM) 00:00:55.910 [Pipeline] sh 00:00:56.186 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:56.187 + echo 'Start stage prepare_nvme.sh' 00:00:56.187 Start stage prepare_nvme.sh 00:00:56.187 + [[ -n 1 ]] 00:00:56.187 + disk_prefix=ex1 00:00:56.187 + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]] 00:00:56.187 + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]] 00:00:56.187 + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf 00:00:56.187 ++ SPDK_TEST_UNITTEST=1 00:00:56.187 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:56.187 ++ SPDK_TEST_NVME=1 00:00:56.187 ++ SPDK_TEST_BLOCKDEV=1 00:00:56.187 ++ SPDK_RUN_ASAN=1 00:00:56.187 ++ SPDK_RUN_UBSAN=1 00:00:56.187 ++ SPDK_TEST_RAID5=1 00:00:56.187 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:00:56.187 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:00:56.187 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:56.187 ++ RUN_NIGHTLY=1 00:00:56.187 + cd /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:56.187 + nvme_files=() 00:00:56.187 + declare -A nvme_files 00:00:56.187 + backend_dir=/var/lib/libvirt/images/backends 00:00:56.187 + nvme_files['nvme.img']=5G 00:00:56.187 + nvme_files['nvme-cmb.img']=5G 00:00:56.187 + nvme_files['nvme-multi0.img']=4G 00:00:56.187 + nvme_files['nvme-multi1.img']=4G 00:00:56.187 + nvme_files['nvme-multi2.img']=4G 00:00:56.187 + nvme_files['nvme-openstack.img']=8G 00:00:56.187 + nvme_files['nvme-zns.img']=5G 00:00:56.187 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:56.187 + (( SPDK_TEST_FTL == 1 )) 00:00:56.187 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:56.187 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:56.187 + for nvme in "${!nvme_files[@]}" 00:00:56.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:56.187 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.187 + for nvme in "${!nvme_files[@]}" 00:00:56.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:56.187 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.187 + for nvme in "${!nvme_files[@]}" 00:00:56.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:56.187 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:56.187 + for nvme in "${!nvme_files[@]}" 00:00:56.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:56.187 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.187 + for nvme in "${!nvme_files[@]}" 00:00:56.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:56.187 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.187 + for nvme in "${!nvme_files[@]}" 00:00:56.187 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:56.187 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:56.445 + for nvme in "${!nvme_files[@]}" 00:00:56.445 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:56.445 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:56.445 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:56.445 + echo 'End stage prepare_nvme.sh' 00:00:56.445 End stage prepare_nvme.sh 00:00:56.456 [Pipeline] sh 00:00:56.734 + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:56.734 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -H -a -v -f ubuntu2204 00:00:56.734 00:00:56.734 DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant 00:00:56.734 SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk 00:00:56.734 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest 00:00:56.734 HELP=0 00:00:56.734 DRY_RUN=0 00:00:56.734 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img, 00:00:56.734 NVME_DISKS_TYPE=nvme, 00:00:56.734 NVME_AUTO_CREATE=0 00:00:56.734 NVME_DISKS_NAMESPACES=, 00:00:56.734 NVME_CMB=, 00:00:56.734 NVME_PMR=, 00:00:56.734 NVME_ZNS=, 00:00:56.734 NVME_MS=, 00:00:56.734 NVME_FDP=, 00:00:56.734 SPDK_VAGRANT_DISTRO=ubuntu2204 00:00:56.734 SPDK_VAGRANT_VMCPU=10 00:00:56.734 SPDK_VAGRANT_VMRAM=12288 00:00:56.734 SPDK_VAGRANT_PROVIDER=libvirt 00:00:56.734 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:56.734 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:56.734 SPDK_OPENSTACK_NETWORK=0 
00:00:56.734 VAGRANT_PACKAGE_BOX=0 00:00:56.734 VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:56.734 FORCE_DISTRO=true 00:00:56.734 VAGRANT_BOX_VERSION= 00:00:56.734 EXTRA_VAGRANTFILES= 00:00:56.734 NIC_MODEL=e1000 00:00:56.734 00:00:56.734 mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt' 00:00:56.734 /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest 00:00:59.264 Bringing machine 'default' up with 'libvirt' provider... 00:01:00.198 ==> default: Creating image (snapshot of base box volume). 00:01:00.198 ==> default: Creating domain with the following settings... 00:01:00.198 ==> default: -- Name: ubuntu2204-22.04-1711172311-2200_default_1721562178_d93ec3bcec46eced5dd9 00:01:00.198 ==> default: -- Domain type: kvm 00:01:00.198 ==> default: -- Cpus: 10 00:01:00.198 ==> default: -- Feature: acpi 00:01:00.198 ==> default: -- Feature: apic 00:01:00.198 ==> default: -- Feature: pae 00:01:00.198 ==> default: -- Memory: 12288M 00:01:00.198 ==> default: -- Memory Backing: hugepages: 00:01:00.198 ==> default: -- Management MAC: 00:01:00.198 ==> default: -- Loader: 00:01:00.198 ==> default: -- Nvram: 00:01:00.198 ==> default: -- Base box: spdk/ubuntu2204 00:01:00.198 ==> default: -- Storage pool: default 00:01:00.198 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1721562178_d93ec3bcec46eced5dd9.img (20G) 00:01:00.198 ==> default: -- Volume Cache: default 00:01:00.198 ==> default: -- Kernel: 00:01:00.198 ==> default: -- Initrd: 00:01:00.198 ==> default: -- Graphics Type: vnc 00:01:00.198 ==> default: -- Graphics Port: -1 00:01:00.198 ==> default: -- Graphics IP: 127.0.0.1 00:01:00.198 ==> default: -- Graphics Password: Not defined 00:01:00.198 ==> default: -- Video Type: cirrus 00:01:00.198 ==> default: -- Video VRAM: 9216 00:01:00.198 ==> default: -- Sound Type: 00:01:00.198 ==> default: -- Keymap: en-us 00:01:00.198 ==> default: -- TPM Path: 00:01:00.198 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:00.198 ==> default: -- Command line args: 00:01:00.198 ==> default: -> value=-device, 00:01:00.198 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:00.198 ==> default: -> value=-drive, 00:01:00.198 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:01:00.198 ==> default: -> value=-device, 00:01:00.198 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:00.456 ==> default: Creating shared folders metadata... 00:01:00.456 ==> default: Starting domain. 00:01:02.358 ==> default: Waiting for domain to get an IP address... 00:01:14.611 ==> default: Waiting for SSH to become available... 00:01:14.611 ==> default: Configuring and enabling network interfaces... 00:01:19.875 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:24.056 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:28.238 ==> default: Mounting SSHFS shared folder... 00:01:29.174 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output 00:01:29.174 ==> default: Checking Mount.. 00:01:29.741 ==> default: Folder Successfully Mounted! 
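The guest brought up above receives the raw NVMe backing file through the -drive/-device arguments shown in the domain's command line. A minimal standalone sketch of the same wiring, assuming a KVM-capable host with qemu-system-x86_64 installed (image path, device ids, serial and block sizes are taken from the log; the CPU/memory values and the absence of a boot disk are illustrative assumptions, not part of the CI setup):

# Sketch only: reproduces the NVMe namespace wiring used for ex1-nvme.img, not a bootable guest.
qemu-system-x86_64 -enable-kvm -smp 10 -m 12288 -nographic \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096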
00:01:29.741 ==> default: Running provisioner: file... 00:01:30.308 default: ~/.gitconfig => .gitconfig 00:01:30.567 00:01:30.567 SUCCESS! 00:01:30.567 00:01:30.567 cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use. 00:01:30.567 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:30.567 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm. 00:01:30.567 00:01:30.574 [Pipeline] } 00:01:30.591 [Pipeline] // stage 00:01:30.599 [Pipeline] dir 00:01:30.599 Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt 00:01:30.601 [Pipeline] { 00:01:30.615 [Pipeline] catchError 00:01:30.616 [Pipeline] { 00:01:30.630 [Pipeline] sh 00:01:30.907 + vagrant ssh-config --host vagrant 00:01:30.907 + sed -ne /^Host/,$p 00:01:30.907 + tee ssh_conf 00:01:34.186 Host vagrant 00:01:34.186 HostName 192.168.121.100 00:01:34.186 User vagrant 00:01:34.186 Port 22 00:01:34.186 UserKnownHostsFile /dev/null 00:01:34.186 StrictHostKeyChecking no 00:01:34.186 PasswordAuthentication no 00:01:34.186 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204 00:01:34.186 IdentitiesOnly yes 00:01:34.186 LogLevel FATAL 00:01:34.186 ForwardAgent yes 00:01:34.186 ForwardX11 yes 00:01:34.186 00:01:34.198 [Pipeline] withEnv 00:01:34.200 [Pipeline] { 00:01:34.214 [Pipeline] sh 00:01:34.489 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:34.489 source /etc/os-release 00:01:34.489 [[ -e /image.version ]] && img=$(< /image.version) 00:01:34.489 # Minimal, systemd-like check. 00:01:34.489 if [[ -e /.dockerenv ]]; then 00:01:34.489 # Clear garbage from the node's name: 00:01:34.489 # agt-er_autotest_547-896 -> autotest_547-896 00:01:34.489 # $HOSTNAME is the actual container id 00:01:34.489 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:34.489 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:34.489 # We can assume this is a mount from a host where container is running, 00:01:34.489 # so fetch its hostname to easily identify the target swarm worker. 
00:01:34.489 container="$(< /etc/hostname) ($agent)" 00:01:34.489 else 00:01:34.489 # Fallback 00:01:34.489 container=$agent 00:01:34.489 fi 00:01:34.489 fi 00:01:34.489 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:34.489 00:01:34.755 [Pipeline] } 00:01:34.774 [Pipeline] // withEnv 00:01:34.781 [Pipeline] setCustomBuildProperty 00:01:34.794 [Pipeline] stage 00:01:34.796 [Pipeline] { (Tests) 00:01:34.813 [Pipeline] sh 00:01:35.089 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:35.358 [Pipeline] sh 00:01:35.631 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:35.957 [Pipeline] timeout 00:01:35.957 Timeout set to expire in 1 hr 30 min 00:01:35.959 [Pipeline] { 00:01:35.973 [Pipeline] sh 00:01:36.246 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:36.811 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:01:36.826 [Pipeline] sh 00:01:37.103 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:37.374 [Pipeline] sh 00:01:37.653 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:37.925 [Pipeline] sh 00:01:38.199 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo 00:01:38.457 ++ readlink -f spdk_repo 00:01:38.457 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:38.457 + [[ -n /home/vagrant/spdk_repo ]] 00:01:38.457 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:38.457 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:38.457 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:38.457 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:38.457 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:38.457 + [[ ubuntu22-vg-autotest == pkgdep-* ]] 00:01:38.457 + cd /home/vagrant/spdk_repo 00:01:38.457 + source /etc/os-release 00:01:38.457 ++ PRETTY_NAME='Ubuntu 22.04.4 LTS' 00:01:38.457 ++ NAME=Ubuntu 00:01:38.457 ++ VERSION_ID=22.04 00:01:38.457 ++ VERSION='22.04.4 LTS (Jammy Jellyfish)' 00:01:38.457 ++ VERSION_CODENAME=jammy 00:01:38.457 ++ ID=ubuntu 00:01:38.457 ++ ID_LIKE=debian 00:01:38.457 ++ HOME_URL=https://www.ubuntu.com/ 00:01:38.457 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:38.457 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:38.457 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:38.457 ++ UBUNTU_CODENAME=jammy 00:01:38.457 + uname -a 00:01:38.457 Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:38.457 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:38.765 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:01:38.765 Hugepages 00:01:38.765 node hugesize free / total 00:01:38.765 node0 1048576kB 0 / 0 00:01:38.765 node0 2048kB 0 / 0 00:01:38.765 00:01:38.765 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:38.765 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:38.765 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:38.765 + rm -f /tmp/spdk-ld-path 00:01:38.765 + source autorun-spdk.conf 00:01:38.765 ++ SPDK_TEST_UNITTEST=1 00:01:38.765 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.765 ++ SPDK_TEST_NVME=1 00:01:38.765 ++ SPDK_TEST_BLOCKDEV=1 00:01:38.765 ++ SPDK_RUN_ASAN=1 00:01:38.765 ++ SPDK_RUN_UBSAN=1 00:01:38.765 ++ SPDK_TEST_RAID5=1 00:01:38.765 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:38.765 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:38.765 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.765 ++ RUN_NIGHTLY=1 00:01:38.765 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:38.765 + [[ -n '' ]] 00:01:38.765 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:38.765 + for M in /var/spdk/build-*-manifest.txt 00:01:38.765 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:38.765 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.765 + for M in /var/spdk/build-*-manifest.txt 00:01:38.765 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:38.765 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:38.765 ++ uname 00:01:38.765 + [[ Linux == \L\i\n\u\x ]] 00:01:38.765 + sudo dmesg -T 00:01:38.765 + sudo dmesg --clear 00:01:38.765 + dmesg_pid=2301 00:01:38.765 + [[ Ubuntu == FreeBSD ]] 00:01:38.765 + sudo dmesg -Tw 00:01:38.765 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.765 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:38.765 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:38.765 + [[ -x /usr/src/fio-static/fio ]] 00:01:38.765 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:38.765 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:38.765 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:38.765 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:38.765 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:38.765 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:38.765 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:38.765 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:38.765 Test configuration: 00:01:38.765 SPDK_TEST_UNITTEST=1 00:01:38.765 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:38.765 SPDK_TEST_NVME=1 00:01:38.765 SPDK_TEST_BLOCKDEV=1 00:01:38.765 SPDK_RUN_ASAN=1 00:01:38.765 SPDK_RUN_UBSAN=1 00:01:38.765 SPDK_TEST_RAID5=1 00:01:38.765 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:38.765 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:38.765 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:38.765 RUN_NIGHTLY=1 11:43:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:38.765 11:43:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:38.765 11:43:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:38.765 11:43:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:38.765 11:43:37 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:38.765 11:43:37 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:38.765 11:43:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:38.765 11:43:37 -- paths/export.sh@5 -- $ export PATH 00:01:38.765 11:43:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:38.765 11:43:37 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:38.765 11:43:37 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:38.765 11:43:37 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721562217.XXXXXX 00:01:39.023 11:43:37 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721562217.Tyesvl 00:01:39.023 11:43:37 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:39.023 11:43:37 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:01:39.023 11:43:37 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:39.023 11:43:37 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:01:39.023 
11:43:37 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:39.023 11:43:37 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:39.023 11:43:37 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:39.023 11:43:37 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:39.023 11:43:37 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.023 11:43:37 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:01:39.023 11:43:37 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:39.023 11:43:37 -- pm/common@17 -- $ local monitor 00:01:39.023 11:43:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.023 11:43:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:39.023 11:43:37 -- pm/common@21 -- $ date +%s 00:01:39.023 11:43:37 -- pm/common@25 -- $ sleep 1 00:01:39.023 11:43:37 -- pm/common@21 -- $ date +%s 00:01:39.023 11:43:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721562217 00:01:39.023 11:43:37 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721562217 00:01:39.023 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721562217_collect-vmstat.pm.log 00:01:39.023 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721562217_collect-cpu-load.pm.log 00:01:39.979 11:43:38 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:39.979 11:43:38 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:39.979 11:43:38 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:39.979 11:43:38 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:39.979 11:43:38 -- spdk/autobuild.sh@16 -- $ date -u 00:01:39.979 Sun Jul 21 11:43:38 UTC 2024 00:01:39.979 11:43:38 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:39.979 v24.05-13-g5fa2f5086 00:01:39.979 11:43:38 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:39.979 11:43:38 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:39.979 11:43:38 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:39.979 11:43:38 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:39.979 11:43:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.979 ************************************ 00:01:39.979 START TEST asan 00:01:39.979 ************************************ 00:01:39.979 using asan 00:01:39.979 11:43:38 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:01:39.979 00:01:39.979 real 0m0.000s 00:01:39.979 user 0m0.000s 00:01:39.979 sys 0m0.000s 00:01:39.979 11:43:38 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:39.979 11:43:38 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.979 ************************************ 00:01:39.979 END TEST asan 00:01:39.979 ************************************ 00:01:39.979 11:43:38 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 
00:01:39.979 11:43:38 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:39.979 11:43:38 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:39.979 11:43:38 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:39.979 11:43:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.980 ************************************ 00:01:39.980 START TEST ubsan 00:01:39.980 ************************************ 00:01:39.980 using ubsan 00:01:39.980 11:43:38 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:39.980 00:01:39.980 real 0m0.000s 00:01:39.980 user 0m0.000s 00:01:39.980 sys 0m0.000s 00:01:39.980 11:43:38 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:39.980 ************************************ 00:01:39.980 11:43:38 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:39.980 END TEST ubsan 00:01:39.980 ************************************ 00:01:39.980 11:43:38 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:39.980 11:43:38 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:39.980 11:43:38 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:39.980 11:43:38 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:01:39.980 11:43:38 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:39.980 11:43:38 -- common/autotest_common.sh@10 -- $ set +x 00:01:39.980 ************************************ 00:01:39.980 START TEST build_native_dpdk 00:01:39.980 ************************************ 00:01:39.980 11:43:38 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=11 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=11 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:01:39.980 eeb0605f11 version: 23.11.0 00:01:39.980 238778122a doc: update release notes for 23.11 00:01:39.980 46aa6b3cfc doc: fix description of RSS features 00:01:39.980 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:39.980 7e421ae345 devtools: support skipping forbid rule check 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:39.980 11:43:38 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:39.980 11:43:38 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:40.238 11:43:38 
build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:40.238 11:43:38 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:40.238 11:43:38 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:40.238 patching file config/rte_config.h 00:01:40.238 Hunk #1 succeeded at 60 (offset 1 line). 00:01:40.238 11:43:38 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:40.238 11:43:38 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:01:40.238 11:43:38 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:40.238 11:43:38 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:40.238 11:43:38 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:44.422 The Meson build system 00:01:44.422 Version: 1.4.0 00:01:44.422 Source dir: /home/vagrant/spdk_repo/dpdk 00:01:44.422 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:01:44.422 Build type: native build 00:01:44.422 Program cat found: YES (/usr/bin/cat) 00:01:44.422 Project name: DPDK 00:01:44.422 Project version: 23.11.0 00:01:44.422 C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0") 00:01:44.422 C linker for the host machine: gcc ld.bfd 2.38 00:01:44.422 Host machine cpu family: x86_64 00:01:44.422 Host machine cpu: x86_64 00:01:44.422 Message: ## Building in Developer Mode ## 00:01:44.422 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:44.422 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:01:44.422 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:01:44.422 Program python3 found: YES (/usr/bin/python3) 00:01:44.422 Program cat found: YES (/usr/bin/cat) 00:01:44.422 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:44.422 Compiler for C supports arguments -march=native: YES 00:01:44.422 Checking for size of "void *" : 8 00:01:44.422 Checking for size of "void *" : 8 (cached) 00:01:44.422 Library m found: YES 00:01:44.422 Library numa found: YES 00:01:44.422 Has header "numaif.h" : YES 00:01:44.422 Library fdt found: NO 00:01:44.422 Library execinfo found: NO 00:01:44.422 Has header "execinfo.h" : YES 00:01:44.422 Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2 00:01:44.422 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:44.422 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:44.422 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:44.422 Run-time dependency openssl found: YES 3.0.2 00:01:44.422 Run-time dependency libpcap found: NO (tried pkgconfig) 00:01:44.422 Library pcap found: NO 00:01:44.422 Compiler for C supports arguments -Wcast-qual: YES 00:01:44.422 Compiler for C supports arguments -Wdeprecated: YES 00:01:44.422 Compiler for C supports arguments -Wformat: YES 00:01:44.422 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:44.422 Compiler for C supports arguments -Wformat-security: YES 00:01:44.422 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:44.422 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:44.422 Compiler for C supports arguments -Wnested-externs: YES 00:01:44.422 Compiler for C supports arguments -Wold-style-definition: YES 00:01:44.422 Compiler for C supports arguments -Wpointer-arith: YES 00:01:44.422 Compiler for C supports arguments -Wsign-compare: YES 00:01:44.422 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:44.422 Compiler for C supports arguments -Wundef: YES 00:01:44.422 Compiler for C supports arguments -Wwrite-strings: YES 00:01:44.422 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:44.422 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:44.422 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:44.422 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:44.422 Program objdump found: YES (/usr/bin/objdump) 00:01:44.422 Compiler for C supports arguments -mavx512f: YES 00:01:44.422 Checking if "AVX512 checking" compiles: YES 00:01:44.422 Fetching value of define "__SSE4_2__" : 1 00:01:44.422 Fetching value of define "__AES__" : 1 00:01:44.422 Fetching value of define "__AVX__" : 1 00:01:44.422 Fetching value of define "__AVX2__" : 1 00:01:44.422 Fetching value of define "__AVX512BW__" : (undefined) 00:01:44.422 Fetching value of define "__AVX512CD__" : (undefined) 00:01:44.422 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:44.422 Fetching value of define "__AVX512F__" : (undefined) 00:01:44.422 Fetching value of define "__AVX512VL__" : (undefined) 00:01:44.422 Fetching value of define "__PCLMUL__" : 1 00:01:44.422 Fetching value of define "__RDRND__" : 1 00:01:44.422 Fetching value of define "__RDSEED__" : 1 00:01:44.422 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:44.422 Fetching value of define "__znver1__" : (undefined) 00:01:44.422 Fetching value of define "__znver2__" : (undefined) 00:01:44.422 Fetching value of define "__znver3__" : (undefined) 00:01:44.422 Fetching value of define "__znver4__" : (undefined) 00:01:44.422 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:44.422 Message: lib/log: Defining dependency "log" 00:01:44.422 Message: lib/kvargs: Defining dependency "kvargs" 00:01:44.422 Message: 
lib/telemetry: Defining dependency "telemetry" 00:01:44.422 Checking for function "getentropy" : NO 00:01:44.422 Message: lib/eal: Defining dependency "eal" 00:01:44.422 Message: lib/ring: Defining dependency "ring" 00:01:44.422 Message: lib/rcu: Defining dependency "rcu" 00:01:44.422 Message: lib/mempool: Defining dependency "mempool" 00:01:44.422 Message: lib/mbuf: Defining dependency "mbuf" 00:01:44.422 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:44.422 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.422 Compiler for C supports arguments -mpclmul: YES 00:01:44.422 Compiler for C supports arguments -maes: YES 00:01:44.422 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:44.422 Compiler for C supports arguments -mavx512bw: YES 00:01:44.422 Compiler for C supports arguments -mavx512dq: YES 00:01:44.422 Compiler for C supports arguments -mavx512vl: YES 00:01:44.422 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:44.422 Compiler for C supports arguments -mavx2: YES 00:01:44.422 Compiler for C supports arguments -mavx: YES 00:01:44.422 Message: lib/net: Defining dependency "net" 00:01:44.422 Message: lib/meter: Defining dependency "meter" 00:01:44.422 Message: lib/ethdev: Defining dependency "ethdev" 00:01:44.422 Message: lib/pci: Defining dependency "pci" 00:01:44.422 Message: lib/cmdline: Defining dependency "cmdline" 00:01:44.422 Message: lib/metrics: Defining dependency "metrics" 00:01:44.422 Message: lib/hash: Defining dependency "hash" 00:01:44.422 Message: lib/timer: Defining dependency "timer" 00:01:44.422 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.422 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:44.422 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:44.422 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:44.422 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:44.422 Message: lib/acl: Defining dependency "acl" 00:01:44.422 Message: lib/bbdev: Defining dependency "bbdev" 00:01:44.422 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:44.422 Run-time dependency libelf found: YES 0.186 00:01:44.422 lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled 00:01:44.422 Message: lib/bpf: Defining dependency "bpf" 00:01:44.422 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:44.422 Message: lib/compressdev: Defining dependency "compressdev" 00:01:44.422 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:44.422 Message: lib/distributor: Defining dependency "distributor" 00:01:44.422 Message: lib/dmadev: Defining dependency "dmadev" 00:01:44.422 Message: lib/efd: Defining dependency "efd" 00:01:44.422 Message: lib/eventdev: Defining dependency "eventdev" 00:01:44.422 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:44.422 Message: lib/gpudev: Defining dependency "gpudev" 00:01:44.422 Message: lib/gro: Defining dependency "gro" 00:01:44.422 Message: lib/gso: Defining dependency "gso" 00:01:44.422 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:44.422 Message: lib/jobstats: Defining dependency "jobstats" 00:01:44.422 Message: lib/latencystats: Defining dependency "latencystats" 00:01:44.422 Message: lib/lpm: Defining dependency "lpm" 00:01:44.422 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.422 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:44.422 Fetching value of 
define "__AVX512IFMA__" : (undefined) 00:01:44.422 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:44.422 Message: lib/member: Defining dependency "member" 00:01:44.422 Message: lib/pcapng: Defining dependency "pcapng" 00:01:44.422 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:44.422 Message: lib/power: Defining dependency "power" 00:01:44.422 Message: lib/rawdev: Defining dependency "rawdev" 00:01:44.422 Message: lib/regexdev: Defining dependency "regexdev" 00:01:44.422 Message: lib/mldev: Defining dependency "mldev" 00:01:44.422 Message: lib/rib: Defining dependency "rib" 00:01:44.422 Message: lib/reorder: Defining dependency "reorder" 00:01:44.422 Message: lib/sched: Defining dependency "sched" 00:01:44.422 Message: lib/security: Defining dependency "security" 00:01:44.422 Message: lib/stack: Defining dependency "stack" 00:01:44.422 Has header "linux/userfaultfd.h" : YES 00:01:44.422 Has header "linux/vduse.h" : YES 00:01:44.422 Message: lib/vhost: Defining dependency "vhost" 00:01:44.422 Message: lib/ipsec: Defining dependency "ipsec" 00:01:44.422 Message: lib/pdcp: Defining dependency "pdcp" 00:01:44.422 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:44.422 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:44.422 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:44.422 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:44.422 Message: lib/fib: Defining dependency "fib" 00:01:44.422 Message: lib/port: Defining dependency "port" 00:01:44.422 Message: lib/pdump: Defining dependency "pdump" 00:01:44.422 Message: lib/table: Defining dependency "table" 00:01:44.422 Message: lib/pipeline: Defining dependency "pipeline" 00:01:44.422 Message: lib/graph: Defining dependency "graph" 00:01:44.422 Message: lib/node: Defining dependency "node" 00:01:46.322 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:46.322 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:46.322 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:46.322 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.322 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:46.322 Compiler for C supports arguments -Wno-unused-value: YES 00:01:46.322 Compiler for C supports arguments -Wno-format: YES 00:01:46.322 Compiler for C supports arguments -Wno-format-security: YES 00:01:46.322 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:46.322 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:46.322 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:46.322 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:46.322 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:46.322 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.322 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:46.322 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:46.322 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:46.322 Has header "sys/epoll.h" : YES 00:01:46.322 Program doxygen found: YES (/usr/bin/doxygen) 00:01:46.322 Configuring doxy-api-html.conf using configuration 00:01:46.322 Configuring doxy-api-man.conf using configuration 00:01:46.322 Program mandb found: YES (/usr/bin/mandb) 00:01:46.322 Program sphinx-build found: NO 00:01:46.322 Configuring rte_build_config.h using configuration 00:01:46.322 Message: 00:01:46.322 
================= 00:01:46.322 Applications Enabled 00:01:46.322 ================= 00:01:46.322 00:01:46.322 apps: 00:01:46.322 graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:46.322 test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, test-pmd, 00:01:46.322 test-regex, test-sad, test-security-perf, 00:01:46.322 00:01:46.322 Message: 00:01:46.322 ================= 00:01:46.322 Libraries Enabled 00:01:46.322 ================= 00:01:46.322 00:01:46.322 libs: 00:01:46.322 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:46.322 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:46.322 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:46.322 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:46.322 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:46.322 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:46.322 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:46.322 00:01:46.322 00:01:46.322 Message: 00:01:46.322 =============== 00:01:46.322 Drivers Enabled 00:01:46.322 =============== 00:01:46.322 00:01:46.322 common: 00:01:46.322 00:01:46.322 bus: 00:01:46.322 pci, vdev, 00:01:46.322 mempool: 00:01:46.322 ring, 00:01:46.322 dma: 00:01:46.322 00:01:46.322 net: 00:01:46.322 i40e, 00:01:46.322 raw: 00:01:46.322 00:01:46.322 crypto: 00:01:46.322 00:01:46.322 compress: 00:01:46.322 00:01:46.322 regex: 00:01:46.322 00:01:46.322 ml: 00:01:46.322 00:01:46.322 vdpa: 00:01:46.322 00:01:46.322 event: 00:01:46.322 00:01:46.322 baseband: 00:01:46.322 00:01:46.322 gpu: 00:01:46.322 00:01:46.322 00:01:46.322 Message: 00:01:46.322 ================= 00:01:46.322 Content Skipped 00:01:46.322 ================= 00:01:46.322 00:01:46.322 apps: 00:01:46.322 dumpcap: missing dependency, "libpcap" 00:01:46.322 00:01:46.322 libs: 00:01:46.322 00:01:46.322 drivers: 00:01:46.322 common/cpt: not in enabled drivers build config 00:01:46.322 common/dpaax: not in enabled drivers build config 00:01:46.322 common/iavf: not in enabled drivers build config 00:01:46.322 common/idpf: not in enabled drivers build config 00:01:46.322 common/mvep: not in enabled drivers build config 00:01:46.322 common/octeontx: not in enabled drivers build config 00:01:46.322 bus/auxiliary: not in enabled drivers build config 00:01:46.322 bus/cdx: not in enabled drivers build config 00:01:46.322 bus/dpaa: not in enabled drivers build config 00:01:46.322 bus/fslmc: not in enabled drivers build config 00:01:46.322 bus/ifpga: not in enabled drivers build config 00:01:46.322 bus/platform: not in enabled drivers build config 00:01:46.322 bus/vmbus: not in enabled drivers build config 00:01:46.322 common/cnxk: not in enabled drivers build config 00:01:46.322 common/mlx5: not in enabled drivers build config 00:01:46.322 common/nfp: not in enabled drivers build config 00:01:46.322 common/qat: not in enabled drivers build config 00:01:46.322 common/sfc_efx: not in enabled drivers build config 00:01:46.322 mempool/bucket: not in enabled drivers build config 00:01:46.322 mempool/cnxk: not in enabled drivers build config 00:01:46.322 mempool/dpaa: not in enabled drivers build config 00:01:46.322 mempool/dpaa2: not in enabled drivers build config 00:01:46.322 mempool/octeontx: not in enabled drivers build config 00:01:46.322 mempool/stack: not in enabled drivers build config 00:01:46.322 dma/cnxk: not in enabled drivers build config 
00:01:46.322 dma/dpaa: not in enabled drivers build config 00:01:46.322 dma/dpaa2: not in enabled drivers build config 00:01:46.322 dma/hisilicon: not in enabled drivers build config 00:01:46.322 dma/idxd: not in enabled drivers build config 00:01:46.322 dma/ioat: not in enabled drivers build config 00:01:46.322 dma/skeleton: not in enabled drivers build config 00:01:46.322 net/af_packet: not in enabled drivers build config 00:01:46.322 net/af_xdp: not in enabled drivers build config 00:01:46.323 net/ark: not in enabled drivers build config 00:01:46.323 net/atlantic: not in enabled drivers build config 00:01:46.323 net/avp: not in enabled drivers build config 00:01:46.323 net/axgbe: not in enabled drivers build config 00:01:46.323 net/bnx2x: not in enabled drivers build config 00:01:46.323 net/bnxt: not in enabled drivers build config 00:01:46.323 net/bonding: not in enabled drivers build config 00:01:46.323 net/cnxk: not in enabled drivers build config 00:01:46.323 net/cpfl: not in enabled drivers build config 00:01:46.323 net/cxgbe: not in enabled drivers build config 00:01:46.323 net/dpaa: not in enabled drivers build config 00:01:46.323 net/dpaa2: not in enabled drivers build config 00:01:46.323 net/e1000: not in enabled drivers build config 00:01:46.323 net/ena: not in enabled drivers build config 00:01:46.323 net/enetc: not in enabled drivers build config 00:01:46.323 net/enetfec: not in enabled drivers build config 00:01:46.323 net/enic: not in enabled drivers build config 00:01:46.323 net/failsafe: not in enabled drivers build config 00:01:46.323 net/fm10k: not in enabled drivers build config 00:01:46.323 net/gve: not in enabled drivers build config 00:01:46.323 net/hinic: not in enabled drivers build config 00:01:46.323 net/hns3: not in enabled drivers build config 00:01:46.323 net/iavf: not in enabled drivers build config 00:01:46.323 net/ice: not in enabled drivers build config 00:01:46.323 net/idpf: not in enabled drivers build config 00:01:46.323 net/igc: not in enabled drivers build config 00:01:46.323 net/ionic: not in enabled drivers build config 00:01:46.323 net/ipn3ke: not in enabled drivers build config 00:01:46.323 net/ixgbe: not in enabled drivers build config 00:01:46.323 net/mana: not in enabled drivers build config 00:01:46.323 net/memif: not in enabled drivers build config 00:01:46.323 net/mlx4: not in enabled drivers build config 00:01:46.323 net/mlx5: not in enabled drivers build config 00:01:46.323 net/mvneta: not in enabled drivers build config 00:01:46.323 net/mvpp2: not in enabled drivers build config 00:01:46.323 net/netvsc: not in enabled drivers build config 00:01:46.323 net/nfb: not in enabled drivers build config 00:01:46.323 net/nfp: not in enabled drivers build config 00:01:46.323 net/ngbe: not in enabled drivers build config 00:01:46.323 net/null: not in enabled drivers build config 00:01:46.323 net/octeontx: not in enabled drivers build config 00:01:46.323 net/octeon_ep: not in enabled drivers build config 00:01:46.323 net/pcap: not in enabled drivers build config 00:01:46.323 net/pfe: not in enabled drivers build config 00:01:46.323 net/qede: not in enabled drivers build config 00:01:46.323 net/ring: not in enabled drivers build config 00:01:46.323 net/sfc: not in enabled drivers build config 00:01:46.323 net/softnic: not in enabled drivers build config 00:01:46.323 net/tap: not in enabled drivers build config 00:01:46.323 net/thunderx: not in enabled drivers build config 00:01:46.323 net/txgbe: not in enabled drivers build config 00:01:46.323 
net/vdev_netvsc: not in enabled drivers build config 00:01:46.323 net/vhost: not in enabled drivers build config 00:01:46.323 net/virtio: not in enabled drivers build config 00:01:46.323 net/vmxnet3: not in enabled drivers build config 00:01:46.323 raw/cnxk_bphy: not in enabled drivers build config 00:01:46.323 raw/cnxk_gpio: not in enabled drivers build config 00:01:46.323 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:46.323 raw/ifpga: not in enabled drivers build config 00:01:46.323 raw/ntb: not in enabled drivers build config 00:01:46.323 raw/skeleton: not in enabled drivers build config 00:01:46.323 crypto/armv8: not in enabled drivers build config 00:01:46.323 crypto/bcmfs: not in enabled drivers build config 00:01:46.323 crypto/caam_jr: not in enabled drivers build config 00:01:46.323 crypto/ccp: not in enabled drivers build config 00:01:46.323 crypto/cnxk: not in enabled drivers build config 00:01:46.323 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.323 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.323 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.323 crypto/mlx5: not in enabled drivers build config 00:01:46.323 crypto/mvsam: not in enabled drivers build config 00:01:46.323 crypto/nitrox: not in enabled drivers build config 00:01:46.323 crypto/null: not in enabled drivers build config 00:01:46.323 crypto/octeontx: not in enabled drivers build config 00:01:46.323 crypto/openssl: not in enabled drivers build config 00:01:46.323 crypto/scheduler: not in enabled drivers build config 00:01:46.323 crypto/uadk: not in enabled drivers build config 00:01:46.323 crypto/virtio: not in enabled drivers build config 00:01:46.323 compress/isal: not in enabled drivers build config 00:01:46.323 compress/mlx5: not in enabled drivers build config 00:01:46.323 compress/octeontx: not in enabled drivers build config 00:01:46.323 compress/zlib: not in enabled drivers build config 00:01:46.323 regex/mlx5: not in enabled drivers build config 00:01:46.323 regex/cn9k: not in enabled drivers build config 00:01:46.323 ml/cnxk: not in enabled drivers build config 00:01:46.323 vdpa/ifc: not in enabled drivers build config 00:01:46.323 vdpa/mlx5: not in enabled drivers build config 00:01:46.323 vdpa/nfp: not in enabled drivers build config 00:01:46.323 vdpa/sfc: not in enabled drivers build config 00:01:46.323 event/cnxk: not in enabled drivers build config 00:01:46.323 event/dlb2: not in enabled drivers build config 00:01:46.323 event/dpaa: not in enabled drivers build config 00:01:46.323 event/dpaa2: not in enabled drivers build config 00:01:46.323 event/dsw: not in enabled drivers build config 00:01:46.323 event/opdl: not in enabled drivers build config 00:01:46.323 event/skeleton: not in enabled drivers build config 00:01:46.323 event/sw: not in enabled drivers build config 00:01:46.323 event/octeontx: not in enabled drivers build config 00:01:46.323 baseband/acc: not in enabled drivers build config 00:01:46.323 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:46.323 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:46.323 baseband/la12xx: not in enabled drivers build config 00:01:46.323 baseband/null: not in enabled drivers build config 00:01:46.323 baseband/turbo_sw: not in enabled drivers build config 00:01:46.323 gpu/cuda: not in enabled drivers build config 00:01:46.323 00:01:46.323 00:01:46.323 Build targets in project: 219 00:01:46.323 00:01:46.323 DPDK 23.11.0 00:01:46.323 00:01:46.323 User defined options 
00:01:46.323 libdir : lib
00:01:46.323 prefix : /home/vagrant/spdk_repo/dpdk/build
00:01:46.323 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:46.323 c_link_args :
00:01:46.323 enable_docs : false
00:01:46.323 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:46.323 enable_kmods : false
00:01:46.323 machine : native
00:01:46.323 tests : false
00:01:46.323 
00:01:46.323 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:46.323 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:01:46.581 11:43:44 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:01:46.581 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:01:46.581 [1/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:46.581 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:46.581 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:46.581 [4/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:46.839 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:46.839 [6/707] Linking static target lib/librte_kvargs.a
00:01:46.839 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:46.839 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:46.839 [9/707] Linking static target lib/librte_log.a
00:01:46.839 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:46.839 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.097 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:47.097 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:47.097 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:47.097 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:47.355 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:47.355 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.355 [18/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:47.355 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:47.355 [20/707] Linking target lib/librte_log.so.24.0
00:01:47.355 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:47.355 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:47.355 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:47.613 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:47.613 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:47.613 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:47.613 [27/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:47.613 [28/707] Linking static target lib/librte_telemetry.a
00:01:47.613 [29/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:47.613 [30/707] Linking target lib/librte_kvargs.so.24.0
00:01:47.871 [31/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
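Note: the WARNING above is emitted because the configure step invoked plain `meson [options]` rather than the explicit `meson setup` subcommand. Below is a minimal sketch of the equivalent, non-deprecated invocation, reconstructed from the user-defined options listed above; it assumes it is run from /home/vagrant/spdk_repo/dpdk, and the paths and option values are simply those of this particular run.

  # Reconstructed configure step using the explicit `meson setup` form;
  # option values are copied from the "User defined options" summary above.
  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false

  # The effective option set of an existing build directory can be reviewed with:
  meson configure build-tmp

Separately, the "Content Skipped" summary shows the dumpcap app was dropped because libpcap was not found; installing a libpcap development package (for example libpcap-dev on Ubuntu) before reconfiguring would typically bring it back into the build.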
00:01:47.871 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:47.871 [33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.871 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.871 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:47.871 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:47.871 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.130 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.130 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.130 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.130 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:48.130 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.130 [43/707] Linking target lib/librte_telemetry.so.24.0 00:01:48.130 [44/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:48.387 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:48.387 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:48.387 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:48.387 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.644 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.644 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.644 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.644 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.644 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:48.644 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.901 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.901 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:48.901 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.901 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.901 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:48.901 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:48.901 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.901 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:48.901 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:48.901 [64/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:49.158 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:49.158 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:49.158 [67/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:49.158 [68/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:49.415 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:49.415 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:49.415 [71/707] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:49.415 [72/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:49.415 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:49.415 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:49.415 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:49.415 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:49.415 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:49.672 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:49.672 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:49.672 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:49.929 [81/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:49.929 [82/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:49.929 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:49.929 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:49.929 [85/707] Linking static target lib/librte_ring.a 00:01:50.186 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:50.186 [87/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:50.186 [88/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.186 [89/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:50.186 [90/707] Linking static target lib/librte_eal.a 00:01:50.186 [91/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:50.186 [92/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:50.445 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:50.445 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:50.445 [95/707] Linking static target lib/librte_mempool.a 00:01:50.445 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:50.445 [97/707] Linking static target lib/librte_rcu.a 00:01:50.702 [98/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:50.702 [99/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:50.702 [100/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:50.702 [101/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:50.702 [102/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:50.702 [103/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:50.702 [104/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.960 [105/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:50.960 [106/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:50.960 [107/707] Linking static target lib/librte_net.a 00:01:50.960 [108/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:50.960 [109/707] Linking static target lib/librte_meter.a 00:01:50.960 [110/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:50.960 [111/707] Linking static target lib/librte_mbuf.a 00:01:51.217 [112/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.217 [113/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:51.217 [114/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.217 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.217 [116/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.217 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.475 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.733 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.733 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.733 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:52.299 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:52.299 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.299 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:52.299 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.299 [126/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:52.299 [127/707] Linking static target lib/librte_pci.a 00:01:52.299 [128/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:52.299 [129/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:52.299 [130/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:52.299 [131/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:52.299 [132/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.556 [133/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:52.556 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:52.556 [135/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.556 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:52.556 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:52.556 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:52.556 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:52.556 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:52.814 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.814 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:52.814 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.814 [144/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.814 [145/707] Linking static target lib/librte_cmdline.a 00:01:53.072 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:53.072 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:53.072 [148/707] Linking static target lib/librte_metrics.a 00:01:53.072 [149/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:53.072 [150/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:53.330 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.638 [152/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:53.638 [153/707] 
Linking static target lib/librte_timer.a 00:01:53.638 [154/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.638 [155/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.895 [156/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.895 [157/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:54.153 [158/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:54.153 [159/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:54.153 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:54.411 [161/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:54.669 [162/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:54.669 [163/707] Linking static target lib/librte_bitratestats.a 00:01:54.669 [164/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:54.669 [165/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:54.927 [166/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.927 [167/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:54.927 [168/707] Linking static target lib/librte_bbdev.a 00:01:54.927 [169/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:54.927 [170/707] Linking static target lib/librte_hash.a 00:01:55.184 [171/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:55.184 [172/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:55.184 [173/707] Linking static target lib/acl/libavx2_tmp.a 00:01:55.442 [174/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:55.442 [175/707] Linking static target lib/librte_ethdev.a 00:01:55.442 [176/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:55.700 [177/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.700 [178/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.700 [179/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:55.700 [180/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:55.957 [181/707] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:55.957 [182/707] Linking static target lib/acl/libavx512_tmp.a 00:01:55.957 [183/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:55.957 [184/707] Linking static target lib/librte_acl.a 00:01:55.957 [185/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:55.957 [186/707] Linking static target lib/librte_cfgfile.a 00:01:55.957 [187/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:56.215 [188/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:56.215 [189/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.215 [190/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:56.215 [191/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:56.215 [192/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.472 [193/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.472 [194/707] Linking static target lib/librte_compressdev.a 00:01:56.472 [195/707] Compiling C 
object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:56.472 [196/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:56.472 [197/707] Linking static target lib/librte_bpf.a 00:01:56.730 [198/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:56.730 [199/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:56.730 [200/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.987 [201/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:56.987 [202/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.987 [203/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:56.987 [204/707] Linking static target lib/librte_distributor.a 00:01:57.245 [205/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.245 [206/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.245 [207/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:57.245 [208/707] Linking static target lib/librte_dmadev.a 00:01:57.245 [209/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.502 [210/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.502 [211/707] Linking target lib/librte_eal.so.24.0 00:01:57.758 [212/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.759 [213/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:57.759 [214/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:57.759 [215/707] Linking target lib/librte_ring.so.24.0 00:01:57.759 [216/707] Linking target lib/librte_meter.so.24.0 00:01:57.759 [217/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:57.759 [218/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:57.759 [219/707] Linking target lib/librte_rcu.so.24.0 00:01:57.759 [220/707] Linking target lib/librte_mempool.so.24.0 00:01:58.016 [221/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:58.016 [222/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:58.016 [223/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:58.016 [224/707] Linking target lib/librte_pci.so.24.0 00:01:58.016 [225/707] Linking target lib/librte_mbuf.so.24.0 00:01:58.016 [226/707] Linking target lib/librte_timer.so.24.0 00:01:58.016 [227/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:58.016 [228/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:58.273 [229/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:58.273 [230/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:58.273 [231/707] Linking target lib/librte_net.so.24.0 00:01:58.273 [232/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:58.273 [233/707] Linking target lib/librte_acl.so.24.0 00:01:58.273 [234/707] Linking target lib/librte_bbdev.so.24.0 00:01:58.273 [235/707] Linking target lib/librte_cfgfile.so.24.0 00:01:58.273 [236/707] 
Linking target lib/librte_compressdev.so.24.0 00:01:58.273 [237/707] Linking static target lib/librte_efd.a 00:01:58.273 [238/707] Linking target lib/librte_dmadev.so.24.0 00:01:58.273 [239/707] Linking target lib/librte_distributor.so.24.0 00:01:58.273 [240/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:58.273 [241/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:58.273 [242/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:58.273 [243/707] Linking target lib/librte_cmdline.so.24.0 00:01:58.273 [244/707] Linking static target lib/librte_cryptodev.a 00:01:58.273 [245/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:58.273 [246/707] Linking target lib/librte_hash.so.24.0 00:01:58.273 [247/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:58.531 [248/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:58.531 [249/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.531 [250/707] Linking target lib/librte_efd.so.24.0 00:01:58.788 [251/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:59.046 [252/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:59.046 [253/707] Linking static target lib/librte_dispatcher.a 00:01:59.046 [254/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:59.046 [255/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:59.046 [256/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:59.046 [257/707] Linking static target lib/librte_gpudev.a 00:01:59.304 [258/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:59.304 [259/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.562 [260/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:59.562 [261/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.562 [262/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:59.562 [263/707] Linking target lib/librte_cryptodev.so.24.0 00:01:59.820 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:59.820 [265/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:59.820 [266/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.820 [267/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:59.820 [268/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:59.820 [269/707] Linking target lib/librte_gpudev.so.24.0 00:02:00.077 [270/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:00.077 [271/707] Linking static target lib/librte_gro.a 00:02:00.077 [272/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:00.077 [273/707] Linking static target lib/librte_eventdev.a 00:02:00.077 [274/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:00.077 [275/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:00.077 [276/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.335 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 
00:02:00.335 [278/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:00.335 [279/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:00.335 [280/707] Linking static target lib/librte_gso.a 00:02:00.593 [281/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.593 [282/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.593 [283/707] Linking target lib/librte_ethdev.so.24.0 00:02:00.593 [284/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:00.593 [285/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:00.593 [286/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:00.593 [287/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:00.593 [288/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:00.593 [289/707] Linking static target lib/librte_jobstats.a 00:02:00.851 [290/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:00.851 [291/707] Linking target lib/librte_metrics.so.24.0 00:02:00.851 [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:00.851 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:00.851 [294/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:00.851 [295/707] Linking target lib/librte_bpf.so.24.0 00:02:00.851 [296/707] Linking target lib/librte_gro.so.24.0 00:02:00.851 [297/707] Linking target lib/librte_bitratestats.so.24.0 00:02:00.851 [298/707] Linking static target lib/librte_ip_frag.a 00:02:00.851 [299/707] Linking target lib/librte_gso.so.24.0 00:02:01.109 [300/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:01.109 [301/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.109 [302/707] Linking target lib/librte_jobstats.so.24.0 00:02:01.109 [303/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:01.109 [304/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:01.109 [305/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:01.109 [306/707] Linking static target lib/librte_latencystats.a 00:02:01.109 [307/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:01.109 [308/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.367 [309/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:01.367 [310/707] Linking target lib/librte_ip_frag.so.24.0 00:02:01.367 [311/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:01.367 [312/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:01.367 [313/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.367 [314/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:01.367 [315/707] Linking target lib/librte_latencystats.so.24.0 00:02:01.626 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:01.626 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:01.626 [318/707] Linking static target lib/librte_lpm.a 00:02:01.884 [319/707] Compiling C 
object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:01.884 [320/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:01.884 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:01.884 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:01.884 [323/707] Linking static target lib/librte_pcapng.a 00:02:01.884 [324/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:02.142 [325/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:02.142 [326/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.142 [327/707] Linking target lib/librte_lpm.so.24.0 00:02:02.142 [328/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.142 [329/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:02.142 [330/707] Linking target lib/librte_pcapng.so.24.0 00:02:02.142 [331/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:02.401 [332/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:02.401 [333/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:02.401 [334/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:02.401 [335/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:02.661 [336/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:02.661 [337/707] Linking static target lib/librte_power.a 00:02:02.661 [338/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:02.661 [339/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:02.661 [340/707] Linking static target lib/librte_regexdev.a 00:02:02.661 [341/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:02.661 [342/707] Linking static target lib/librte_member.a 00:02:02.661 [343/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:02.661 [344/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:02.920 [345/707] Linking static target lib/librte_rawdev.a 00:02:02.920 [346/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.920 [347/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:02.920 [348/707] Linking target lib/librte_eventdev.so.24.0 00:02:02.920 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:02.920 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:02.920 [351/707] Linking static target lib/librte_mldev.a 00:02:02.920 [352/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:03.178 [353/707] Linking target lib/librte_dispatcher.so.24.0 00:02:03.178 [354/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.178 [355/707] Linking target lib/librte_member.so.24.0 00:02:03.178 [356/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.178 [357/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:03.178 [358/707] Linking target lib/librte_rawdev.so.24.0 00:02:03.178 [359/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.436 [360/707] Linking 
target lib/librte_power.so.24.0 00:02:03.436 [361/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:03.436 [362/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:03.436 [363/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.436 [364/707] Linking target lib/librte_regexdev.so.24.0 00:02:03.436 [365/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:03.436 [366/707] Linking static target lib/librte_reorder.a 00:02:03.693 [367/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:03.693 [368/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:03.693 [369/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:03.693 [370/707] Linking static target lib/librte_rib.a 00:02:03.693 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:03.951 [372/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:03.951 [373/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.951 [374/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:03.951 [375/707] Linking target lib/librte_reorder.so.24.0 00:02:03.951 [376/707] Linking static target lib/librte_stack.a 00:02:03.951 [377/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:03.951 [378/707] Linking static target lib/librte_security.a 00:02:03.951 [379/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:03.951 [380/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.212 [381/707] Linking target lib/librte_stack.so.24.0 00:02:04.212 [382/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.212 [383/707] Linking target lib/librte_rib.so.24.0 00:02:04.212 [384/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:04.212 [385/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:04.212 [386/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.470 [387/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:04.470 [388/707] Linking target lib/librte_security.so.24.0 00:02:04.470 [389/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.470 [390/707] Linking target lib/librte_mldev.so.24.0 00:02:04.470 [391/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:04.470 [392/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:04.470 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:04.470 [394/707] Linking static target lib/librte_sched.a 00:02:05.034 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:05.034 [396/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.034 [397/707] Linking target lib/librte_sched.so.24.0 00:02:05.034 [398/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:05.034 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:05.034 [400/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:05.290 [401/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:05.290 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 
00:02:05.547 [403/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:05.547 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:05.547 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:05.805 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:05.805 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:06.061 [408/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:06.061 [409/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:06.061 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:06.061 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:06.061 [412/707] Linking static target lib/librte_ipsec.a 00:02:06.061 [413/707] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:06.061 [414/707] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:06.319 [415/707] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:06.319 [416/707] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:06.319 [417/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:06.319 [418/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:06.576 [419/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:06.576 [420/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.576 [421/707] Linking target lib/librte_ipsec.so.24.0 00:02:06.834 [422/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:07.092 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:07.092 [424/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:07.092 [425/707] Linking static target lib/librte_fib.a 00:02:07.092 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:07.092 [427/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:07.092 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:07.092 [429/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:07.092 [430/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:07.092 [431/707] Linking static target lib/librte_pdcp.a 00:02:07.350 [432/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.350 [433/707] Linking target lib/librte_fib.so.24.0 00:02:07.607 [434/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:07.607 [435/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.607 [436/707] Linking target lib/librte_pdcp.so.24.0 00:02:07.865 [437/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:07.865 [438/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:07.865 [439/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:07.865 [440/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:08.123 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:08.123 [442/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:08.123 [443/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:08.381 [444/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:08.381 [445/707] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:08.381 [446/707] Linking static target lib/librte_port.a 00:02:08.639 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:08.639 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:08.639 [449/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:08.639 [450/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:08.639 [451/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:08.897 [452/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:08.897 [453/707] Linking static target lib/librte_pdump.a 00:02:08.897 [454/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:08.897 [455/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:09.156 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.156 [457/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.156 [458/707] Linking target lib/librte_pdump.so.24.0 00:02:09.156 [459/707] Linking target lib/librte_port.so.24.0 00:02:09.156 [460/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:09.415 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:09.415 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:09.674 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:09.675 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:09.675 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:09.675 [466/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:09.675 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:09.675 [468/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:09.934 [469/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:09.934 [470/707] Linking static target lib/librte_table.a 00:02:09.934 [471/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:10.192 [472/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:10.192 [473/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:10.450 [474/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.450 [475/707] Linking target lib/librte_table.so.24.0 00:02:10.708 [476/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:10.708 [477/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:10.708 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:10.708 [479/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:10.708 [480/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:11.272 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:11.272 [482/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:11.272 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:11.272 [484/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:11.272 [485/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 
00:02:11.529 [486/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:11.529 [487/707] Linking static target lib/librte_graph.a 00:02:11.529 [488/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:11.786 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:11.786 [490/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:11.786 [491/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:11.786 [492/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:12.349 [493/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:12.349 [494/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:12.349 [495/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.349 [496/707] Linking target lib/librte_graph.so.24.0 00:02:12.606 [497/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:12.606 [498/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:12.606 [499/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:12.606 [500/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:12.606 [501/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:12.606 [502/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:12.863 [503/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:12.863 [504/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:12.863 [505/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:13.120 [506/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:13.120 [507/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.377 [508/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:13.377 [509/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:13.377 [510/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:13.377 [511/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:13.377 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.377 [513/707] Linking static target lib/librte_node.a 00:02:13.377 [514/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:13.634 [515/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.634 [516/707] Linking target lib/librte_node.so.24.0 00:02:13.634 [517/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.634 [518/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:13.891 [519/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:13.891 [520/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:13.891 [521/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:13.891 [522/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.891 [523/707] Linking static target drivers/librte_bus_pci.a 00:02:14.148 [524/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:14.148 [525/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:14.148 [526/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 
00:02:14.148 [527/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.148 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:14.148 [529/707] Linking static target drivers/librte_bus_vdev.a 00:02:14.148 [530/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:14.148 [531/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:14.148 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:14.148 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:14.406 [534/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.406 [535/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:14.406 [536/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:14.406 [537/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.406 [538/707] Linking static target drivers/librte_mempool_ring.a 00:02:14.406 [539/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.406 [540/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:14.406 [541/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:14.692 [542/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:14.692 [543/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.692 [544/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:14.692 [545/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:14.949 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:15.207 [547/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:15.207 [548/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:15.464 [549/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:16.029 [550/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:16.029 [551/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:16.287 [552/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:16.287 [553/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:16.287 [554/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:16.287 [555/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:16.545 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:16.803 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:16.803 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:16.803 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:16.803 [560/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:17.061 [561/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:17.061 [562/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:17.319 [563/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 
00:02:17.319 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:17.577 [565/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:17.577 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:17.840 [567/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:17.840 [568/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:18.098 [569/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:18.098 [570/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:18.098 [571/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:18.098 [572/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:18.356 [573/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:18.356 [574/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:18.356 [575/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:18.615 [576/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:18.615 [577/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:18.872 [578/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:18.872 [579/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:18.872 [580/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:18.872 [581/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:18.873 [582/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:19.130 [583/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:19.130 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:19.130 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:19.130 [586/707] Linking static target drivers/librte_net_i40e.a 00:02:19.387 [587/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:19.387 [588/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:19.387 [589/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:19.387 [590/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:19.387 [591/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:19.645 [592/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.904 [593/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:19.904 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:19.904 [595/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.904 [596/707] Linking static target lib/librte_vhost.a 00:02:19.904 [597/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:19.904 [598/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:20.162 [599/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:20.162 [600/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:20.162 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:20.162 [602/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:20.419 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:20.677 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:20.677 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:20.993 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:20.993 [607/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:20.993 [608/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:20.993 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:20.993 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:20.993 [611/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:21.273 [612/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:21.273 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:21.273 [614/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:21.273 [615/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.273 [616/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:21.273 [617/707] Linking target lib/librte_vhost.so.24.0 00:02:21.531 [618/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:21.531 [619/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:21.790 [620/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:21.790 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:22.726 [622/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:22.726 [623/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:22.726 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:22.726 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:22.726 [626/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:22.726 [627/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:22.726 [628/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:22.726 [629/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:22.984 [630/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:22.984 [631/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:22.984 [632/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:22.984 [633/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:22.984 [634/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:23.241 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:23.499 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:23.499 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:23.499 [638/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:23.499 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:23.499 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:23.757 [641/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:23.757 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:23.757 [643/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:24.015 [644/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:24.015 [645/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:24.015 [646/707] Linking static target lib/librte_pipeline.a 00:02:24.015 [647/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:24.015 [648/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:24.015 [649/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:24.273 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:24.273 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:24.532 [652/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:24.532 [653/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:24.532 [654/707] Linking target app/dpdk-graph 00:02:24.532 [655/707] Linking target app/dpdk-pdump 00:02:24.532 [656/707] Linking target app/dpdk-proc-info 00:02:24.789 [657/707] Linking target app/dpdk-test-acl 00:02:24.789 [658/707] Linking target app/dpdk-test-cmdline 00:02:24.789 [659/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:24.789 [660/707] Linking target app/dpdk-test-compress-perf 00:02:25.048 [661/707] Linking target app/dpdk-test-crypto-perf 00:02:25.048 [662/707] Linking target app/dpdk-test-dma-perf 00:02:25.048 [663/707] Linking target app/dpdk-test-eventdev 00:02:25.048 [664/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:25.048 [665/707] Linking target app/dpdk-test-fib 00:02:25.306 [666/707] Linking target app/dpdk-test-flow-perf 00:02:25.306 [667/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:25.306 [668/707] Linking target app/dpdk-test-gpudev 00:02:25.306 [669/707] Linking target app/dpdk-test-mldev 00:02:25.565 [670/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:25.565 [671/707] Linking target app/dpdk-test-bbdev 00:02:25.565 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:25.824 [673/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:25.824 [674/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:26.082 [675/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:26.342 [676/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:26.342 [677/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:26.342 [678/707] Linking target app/dpdk-test-pipeline 00:02:26.600 [679/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:26.858 [680/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:26.858 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:26.858 [682/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:26.858 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:27.116 [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:27.373 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:27.373 [686/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.373 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:27.373 [688/707] Linking target lib/librte_pipeline.so.24.0 00:02:27.373 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:27.631 [690/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:27.631 [691/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:27.889 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:28.146 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:28.146 [694/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:28.403 [695/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:28.403 [696/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:28.403 [697/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:28.403 [698/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:28.403 [699/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:28.687 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:28.687 [701/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:28.943 [702/707] Linking target app/dpdk-test-regex 00:02:28.943 [703/707] Linking target app/dpdk-test-sad 00:02:29.201 [704/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:29.201 [705/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:29.765 [706/707] Linking target app/dpdk-testpmd 00:02:29.765 [707/707] Linking target app/dpdk-test-security-perf 00:02:29.765 11:44:28 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:29.765 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:29.765 [0/1] Installing files. 
00:02:30.334 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:30.334 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:30.334 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:30.335 
Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 
Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.335 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.336 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.337 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.337 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.338 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.338 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:02:30.338 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.338 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_gso.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_port.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.339 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.906 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:30.906 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.906 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:30.906 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.906 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:30.906 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.906 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:30.907 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing 
/home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.907 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing 
/home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.908 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing 
/home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 
Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:30.909 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:30.909 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:30.909 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:30.910 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:30.910 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:30.910 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:30.910 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:30.910 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:30.910 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:30.910 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:30.910 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:30.910 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:30.910 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:30.910 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:30.910 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:30.910 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:30.910 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:30.910 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:30.910 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:30.910 Installing symlink pointing to librte_meter.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:30.910 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:30.910 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:30.910 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:30.910 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:30.910 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:30.910 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:30.910 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:30.910 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:30.910 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:30.910 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:30.910 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:30.910 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:30.910 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:30.910 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:30.910 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:30.910 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:30.910 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:30.910 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:30.910 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:30.910 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:30.910 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:30.910 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:30.910 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:30.910 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:30.910 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:30.910 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:30.910 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:30.910 Installing symlink pointing to librte_distributor.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:30.910 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:30.910 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:30.910 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:30.910 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:30.910 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:30.910 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:30.910 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:30.910 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:30.910 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:30.910 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:30.910 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:30.910 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:30.910 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:30.910 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:30.910 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:30.910 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:30.910 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:30.910 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:30.910 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:30.910 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:30.910 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:30.910 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:30.910 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:30.910 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:30.910 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:30.910 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:30.910 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:30.910 Installing symlink pointing to librte_power.so.24.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:30.910 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:30.910 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:30.910 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:30.910 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:30.910 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:30.910 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:30.910 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:30.910 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:30.910 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:30.910 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:30.910 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:30.910 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:30.910 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:30.910 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:30.910 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:30.910 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:30.910 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:30.910 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:30.910 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:30.910 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:30.910 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:30.910 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:30.910 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:30.910 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:30.910 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:30.910 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:30.910 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:30.910 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:30.910 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:30.910 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:30.910 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:30.910 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:30.910 
Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:30.910 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:30.910 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:30.910 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:30.910 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:30.910 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:30.910 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:30.910 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:30.910 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:30.910 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:30.910 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:30.911 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:30.911 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:30.911 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:30.911 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:30.911 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:30.911 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:30.911 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:30.911 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:30.911 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:30.911 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:30.911 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:30.911 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:30.911 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:30.911 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:30.911 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:31.169 11:44:29 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:31.169 11:44:29 build_native_dpdk -- common/autobuild_common.sh@189 
-- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:31.169 11:44:29 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:31.169 11:44:29 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:31.169 00:02:31.169 real 0m51.455s 00:02:31.169 user 6m1.463s 00:02:31.169 sys 0m55.876s 00:02:31.169 11:44:29 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:31.169 ************************************ 00:02:31.169 END TEST build_native_dpdk 00:02:31.169 ************************************ 00:02:31.169 11:44:29 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:31.169 11:44:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:31.169 11:44:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:31.169 11:44:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:31.169 11:44:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:31.169 11:44:29 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:31.169 11:44:29 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:31.169 11:44:29 -- common/autobuild_common.sh@413 -- $ run_test unittest_build _unittest_build 00:02:31.169 11:44:29 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:31.169 11:44:29 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:31.169 11:44:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.169 ************************************ 00:02:31.169 START TEST unittest_build 00:02:31.169 ************************************ 00:02:31.169 11:44:29 unittest_build -- common/autotest_common.sh@1121 -- $ _unittest_build 00:02:31.169 11:44:29 unittest_build -- common/autobuild_common.sh@404 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared 00:02:31.169 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:31.169 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:31.169 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:31.169 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:31.426 Using 'verbs' RDMA provider 00:02:44.553 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:59.502 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:59.502 Creating mk/config.mk...done. 00:02:59.502 Creating mk/cc.flags.mk...done. 00:02:59.502 Type 'make' to build. 00:02:59.502 11:44:56 unittest_build -- common/autobuild_common.sh@405 -- $ make -j10 00:02:59.502 make[1]: Nothing to be done for 'all'. 
00:03:17.576 CC lib/log/log_flags.o 00:03:17.576 CC lib/log/log.o 00:03:17.576 CC lib/log/log_deprecated.o 00:03:17.576 CC lib/ut_mock/mock.o 00:03:17.576 CC lib/ut/ut.o 00:03:17.576 LIB libspdk_log.a 00:03:17.576 LIB libspdk_ut.a 00:03:17.576 LIB libspdk_ut_mock.a 00:03:17.576 CC lib/ioat/ioat.o 00:03:17.576 CXX lib/trace_parser/trace.o 00:03:17.576 CC lib/util/base64.o 00:03:17.576 CC lib/util/bit_array.o 00:03:17.576 CC lib/util/cpuset.o 00:03:17.576 CC lib/util/crc16.o 00:03:17.576 CC lib/util/crc32.o 00:03:17.576 CC lib/dma/dma.o 00:03:17.576 CC lib/util/crc32c.o 00:03:17.576 CC lib/vfio_user/host/vfio_user_pci.o 00:03:17.576 CC lib/util/crc32_ieee.o 00:03:17.576 CC lib/util/crc64.o 00:03:17.576 CC lib/util/dif.o 00:03:17.576 CC lib/util/fd.o 00:03:17.576 CC lib/util/file.o 00:03:17.576 LIB libspdk_dma.a 00:03:17.576 CC lib/vfio_user/host/vfio_user.o 00:03:17.576 CC lib/util/hexlify.o 00:03:17.576 CC lib/util/iov.o 00:03:17.576 CC lib/util/math.o 00:03:17.576 LIB libspdk_ioat.a 00:03:17.576 CC lib/util/pipe.o 00:03:17.576 CC lib/util/strerror_tls.o 00:03:17.576 CC lib/util/string.o 00:03:17.576 CC lib/util/uuid.o 00:03:17.576 CC lib/util/fd_group.o 00:03:17.576 CC lib/util/xor.o 00:03:17.576 LIB libspdk_vfio_user.a 00:03:17.576 CC lib/util/zipf.o 00:03:17.576 LIB libspdk_util.a 00:03:17.576 CC lib/json/json_parse.o 00:03:17.576 CC lib/json/json_util.o 00:03:17.576 CC lib/idxd/idxd.o 00:03:17.576 CC lib/json/json_write.o 00:03:17.576 CC lib/conf/conf.o 00:03:17.576 CC lib/idxd/idxd_user.o 00:03:17.576 CC lib/rdma/common.o 00:03:17.576 CC lib/vmd/vmd.o 00:03:17.576 CC lib/env_dpdk/env.o 00:03:17.576 LIB libspdk_trace_parser.a 00:03:17.576 CC lib/env_dpdk/memory.o 00:03:17.576 LIB libspdk_conf.a 00:03:17.576 CC lib/env_dpdk/pci.o 00:03:17.576 CC lib/env_dpdk/init.o 00:03:17.576 CC lib/env_dpdk/threads.o 00:03:17.576 CC lib/env_dpdk/pci_ioat.o 00:03:17.576 CC lib/rdma/rdma_verbs.o 00:03:17.576 LIB libspdk_json.a 00:03:17.576 CC lib/env_dpdk/pci_virtio.o 00:03:17.576 CC lib/env_dpdk/pci_vmd.o 00:03:17.576 CC lib/env_dpdk/pci_idxd.o 00:03:17.576 CC lib/env_dpdk/pci_event.o 00:03:17.836 LIB libspdk_rdma.a 00:03:17.836 CC lib/env_dpdk/sigbus_handler.o 00:03:17.836 CC lib/env_dpdk/pci_dpdk.o 00:03:17.836 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:17.836 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:17.836 CC lib/vmd/led.o 00:03:17.836 LIB libspdk_idxd.a 00:03:18.095 CC lib/jsonrpc/jsonrpc_server.o 00:03:18.095 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:18.095 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:18.095 CC lib/jsonrpc/jsonrpc_client.o 00:03:18.095 LIB libspdk_vmd.a 00:03:18.354 LIB libspdk_jsonrpc.a 00:03:18.613 CC lib/rpc/rpc.o 00:03:18.613 LIB libspdk_rpc.a 00:03:18.613 LIB libspdk_env_dpdk.a 00:03:18.872 CC lib/keyring/keyring.o 00:03:18.872 CC lib/keyring/keyring_rpc.o 00:03:18.872 CC lib/notify/notify.o 00:03:18.872 CC lib/notify/notify_rpc.o 00:03:18.872 CC lib/trace/trace_flags.o 00:03:18.872 CC lib/trace/trace.o 00:03:18.872 CC lib/trace/trace_rpc.o 00:03:19.131 LIB libspdk_notify.a 00:03:19.131 LIB libspdk_keyring.a 00:03:19.131 LIB libspdk_trace.a 00:03:19.396 CC lib/sock/sock.o 00:03:19.396 CC lib/sock/sock_rpc.o 00:03:19.396 CC lib/thread/thread.o 00:03:19.396 CC lib/thread/iobuf.o 00:03:20.003 LIB libspdk_sock.a 00:03:20.261 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:20.261 CC lib/nvme/nvme_ctrlr.o 00:03:20.261 CC lib/nvme/nvme_fabric.o 00:03:20.261 CC lib/nvme/nvme_ns_cmd.o 00:03:20.261 CC lib/nvme/nvme_ns.o 00:03:20.261 CC lib/nvme/nvme_pcie_common.o 00:03:20.261 CC lib/nvme/nvme_pcie.o 
00:03:20.261 CC lib/nvme/nvme.o 00:03:20.261 CC lib/nvme/nvme_qpair.o 00:03:20.828 CC lib/nvme/nvme_quirks.o 00:03:20.828 CC lib/nvme/nvme_transport.o 00:03:20.828 CC lib/nvme/nvme_discovery.o 00:03:20.828 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:21.086 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:21.086 CC lib/nvme/nvme_tcp.o 00:03:21.086 CC lib/nvme/nvme_opal.o 00:03:21.086 CC lib/nvme/nvme_io_msg.o 00:03:21.086 LIB libspdk_thread.a 00:03:21.345 CC lib/nvme/nvme_poll_group.o 00:03:21.345 CC lib/nvme/nvme_zns.o 00:03:21.345 CC lib/nvme/nvme_stubs.o 00:03:21.604 CC lib/accel/accel.o 00:03:21.604 CC lib/blob/blobstore.o 00:03:21.604 CC lib/nvme/nvme_auth.o 00:03:21.604 CC lib/blob/request.o 00:03:21.604 CC lib/init/json_config.o 00:03:21.604 CC lib/blob/zeroes.o 00:03:21.862 CC lib/blob/blob_bs_dev.o 00:03:21.862 CC lib/init/subsystem.o 00:03:21.862 CC lib/nvme/nvme_cuse.o 00:03:21.862 CC lib/nvme/nvme_rdma.o 00:03:21.862 CC lib/init/subsystem_rpc.o 00:03:22.121 CC lib/init/rpc.o 00:03:22.121 CC lib/accel/accel_rpc.o 00:03:22.121 CC lib/virtio/virtio.o 00:03:22.121 LIB libspdk_init.a 00:03:22.379 CC lib/virtio/virtio_vhost_user.o 00:03:22.379 CC lib/accel/accel_sw.o 00:03:22.379 CC lib/virtio/virtio_vfio_user.o 00:03:22.379 CC lib/virtio/virtio_pci.o 00:03:22.637 CC lib/event/app.o 00:03:22.637 CC lib/event/reactor.o 00:03:22.637 CC lib/event/log_rpc.o 00:03:22.637 CC lib/event/app_rpc.o 00:03:22.637 LIB libspdk_accel.a 00:03:22.637 CC lib/event/scheduler_static.o 00:03:22.637 LIB libspdk_virtio.a 00:03:22.893 CC lib/bdev/bdev.o 00:03:22.893 CC lib/bdev/bdev_rpc.o 00:03:22.893 CC lib/bdev/bdev_zone.o 00:03:22.893 CC lib/bdev/scsi_nvme.o 00:03:22.893 CC lib/bdev/part.o 00:03:23.151 LIB libspdk_event.a 00:03:23.409 LIB libspdk_nvme.a 00:03:25.310 LIB libspdk_blob.a 00:03:25.310 CC lib/blobfs/blobfs.o 00:03:25.310 CC lib/blobfs/tree.o 00:03:25.310 CC lib/lvol/lvol.o 00:03:25.568 LIB libspdk_bdev.a 00:03:25.825 CC lib/nvmf/ctrlr.o 00:03:25.825 CC lib/nvmf/ctrlr_discovery.o 00:03:25.825 CC lib/nvmf/ctrlr_bdev.o 00:03:25.825 CC lib/nvmf/subsystem.o 00:03:25.825 CC lib/nvmf/nvmf.o 00:03:25.825 CC lib/scsi/dev.o 00:03:25.825 CC lib/nbd/nbd.o 00:03:25.825 CC lib/ftl/ftl_core.o 00:03:26.083 CC lib/scsi/lun.o 00:03:26.341 CC lib/ftl/ftl_init.o 00:03:26.341 CC lib/nbd/nbd_rpc.o 00:03:26.341 LIB libspdk_blobfs.a 00:03:26.341 CC lib/nvmf/nvmf_rpc.o 00:03:26.341 LIB libspdk_lvol.a 00:03:26.341 CC lib/nvmf/transport.o 00:03:26.341 CC lib/scsi/port.o 00:03:26.341 CC lib/scsi/scsi.o 00:03:26.599 CC lib/ftl/ftl_layout.o 00:03:26.599 LIB libspdk_nbd.a 00:03:26.599 CC lib/ftl/ftl_debug.o 00:03:26.599 CC lib/ftl/ftl_io.o 00:03:26.599 CC lib/scsi/scsi_bdev.o 00:03:26.599 CC lib/scsi/scsi_pr.o 00:03:26.599 CC lib/scsi/scsi_rpc.o 00:03:26.856 CC lib/scsi/task.o 00:03:26.856 CC lib/ftl/ftl_sb.o 00:03:26.856 CC lib/nvmf/tcp.o 00:03:26.856 CC lib/nvmf/stubs.o 00:03:26.856 CC lib/nvmf/mdns_server.o 00:03:27.114 CC lib/nvmf/rdma.o 00:03:27.114 CC lib/ftl/ftl_l2p.o 00:03:27.114 CC lib/nvmf/auth.o 00:03:27.114 LIB libspdk_scsi.a 00:03:27.114 CC lib/ftl/ftl_l2p_flat.o 00:03:27.114 CC lib/ftl/ftl_nv_cache.o 00:03:27.114 CC lib/ftl/ftl_band.o 00:03:27.372 CC lib/ftl/ftl_band_ops.o 00:03:27.372 CC lib/ftl/ftl_writer.o 00:03:27.372 CC lib/iscsi/conn.o 00:03:27.631 CC lib/iscsi/init_grp.o 00:03:27.631 CC lib/vhost/vhost.o 00:03:27.631 CC lib/vhost/vhost_rpc.o 00:03:27.631 CC lib/iscsi/iscsi.o 00:03:27.631 CC lib/vhost/vhost_scsi.o 00:03:27.889 CC lib/iscsi/md5.o 00:03:27.889 CC lib/iscsi/param.o 00:03:28.147 CC lib/vhost/vhost_blk.o 
00:03:28.147 CC lib/vhost/rte_vhost_user.o 00:03:28.147 CC lib/iscsi/portal_grp.o 00:03:28.147 CC lib/iscsi/tgt_node.o 00:03:28.405 CC lib/ftl/ftl_rq.o 00:03:28.405 CC lib/ftl/ftl_reloc.o 00:03:28.405 CC lib/ftl/ftl_l2p_cache.o 00:03:28.405 CC lib/iscsi/iscsi_subsystem.o 00:03:28.663 CC lib/iscsi/iscsi_rpc.o 00:03:28.663 CC lib/iscsi/task.o 00:03:28.663 CC lib/ftl/ftl_p2l.o 00:03:28.663 CC lib/ftl/mngt/ftl_mngt.o 00:03:28.921 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:28.921 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:28.921 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:28.921 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:29.179 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:29.179 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:29.179 LIB libspdk_vhost.a 00:03:29.179 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:29.179 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:29.179 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:29.179 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:29.179 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:29.179 LIB libspdk_iscsi.a 00:03:29.179 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:29.179 CC lib/ftl/utils/ftl_conf.o 00:03:29.436 CC lib/ftl/utils/ftl_md.o 00:03:29.436 CC lib/ftl/utils/ftl_mempool.o 00:03:29.436 CC lib/ftl/utils/ftl_bitmap.o 00:03:29.436 CC lib/ftl/utils/ftl_property.o 00:03:29.436 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:29.436 LIB libspdk_nvmf.a 00:03:29.436 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:29.436 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:29.436 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:29.436 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:29.436 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:29.436 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:29.694 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:29.694 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:29.694 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:29.694 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:29.694 CC lib/ftl/base/ftl_base_dev.o 00:03:29.694 CC lib/ftl/base/ftl_base_bdev.o 00:03:29.694 CC lib/ftl/ftl_trace.o 00:03:29.953 LIB libspdk_ftl.a 00:03:30.518 CC module/env_dpdk/env_dpdk_rpc.o 00:03:30.518 CC module/accel/dsa/accel_dsa.o 00:03:30.518 CC module/sock/posix/posix.o 00:03:30.518 CC module/keyring/linux/keyring.o 00:03:30.518 CC module/keyring/file/keyring.o 00:03:30.518 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:30.518 CC module/accel/ioat/accel_ioat.o 00:03:30.518 CC module/blob/bdev/blob_bdev.o 00:03:30.518 CC module/accel/error/accel_error.o 00:03:30.518 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:30.518 LIB libspdk_env_dpdk_rpc.a 00:03:30.518 CC module/accel/ioat/accel_ioat_rpc.o 00:03:30.518 CC module/keyring/linux/keyring_rpc.o 00:03:30.518 CC module/keyring/file/keyring_rpc.o 00:03:30.518 LIB libspdk_scheduler_dpdk_governor.a 00:03:30.776 CC module/accel/error/accel_error_rpc.o 00:03:30.776 LIB libspdk_scheduler_dynamic.a 00:03:30.776 LIB libspdk_accel_ioat.a 00:03:30.776 CC module/accel/dsa/accel_dsa_rpc.o 00:03:30.776 LIB libspdk_keyring_linux.a 00:03:30.776 LIB libspdk_blob_bdev.a 00:03:30.776 LIB libspdk_keyring_file.a 00:03:30.776 CC module/scheduler/gscheduler/gscheduler.o 00:03:30.776 CC module/accel/iaa/accel_iaa.o 00:03:30.776 CC module/accel/iaa/accel_iaa_rpc.o 00:03:30.776 LIB libspdk_accel_error.a 00:03:30.776 LIB libspdk_accel_dsa.a 00:03:31.034 LIB libspdk_scheduler_gscheduler.a 00:03:31.034 CC module/bdev/gpt/gpt.o 00:03:31.034 CC module/blobfs/bdev/blobfs_bdev.o 00:03:31.034 CC module/bdev/error/vbdev_error.o 00:03:31.034 CC module/bdev/delay/vbdev_delay.o 00:03:31.034 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:31.034 CC 
module/bdev/lvol/vbdev_lvol.o 00:03:31.034 LIB libspdk_accel_iaa.a 00:03:31.034 CC module/bdev/malloc/bdev_malloc.o 00:03:31.034 CC module/bdev/null/bdev_null.o 00:03:31.291 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:31.292 CC module/bdev/gpt/vbdev_gpt.o 00:03:31.292 CC module/bdev/nvme/bdev_nvme.o 00:03:31.292 CC module/bdev/error/vbdev_error_rpc.o 00:03:31.292 LIB libspdk_sock_posix.a 00:03:31.292 LIB libspdk_blobfs_bdev.a 00:03:31.292 LIB libspdk_bdev_delay.a 00:03:31.292 CC module/bdev/null/bdev_null_rpc.o 00:03:31.549 CC module/bdev/passthru/vbdev_passthru.o 00:03:31.549 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:31.549 LIB libspdk_bdev_gpt.a 00:03:31.549 LIB libspdk_bdev_error.a 00:03:31.549 CC module/bdev/raid/bdev_raid.o 00:03:31.549 CC module/bdev/raid/bdev_raid_rpc.o 00:03:31.549 CC module/bdev/split/vbdev_split.o 00:03:31.549 CC module/bdev/split/vbdev_split_rpc.o 00:03:31.549 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:31.549 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:31.549 LIB libspdk_bdev_null.a 00:03:31.549 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:31.549 LIB libspdk_bdev_malloc.a 00:03:31.807 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:31.807 LIB libspdk_bdev_split.a 00:03:31.807 CC module/bdev/ftl/bdev_ftl.o 00:03:31.807 CC module/bdev/aio/bdev_aio.o 00:03:31.807 CC module/bdev/aio/bdev_aio_rpc.o 00:03:31.807 LIB libspdk_bdev_passthru.a 00:03:31.807 CC module/bdev/iscsi/bdev_iscsi.o 00:03:31.807 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:31.807 LIB libspdk_bdev_zone_block.a 00:03:32.065 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:32.065 LIB libspdk_bdev_lvol.a 00:03:32.065 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:32.065 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:32.065 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:32.065 CC module/bdev/raid/bdev_raid_sb.o 00:03:32.065 CC module/bdev/raid/raid0.o 00:03:32.324 LIB libspdk_bdev_aio.a 00:03:32.324 CC module/bdev/raid/raid1.o 00:03:32.324 LIB libspdk_bdev_ftl.a 00:03:32.324 CC module/bdev/raid/concat.o 00:03:32.324 LIB libspdk_bdev_iscsi.a 00:03:32.324 CC module/bdev/raid/raid5f.o 00:03:32.324 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:32.324 CC module/bdev/nvme/nvme_rpc.o 00:03:32.324 CC module/bdev/nvme/bdev_mdns_client.o 00:03:32.324 CC module/bdev/nvme/vbdev_opal.o 00:03:32.583 LIB libspdk_bdev_virtio.a 00:03:32.583 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:32.583 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:32.842 LIB libspdk_bdev_raid.a 00:03:33.777 LIB libspdk_bdev_nvme.a 00:03:34.036 CC module/event/subsystems/iobuf/iobuf.o 00:03:34.036 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:34.036 CC module/event/subsystems/keyring/keyring.o 00:03:34.036 CC module/event/subsystems/vmd/vmd.o 00:03:34.036 CC module/event/subsystems/sock/sock.o 00:03:34.036 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:34.036 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:34.036 CC module/event/subsystems/scheduler/scheduler.o 00:03:34.036 LIB libspdk_event_vhost_blk.a 00:03:34.036 LIB libspdk_event_keyring.a 00:03:34.036 LIB libspdk_event_scheduler.a 00:03:34.295 LIB libspdk_event_sock.a 00:03:34.295 LIB libspdk_event_vmd.a 00:03:34.295 LIB libspdk_event_iobuf.a 00:03:34.295 CC module/event/subsystems/accel/accel.o 00:03:34.553 LIB libspdk_event_accel.a 00:03:34.811 CC module/event/subsystems/bdev/bdev.o 00:03:35.070 LIB libspdk_event_bdev.a 00:03:35.335 CC module/event/subsystems/scsi/scsi.o 00:03:35.335 CC module/event/subsystems/nbd/nbd.o 00:03:35.335 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:03:35.336 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:35.336 LIB libspdk_event_nbd.a 00:03:35.336 LIB libspdk_event_scsi.a 00:03:35.594 LIB libspdk_event_nvmf.a 00:03:35.594 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.594 CC module/event/subsystems/iscsi/iscsi.o 00:03:35.853 LIB libspdk_event_vhost_scsi.a 00:03:35.853 LIB libspdk_event_iscsi.a 00:03:36.111 CC app/trace_record/trace_record.o 00:03:36.111 CXX app/trace/trace.o 00:03:36.111 CC app/iscsi_tgt/iscsi_tgt.o 00:03:36.111 CC app/nvmf_tgt/nvmf_main.o 00:03:36.111 CC examples/nvme/hello_world/hello_world.o 00:03:36.111 CC examples/ioat/perf/perf.o 00:03:36.111 CC examples/accel/perf/accel_perf.o 00:03:36.111 CC examples/bdev/hello_world/hello_bdev.o 00:03:36.111 CC examples/blob/hello_world/hello_blob.o 00:03:36.370 CC test/accel/dif/dif.o 00:03:36.370 LINK iscsi_tgt 00:03:36.370 LINK nvmf_tgt 00:03:36.370 LINK spdk_trace_record 00:03:36.370 LINK ioat_perf 00:03:36.628 LINK hello_world 00:03:36.628 LINK hello_blob 00:03:36.628 LINK hello_bdev 00:03:36.628 LINK spdk_trace 00:03:36.887 LINK accel_perf 00:03:36.887 LINK dif 00:03:36.887 CC examples/blob/cli/blobcli.o 00:03:37.145 CC examples/bdev/bdevperf/bdevperf.o 00:03:37.145 CC examples/ioat/verify/verify.o 00:03:37.404 LINK verify 00:03:37.404 LINK blobcli 00:03:37.663 CC examples/nvme/reconnect/reconnect.o 00:03:37.921 CC app/spdk_tgt/spdk_tgt.o 00:03:37.921 LINK bdevperf 00:03:37.921 LINK reconnect 00:03:38.179 CC app/spdk_lspci/spdk_lspci.o 00:03:38.179 LINK spdk_tgt 00:03:38.179 LINK spdk_lspci 00:03:39.113 CC app/spdk_nvme_perf/perf.o 00:03:39.113 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:39.371 CC examples/sock/hello_world/hello_sock.o 00:03:39.371 CC examples/vmd/lsvmd/lsvmd.o 00:03:39.371 LINK lsvmd 00:03:39.630 LINK hello_sock 00:03:39.630 LINK nvme_manage 00:03:39.630 CC examples/vmd/led/led.o 00:03:39.935 CC app/spdk_nvme_identify/identify.o 00:03:39.935 LINK led 00:03:39.935 LINK spdk_nvme_perf 00:03:40.193 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.193 CC app/spdk_top/spdk_top.o 00:03:40.452 CC test/app/bdev_svc/bdev_svc.o 00:03:40.452 LINK spdk_nvme_discover 00:03:40.452 LINK bdev_svc 00:03:40.710 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.710 CC examples/nvme/arbitration/arbitration.o 00:03:40.710 LINK spdk_nvme_identify 00:03:40.969 CC examples/nvme/hotplug/hotplug.o 00:03:40.969 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:41.228 LINK arbitration 00:03:41.228 LINK nvme_fuzz 00:03:41.228 CC examples/nvme/abort/abort.o 00:03:41.228 LINK cmb_copy 00:03:41.228 LINK hotplug 00:03:41.228 LINK spdk_top 00:03:41.485 CC test/app/histogram_perf/histogram_perf.o 00:03:41.485 LINK histogram_perf 00:03:41.485 LINK abort 00:03:41.485 CC test/bdev/bdevio/bdevio.o 00:03:42.056 CC examples/nvmf/nvmf/nvmf.o 00:03:42.056 CC app/vhost/vhost.o 00:03:42.056 LINK bdevio 00:03:42.337 CC app/spdk_dd/spdk_dd.o 00:03:42.337 LINK nvmf 00:03:42.337 LINK vhost 00:03:42.337 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:42.337 CC app/fio/nvme/fio_plugin.o 00:03:42.337 CC examples/util/zipf/zipf.o 00:03:42.337 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:42.606 LINK zipf 00:03:42.864 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:42.864 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:42.864 LINK spdk_dd 00:03:42.864 LINK pmr_persistence 00:03:43.123 LINK spdk_nvme 00:03:43.123 CC test/app/jsoncat/jsoncat.o 00:03:43.382 LINK jsoncat 00:03:43.382 LINK vhost_fuzz 00:03:43.950 TEST_HEADER 
include/spdk/accel.h 00:03:43.950 TEST_HEADER include/spdk/accel_module.h 00:03:43.950 TEST_HEADER include/spdk/assert.h 00:03:43.950 TEST_HEADER include/spdk/barrier.h 00:03:43.950 TEST_HEADER include/spdk/base64.h 00:03:43.950 TEST_HEADER include/spdk/bdev.h 00:03:43.950 TEST_HEADER include/spdk/bdev_module.h 00:03:43.950 TEST_HEADER include/spdk/bdev_zone.h 00:03:43.950 TEST_HEADER include/spdk/bit_array.h 00:03:43.950 TEST_HEADER include/spdk/bit_pool.h 00:03:43.950 TEST_HEADER include/spdk/blob.h 00:03:43.950 TEST_HEADER include/spdk/blob_bdev.h 00:03:43.950 TEST_HEADER include/spdk/blobfs.h 00:03:43.950 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:43.950 TEST_HEADER include/spdk/conf.h 00:03:43.950 TEST_HEADER include/spdk/config.h 00:03:43.950 TEST_HEADER include/spdk/cpuset.h 00:03:43.950 TEST_HEADER include/spdk/crc16.h 00:03:43.950 CC test/blobfs/mkfs/mkfs.o 00:03:43.950 TEST_HEADER include/spdk/crc32.h 00:03:43.950 TEST_HEADER include/spdk/crc64.h 00:03:43.950 TEST_HEADER include/spdk/dif.h 00:03:43.950 TEST_HEADER include/spdk/dma.h 00:03:43.950 TEST_HEADER include/spdk/endian.h 00:03:43.950 TEST_HEADER include/spdk/env.h 00:03:43.950 TEST_HEADER include/spdk/env_dpdk.h 00:03:43.950 TEST_HEADER include/spdk/event.h 00:03:43.950 TEST_HEADER include/spdk/fd.h 00:03:43.950 TEST_HEADER include/spdk/fd_group.h 00:03:43.950 TEST_HEADER include/spdk/file.h 00:03:43.950 TEST_HEADER include/spdk/ftl.h 00:03:43.950 TEST_HEADER include/spdk/gpt_spec.h 00:03:43.950 TEST_HEADER include/spdk/hexlify.h 00:03:44.209 TEST_HEADER include/spdk/histogram_data.h 00:03:44.209 TEST_HEADER include/spdk/idxd.h 00:03:44.209 TEST_HEADER include/spdk/idxd_spec.h 00:03:44.209 TEST_HEADER include/spdk/init.h 00:03:44.209 TEST_HEADER include/spdk/ioat.h 00:03:44.209 TEST_HEADER include/spdk/ioat_spec.h 00:03:44.209 TEST_HEADER include/spdk/iscsi_spec.h 00:03:44.209 TEST_HEADER include/spdk/json.h 00:03:44.209 TEST_HEADER include/spdk/jsonrpc.h 00:03:44.209 TEST_HEADER include/spdk/keyring.h 00:03:44.209 TEST_HEADER include/spdk/keyring_module.h 00:03:44.209 TEST_HEADER include/spdk/likely.h 00:03:44.209 TEST_HEADER include/spdk/log.h 00:03:44.209 TEST_HEADER include/spdk/lvol.h 00:03:44.209 TEST_HEADER include/spdk/memory.h 00:03:44.209 TEST_HEADER include/spdk/mmio.h 00:03:44.209 TEST_HEADER include/spdk/nbd.h 00:03:44.209 TEST_HEADER include/spdk/notify.h 00:03:44.209 TEST_HEADER include/spdk/nvme.h 00:03:44.209 TEST_HEADER include/spdk/nvme_intel.h 00:03:44.209 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:44.209 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:44.209 TEST_HEADER include/spdk/nvme_spec.h 00:03:44.209 TEST_HEADER include/spdk/nvme_zns.h 00:03:44.209 CC app/fio/bdev/fio_plugin.o 00:03:44.209 TEST_HEADER include/spdk/nvmf.h 00:03:44.209 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:44.209 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:44.209 TEST_HEADER include/spdk/nvmf_spec.h 00:03:44.209 TEST_HEADER include/spdk/nvmf_transport.h 00:03:44.209 TEST_HEADER include/spdk/opal.h 00:03:44.209 TEST_HEADER include/spdk/opal_spec.h 00:03:44.209 TEST_HEADER include/spdk/pci_ids.h 00:03:44.209 TEST_HEADER include/spdk/pipe.h 00:03:44.209 TEST_HEADER include/spdk/queue.h 00:03:44.209 TEST_HEADER include/spdk/reduce.h 00:03:44.209 TEST_HEADER include/spdk/rpc.h 00:03:44.209 TEST_HEADER include/spdk/scheduler.h 00:03:44.209 TEST_HEADER include/spdk/scsi.h 00:03:44.209 TEST_HEADER include/spdk/scsi_spec.h 00:03:44.209 TEST_HEADER include/spdk/sock.h 00:03:44.209 TEST_HEADER include/spdk/stdinc.h 00:03:44.209 
TEST_HEADER include/spdk/string.h 00:03:44.209 TEST_HEADER include/spdk/thread.h 00:03:44.209 TEST_HEADER include/spdk/trace.h 00:03:44.209 TEST_HEADER include/spdk/trace_parser.h 00:03:44.209 TEST_HEADER include/spdk/tree.h 00:03:44.209 TEST_HEADER include/spdk/ublk.h 00:03:44.209 TEST_HEADER include/spdk/util.h 00:03:44.209 TEST_HEADER include/spdk/uuid.h 00:03:44.209 TEST_HEADER include/spdk/version.h 00:03:44.209 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:44.209 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:44.209 TEST_HEADER include/spdk/vhost.h 00:03:44.209 TEST_HEADER include/spdk/vmd.h 00:03:44.209 TEST_HEADER include/spdk/xor.h 00:03:44.209 TEST_HEADER include/spdk/zipf.h 00:03:44.209 CXX test/cpp_headers/accel.o 00:03:44.209 LINK mkfs 00:03:44.209 LINK iscsi_fuzz 00:03:44.209 CXX test/cpp_headers/accel_module.o 00:03:44.209 CC examples/thread/thread/thread_ex.o 00:03:44.467 CXX test/cpp_headers/assert.o 00:03:44.467 CC test/app/stub/stub.o 00:03:44.725 LINK thread 00:03:44.725 LINK spdk_bdev 00:03:44.725 CXX test/cpp_headers/barrier.o 00:03:44.725 LINK stub 00:03:44.725 CXX test/cpp_headers/base64.o 00:03:44.983 CC test/dma/test_dma/test_dma.o 00:03:44.983 CXX test/cpp_headers/bdev.o 00:03:45.242 CXX test/cpp_headers/bdev_module.o 00:03:45.242 LINK test_dma 00:03:45.500 CXX test/cpp_headers/bdev_zone.o 00:03:45.500 CC test/event/event_perf/event_perf.o 00:03:45.500 CC test/env/mem_callbacks/mem_callbacks.o 00:03:45.500 LINK event_perf 00:03:45.500 CXX test/cpp_headers/bit_array.o 00:03:45.500 CC test/env/vtophys/vtophys.o 00:03:45.758 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:45.758 CXX test/cpp_headers/bit_pool.o 00:03:45.758 LINK vtophys 00:03:46.017 LINK env_dpdk_post_init 00:03:46.017 CXX test/cpp_headers/blob.o 00:03:46.017 LINK mem_callbacks 00:03:46.017 CC test/env/memory/memory_ut.o 00:03:46.017 CXX test/cpp_headers/blob_bdev.o 00:03:46.275 CC test/event/reactor/reactor.o 00:03:46.275 CC test/env/pci/pci_ut.o 00:03:46.533 CXX test/cpp_headers/blobfs.o 00:03:46.533 LINK reactor 00:03:46.533 CC test/event/reactor_perf/reactor_perf.o 00:03:46.533 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.792 LINK reactor_perf 00:03:46.792 LINK pci_ut 00:03:46.792 CXX test/cpp_headers/conf.o 00:03:46.792 CXX test/cpp_headers/config.o 00:03:47.050 CXX test/cpp_headers/cpuset.o 00:03:47.050 CC test/event/app_repeat/app_repeat.o 00:03:47.050 LINK memory_ut 00:03:47.050 CXX test/cpp_headers/crc16.o 00:03:47.050 CXX test/cpp_headers/crc32.o 00:03:47.308 LINK app_repeat 00:03:47.308 CXX test/cpp_headers/crc64.o 00:03:47.308 CC test/event/scheduler/scheduler.o 00:03:47.308 CC test/rpc_client/rpc_client_test.o 00:03:47.308 CC test/nvme/aer/aer.o 00:03:47.308 CC test/lvol/esnap/esnap.o 00:03:47.308 CC test/nvme/reset/reset.o 00:03:47.308 CXX test/cpp_headers/dif.o 00:03:47.567 CXX test/cpp_headers/dma.o 00:03:47.567 LINK scheduler 00:03:47.567 LINK rpc_client_test 00:03:47.567 CXX test/cpp_headers/endian.o 00:03:47.567 LINK reset 00:03:47.567 LINK aer 00:03:47.825 CC test/nvme/sgl/sgl.o 00:03:47.825 CXX test/cpp_headers/env.o 00:03:47.825 CXX test/cpp_headers/env_dpdk.o 00:03:48.084 CC examples/idxd/perf/perf.o 00:03:48.084 CXX test/cpp_headers/event.o 00:03:48.084 CXX test/cpp_headers/fd.o 00:03:48.084 CC test/nvme/e2edp/nvme_dp.o 00:03:48.084 LINK sgl 00:03:48.084 CXX test/cpp_headers/fd_group.o 00:03:48.341 CC test/thread/poller_perf/poller_perf.o 00:03:48.341 LINK idxd_perf 00:03:48.341 CXX test/cpp_headers/file.o 00:03:48.341 LINK nvme_dp 00:03:48.341 CC 
test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:48.341 LINK poller_perf 00:03:48.599 CXX test/cpp_headers/ftl.o 00:03:48.599 LINK histogram_ut 00:03:48.599 CXX test/cpp_headers/gpt_spec.o 00:03:48.856 CXX test/cpp_headers/hexlify.o 00:03:48.856 CXX test/cpp_headers/histogram_data.o 00:03:48.857 CC test/thread/lock/spdk_lock.o 00:03:48.857 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:48.857 CXX test/cpp_headers/idxd.o 00:03:49.114 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:49.114 CXX test/cpp_headers/idxd_spec.o 00:03:49.114 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:49.114 LINK interrupt_tgt 00:03:49.114 CXX test/cpp_headers/init.o 00:03:49.372 CC test/nvme/overhead/overhead.o 00:03:49.372 CC test/nvme/err_injection/err_injection.o 00:03:49.372 CXX test/cpp_headers/ioat.o 00:03:49.629 LINK err_injection 00:03:49.629 LINK overhead 00:03:49.629 CXX test/cpp_headers/ioat_spec.o 00:03:49.629 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:49.629 CXX test/cpp_headers/iscsi_spec.o 00:03:49.886 CXX test/cpp_headers/json.o 00:03:49.886 CXX test/cpp_headers/jsonrpc.o 00:03:50.143 CXX test/cpp_headers/keyring.o 00:03:50.143 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:50.143 CXX test/cpp_headers/keyring_module.o 00:03:50.401 CXX test/cpp_headers/likely.o 00:03:50.658 CC test/nvme/startup/startup.o 00:03:50.658 CC test/nvme/reserve/reserve.o 00:03:50.658 LINK blob_bdev_ut 00:03:50.658 LINK tree_ut 00:03:50.658 CXX test/cpp_headers/log.o 00:03:50.658 LINK spdk_lock 00:03:50.658 LINK startup 00:03:50.915 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:50.915 CXX test/cpp_headers/lvol.o 00:03:50.915 LINK reserve 00:03:50.915 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:50.915 CXX test/cpp_headers/memory.o 00:03:51.173 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:51.173 CXX test/cpp_headers/mmio.o 00:03:51.173 CXX test/cpp_headers/nbd.o 00:03:51.173 CXX test/cpp_headers/notify.o 00:03:51.430 CXX test/cpp_headers/nvme.o 00:03:51.430 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:51.687 CXX test/cpp_headers/nvme_intel.o 00:03:51.687 CXX test/cpp_headers/nvme_ocssd.o 00:03:51.687 LINK blobfs_bdev_ut 00:03:51.687 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:51.944 LINK accel_ut 00:03:51.944 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:51.944 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:52.201 CC test/nvme/simple_copy/simple_copy.o 00:03:52.201 CXX test/cpp_headers/nvme_spec.o 00:03:52.201 LINK scsi_nvme_ut 00:03:52.201 LINK simple_copy 00:03:52.459 CXX test/cpp_headers/nvme_zns.o 00:03:52.459 CXX test/cpp_headers/nvmf.o 00:03:52.459 LINK blobfs_async_ut 00:03:52.459 LINK blobfs_sync_ut 00:03:52.459 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:52.459 CXX test/cpp_headers/nvmf_cmd.o 00:03:52.717 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:52.717 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:52.717 CXX test/cpp_headers/nvmf_spec.o 00:03:52.975 CXX test/cpp_headers/nvmf_transport.o 00:03:52.975 CC test/unit/lib/event/app.c/app_ut.o 00:03:52.975 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:53.232 CXX test/cpp_headers/opal.o 00:03:53.232 LINK dma_ut 00:03:53.232 LINK gpt_ut 00:03:53.232 CC test/nvme/connect_stress/connect_stress.o 00:03:53.500 CXX test/cpp_headers/opal_spec.o 00:03:53.500 LINK esnap 00:03:53.500 LINK connect_stress 00:03:53.500 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:53.500 CXX test/cpp_headers/pci_ids.o 00:03:53.500 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:53.758 CXX test/cpp_headers/pipe.o 
00:03:53.758 LINK ioat_ut 00:03:53.758 CXX test/cpp_headers/queue.o 00:03:54.016 CXX test/cpp_headers/reduce.o 00:03:54.016 LINK app_ut 00:03:54.016 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:54.016 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:54.016 CXX test/cpp_headers/rpc.o 00:03:54.274 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:54.274 CXX test/cpp_headers/scheduler.o 00:03:54.531 LINK jsonrpc_server_ut 00:03:54.531 CC test/nvme/boot_partition/boot_partition.o 00:03:54.531 CXX test/cpp_headers/scsi.o 00:03:54.531 LINK init_grp_ut 00:03:54.788 LINK boot_partition 00:03:54.788 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:54.788 CXX test/cpp_headers/scsi_spec.o 00:03:54.788 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:55.045 CXX test/cpp_headers/sock.o 00:03:55.045 CXX test/cpp_headers/stdinc.o 00:03:55.303 LINK bdev_ut 00:03:55.303 LINK conn_ut 00:03:55.303 CXX test/cpp_headers/string.o 00:03:55.303 LINK reactor_ut 00:03:55.303 CXX test/cpp_headers/thread.o 00:03:55.561 LINK param_ut 00:03:55.561 CXX test/cpp_headers/trace.o 00:03:55.561 CC test/unit/lib/log/log.c/log_ut.o 00:03:55.561 CXX test/cpp_headers/trace_parser.o 00:03:55.818 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:55.818 CC test/nvme/fused_ordering/fused_ordering.o 00:03:55.818 CC test/nvme/compliance/nvme_compliance.o 00:03:55.818 CXX test/cpp_headers/tree.o 00:03:55.818 CXX test/cpp_headers/ublk.o 00:03:55.818 LINK part_ut 00:03:55.818 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:55.818 LINK log_ut 00:03:55.818 LINK fused_ordering 00:03:56.076 CXX test/cpp_headers/util.o 00:03:56.076 LINK nvme_compliance 00:03:56.076 CXX test/cpp_headers/uuid.o 00:03:56.076 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:56.076 LINK json_parse_ut 00:03:56.333 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:56.333 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:56.590 LINK notify_ut 00:03:56.590 CXX test/cpp_headers/version.o 00:03:56.590 CXX test/cpp_headers/vfio_user_pci.o 00:03:56.847 CXX test/cpp_headers/vfio_user_spec.o 00:03:56.847 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:56.847 CXX test/cpp_headers/vhost.o 00:03:56.847 LINK json_util_ut 00:03:57.104 CC test/nvme/fdp/fdp.o 00:03:57.104 CXX test/cpp_headers/vmd.o 00:03:57.104 LINK doorbell_aers 00:03:57.104 CXX test/cpp_headers/xor.o 00:03:57.104 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:57.361 CXX test/cpp_headers/zipf.o 00:03:57.361 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:57.361 LINK fdp 00:03:57.618 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:57.618 LINK iscsi_ut 00:03:57.618 LINK vbdev_lvol_ut 00:03:57.875 LINK nvme_ut 00:03:57.875 LINK lvol_ut 00:03:57.875 LINK json_write_ut 00:03:57.875 LINK dev_ut 00:03:57.875 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:58.132 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:58.132 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:58.132 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:58.132 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:58.390 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:58.390 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:58.390 CC test/nvme/cuse/cuse.o 00:03:58.647 LINK scsi_ut 00:03:58.647 LINK blob_ut 00:03:58.904 LINK lun_ut 00:03:58.904 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:59.161 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:59.161 LINK portal_grp_ut 00:03:59.161 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:59.418 LINK bdev_zone_ut 00:03:59.418 CC 
test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:59.676 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:59.676 LINK bdev_raid_sb_ut 00:03:59.676 LINK cuse 00:03:59.933 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:59.933 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:04:00.190 LINK scsi_pr_ut 00:04:00.190 LINK sock_ut 00:04:00.190 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:04:00.447 LINK scsi_bdev_ut 00:04:00.447 CC test/unit/lib/sock/posix.c/posix_ut.o 00:04:00.447 LINK bdev_raid_ut 00:04:00.705 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:04:00.705 LINK tgt_node_ut 00:04:00.705 LINK concat_ut 00:04:00.962 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:04:00.962 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:04:00.962 LINK raid1_ut 00:04:01.220 CC test/unit/lib/thread/thread.c/thread_ut.o 00:04:01.477 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:04:01.477 LINK nvme_ctrlr_ut 00:04:01.477 LINK vbdev_zone_block_ut 00:04:01.735 LINK posix_ut 00:04:01.735 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:04:01.992 CC test/unit/lib/util/base64.c/base64_ut.o 00:04:01.992 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:04:01.992 LINK tcp_ut 00:04:01.992 LINK bdev_ut 00:04:01.992 LINK base64_ut 00:04:02.250 LINK raid0_ut 00:04:02.250 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:04:02.250 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:04:02.533 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:04:02.533 LINK bit_array_ut 00:04:02.533 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:04:02.533 LINK crc16_ut 00:04:02.533 LINK cpuset_ut 00:04:02.832 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:04:02.832 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:04:02.832 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:04:02.832 LINK pci_event_ut 00:04:02.832 LINK crc32_ieee_ut 00:04:03.089 LINK nvme_ctrlr_cmd_ut 00:04:03.089 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:04:03.089 LINK ctrlr_discovery_ut 00:04:03.089 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:04:03.089 LINK crc32c_ut 00:04:03.346 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:04:03.346 LINK crc64_ut 00:04:03.346 LINK ctrlr_ut 00:04:03.346 LINK subsystem_ut 00:04:03.346 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:04:03.346 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:04:03.603 LINK thread_ut 00:04:03.603 CC test/unit/lib/util/dif.c/dif_ut.o 00:04:03.603 CC test/unit/lib/util/iov.c/iov_ut.o 00:04:03.860 LINK iobuf_ut 00:04:03.860 CC test/unit/lib/util/math.c/math_ut.o 00:04:03.860 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:04:03.860 LINK raid5f_ut 00:04:03.860 LINK math_ut 00:04:03.860 LINK iov_ut 00:04:04.118 CC test/unit/lib/util/string.c/string_ut.o 00:04:04.118 LINK ctrlr_bdev_ut 00:04:04.118 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:04:04.118 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:04:04.118 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:04:04.374 LINK nvme_ctrlr_ocssd_cmd_ut 00:04:04.374 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:04:04.374 LINK string_ut 00:04:04.374 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:04:04.631 LINK pipe_ut 00:04:04.631 CC test/unit/lib/util/xor.c/xor_ut.o 00:04:04.631 LINK dif_ut 00:04:04.631 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:04:04.887 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:04:04.887 LINK nvmf_ut 00:04:05.144 LINK xor_ut 00:04:05.144 LINK nvme_ns_ut 00:04:05.144 LINK auth_ut 00:04:05.433 CC 
test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:04:05.433 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:04:05.433 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:04:05.690 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:04:05.948 LINK nvme_ns_ocssd_cmd_ut 00:04:06.206 LINK subsystem_ut 00:04:06.206 LINK nvme_ns_cmd_ut 00:04:06.206 LINK nvme_poll_group_ut 00:04:06.206 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:04:06.464 LINK rpc_ut 00:04:06.464 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:04:06.464 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:04:06.464 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:04:06.722 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:04:06.722 LINK nvme_qpair_ut 00:04:06.722 LINK keyring_ut 00:04:06.980 LINK nvme_pcie_ut 00:04:06.980 LINK rpc_ut 00:04:06.980 LINK nvme_quirks_ut 00:04:06.980 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:04:06.980 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:04:06.980 LINK idxd_user_ut 00:04:07.239 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:04:07.239 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:04:07.239 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:04:07.239 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:04:08.174 LINK nvme_io_msg_ut 00:04:08.174 LINK nvme_transport_ut 00:04:08.433 LINK transport_ut 00:04:08.433 LINK idxd_ut 00:04:08.433 LINK rdma_ut 00:04:08.433 LINK nvme_fabric_ut 00:04:08.433 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:04:08.691 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:04:08.691 CC test/unit/lib/rdma/common.c/common_ut.o 00:04:08.691 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:04:08.691 LINK nvme_pcie_common_ut 00:04:08.691 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:04:08.691 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:04:08.950 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:04:08.950 LINK bdev_nvme_ut 00:04:09.208 LINK ftl_l2p_ut 00:04:09.208 LINK common_ut 00:04:09.466 LINK nvme_opal_ut 00:04:09.466 CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o 00:04:09.466 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:04:09.466 LINK nvme_tcp_ut 00:04:09.466 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:04:09.725 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:04:09.725 LINK ftl_bitmap_ut 00:04:09.725 LINK ftl_io_ut 00:04:09.725 LINK vhost_ut 00:04:09.725 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:04:09.725 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:04:09.984 LINK ftl_mempool_ut 00:04:10.551 LINK ftl_mngt_ut 00:04:10.551 LINK ftl_band_ut 00:04:10.551 LINK ftl_p2l_ut 00:04:10.551 LINK nvme_cuse_ut 00:04:11.117 LINK nvme_rdma_ut 00:04:11.376 LINK ftl_layout_upgrade_ut 00:04:11.376 LINK ftl_sb_ut 00:04:11.635 ************************************ 00:04:11.635 END TEST unittest_build 00:04:11.635 ************************************ 00:04:11.635 00:04:11.635 real 1m40.406s 00:04:11.635 user 8m50.470s 00:04:11.635 sys 1m42.602s 00:04:11.635 11:46:10 unittest_build -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:11.635 11:46:10 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:04:11.635 11:46:10 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:11.635 11:46:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:11.635 11:46:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:11.635 11:46:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.635 11:46:10 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:11.635 11:46:10 -- pm/common@44 -- $ pid=2340 00:04:11.635 11:46:10 -- pm/common@50 -- $ kill -TERM 2340 00:04:11.635 11:46:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.635 11:46:10 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:11.635 11:46:10 -- pm/common@44 -- $ pid=2342 00:04:11.635 11:46:10 -- pm/common@50 -- $ kill -TERM 2342 00:04:11.635 11:46:10 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:11.635 11:46:10 -- nvmf/common.sh@7 -- # uname -s 00:04:11.635 11:46:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:11.635 11:46:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:11.635 11:46:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:11.635 11:46:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:11.635 11:46:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:11.635 11:46:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:11.636 11:46:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:11.636 11:46:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:11.636 11:46:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:11.636 11:46:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:11.636 11:46:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6793b6d3-8b4d-47ab-9fc7-60ed6c9ca5f2 00:04:11.636 11:46:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=6793b6d3-8b4d-47ab-9fc7-60ed6c9ca5f2 00:04:11.636 11:46:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:11.636 11:46:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:11.636 11:46:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:11.636 11:46:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:11.636 11:46:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:11.636 11:46:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:11.636 11:46:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:11.636 11:46:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:11.636 11:46:10 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:11.636 11:46:10 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:11.636 11:46:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:11.636 11:46:10 -- paths/export.sh@5 -- # export PATH 00:04:11.636 11:46:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:11.636 11:46:10 -- nvmf/common.sh@47 -- # : 0 00:04:11.636 11:46:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:11.636 11:46:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:11.636 11:46:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:11.636 11:46:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:11.636 11:46:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:11.636 11:46:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:11.636 11:46:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:11.636 11:46:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:11.636 11:46:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:11.636 11:46:10 -- spdk/autotest.sh@32 -- # uname -s 00:04:11.636 11:46:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:11.636 11:46:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:04:11.636 11:46:10 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.636 11:46:10 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:11.636 11:46:10 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:11.636 11:46:10 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:11.636 11:46:10 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:11.636 11:46:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:04:11.636 11:46:10 -- spdk/autotest.sh@48 -- # udevadm_pid=111679 00:04:11.636 11:46:10 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:04:11.636 11:46:10 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:11.636 11:46:10 -- pm/common@17 -- # local monitor 00:04:11.636 11:46:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.636 11:46:10 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:11.636 11:46:10 -- pm/common@25 -- # sleep 1 00:04:11.636 11:46:10 -- pm/common@21 -- # date +%s 00:04:11.636 11:46:10 -- pm/common@21 -- # date +%s 00:04:11.636 11:46:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721562370 00:04:11.636 11:46:10 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721562370 00:04:11.636 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721562370_collect-vmstat.pm.log 00:04:11.636 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721562370_collect-cpu-load.pm.log 00:04:13.013 11:46:11 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:13.013 11:46:11 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:13.013 11:46:11 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:13.013 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:04:13.013 11:46:11 -- spdk/autotest.sh@59 -- # create_test_list 00:04:13.013 11:46:11 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:13.013 11:46:11 -- common/autotest_common.sh@10 -- # set +x 00:04:13.013 11:46:11 -- 
spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:13.013 11:46:11 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:13.013 11:46:11 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:13.013 11:46:11 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:13.013 11:46:11 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:13.013 11:46:11 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:13.013 11:46:11 -- common/autotest_common.sh@1451 -- # uname 00:04:13.013 11:46:11 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:13.013 11:46:11 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:13.013 11:46:11 -- common/autotest_common.sh@1471 -- # uname 00:04:13.013 11:46:11 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:13.013 11:46:11 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:13.013 11:46:11 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:13.013 11:46:11 -- spdk/autotest.sh@72 -- # hash lcov 00:04:13.013 11:46:11 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:13.013 11:46:11 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:13.013 --rc lcov_branch_coverage=1 00:04:13.013 --rc lcov_function_coverage=1 00:04:13.013 --rc genhtml_branch_coverage=1 00:04:13.013 --rc genhtml_function_coverage=1 00:04:13.013 --rc genhtml_legend=1 00:04:13.013 --rc geninfo_all_blocks=1 00:04:13.013 ' 00:04:13.013 11:46:11 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:13.013 --rc lcov_branch_coverage=1 00:04:13.013 --rc lcov_function_coverage=1 00:04:13.013 --rc genhtml_branch_coverage=1 00:04:13.013 --rc genhtml_function_coverage=1 00:04:13.013 --rc genhtml_legend=1 00:04:13.013 --rc geninfo_all_blocks=1 00:04:13.013 ' 00:04:13.013 11:46:11 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:13.013 --rc lcov_branch_coverage=1 00:04:13.013 --rc lcov_function_coverage=1 00:04:13.013 --rc genhtml_branch_coverage=1 00:04:13.013 --rc genhtml_function_coverage=1 00:04:13.013 --rc genhtml_legend=1 00:04:13.013 --rc geninfo_all_blocks=1 00:04:13.013 --no-external' 00:04:13.013 11:46:11 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:13.013 --rc lcov_branch_coverage=1 00:04:13.013 --rc lcov_function_coverage=1 00:04:13.013 --rc genhtml_branch_coverage=1 00:04:13.013 --rc genhtml_function_coverage=1 00:04:13.013 --rc genhtml_legend=1 00:04:13.013 --rc geninfo_all_blocks=1 00:04:13.013 --no-external' 00:04:13.013 11:46:11 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:13.013 lcov: LCOV version 1.15 00:04:13.013 11:46:11 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:18.280 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:18.280 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 
00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 
00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 
00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:04.948 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:04.948 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 
00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:04.949 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:04.949 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:04.949 11:46:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:04.949 11:46:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:04.949 11:46:59 -- common/autotest_common.sh@10 -- # set +x 00:05:04.949 11:46:59 -- spdk/autotest.sh@91 -- # rm -f 00:05:04.949 11:46:59 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:04.949 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:04.949 11:47:00 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:04.949 11:47:00 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:04.949 11:47:00 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:04.949 11:47:00 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:04.949 11:47:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:04.949 11:47:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:04.949 11:47:00 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:04.949 11:47:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.949 11:47:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:04.949 11:47:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:04.949 11:47:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:04.949 11:47:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:04.949 11:47:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:04.949 11:47:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:04.949 11:47:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:04.949 No valid GPT data, bailing 00:05:04.949 11:47:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:04.949 11:47:00 -- scripts/common.sh@391 -- # pt= 00:05:04.949 11:47:00 -- scripts/common.sh@392 -- # return 1 00:05:04.949 11:47:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:04.949 1+0 records in 00:05:04.949 1+0 records out 00:05:04.949 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00569979 s, 184 MB/s 00:05:04.949 11:47:00 -- spdk/autotest.sh@118 -- # sync 00:05:04.949 11:47:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:04.949 11:47:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:04.949 11:47:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:04.949 
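The pre-cleanup traced above amounts to the pattern below: skip zoned namespaces, then zero the first MiB of any namespace that has no partition table. This is a minimal sketch only; the glob, the blkid probe, and the 1 MiB wipe mirror the trace, but it is not the literal autotest.sh source.

    #!/bin/bash
    # Sketch of the pre-cleanup step seen in the trace (illustrative only).
    shopt -s extglob nullglob
    for dev in /dev/nvme*n!(*p*); do                  # namespaces, not partitions
        name=${dev##*/}
        # A conventional (non-zoned) block device reports "none" here.
        [[ -e /sys/block/$name/queue/zoned && $(< "/sys/block/$name/queue/zoned") != none ]] && continue
        # blkid prints the partition-table type, or nothing for a blank disk.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1   # wipe the first MiB
        fi
    done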
11:47:01 -- spdk/autotest.sh@124 -- # uname -s 00:05:04.949 11:47:01 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:04.949 11:47:01 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:04.949 11:47:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.949 11:47:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.949 11:47:01 -- common/autotest_common.sh@10 -- # set +x 00:05:04.949 ************************************ 00:05:04.949 START TEST setup.sh 00:05:04.949 ************************************ 00:05:04.949 11:47:01 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:04.949 * Looking for test storage... 00:05:04.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:04.949 11:47:01 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:04.949 11:47:01 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:04.949 11:47:01 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:04.949 11:47:01 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.949 11:47:01 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.949 11:47:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:04.949 ************************************ 00:05:04.949 START TEST acl 00:05:04.949 ************************************ 00:05:04.949 11:47:01 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:04.949 * Looking for test storage... 00:05:04.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:04.949 11:47:01 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:05:04.949 11:47:01 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:04.949 11:47:01 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:04.949 11:47:01 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:04.949 11:47:01 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:04.949 11:47:01 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:04.949 11:47:01 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:04.949 11:47:01 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:04.949 11:47:01 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:04.949 11:47:01 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:05:04.949 11:47:01 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:05:04.949 11:47:01 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:05:04.949 11:47:01 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:05:04.949 11:47:01 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:05:04.949 11:47:01 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:04.949 11:47:01 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:05:04.949 11:47:02 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.949 
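The START TEST / END TEST banners and the real/user/sys timings throughout this log come from a run_test-style wrapper. A rough sketch of such a wrapper follows; the function body and banner width are illustrative, not the exact autotest_common.sh implementation.

    # Sketch of a run_test-style wrapper (names and banners are illustrative).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # prints the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    # Invoked as in the trace:
    run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh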
11:47:02 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.949 Hugepages 00:05:04.949 node hugesize free / total 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.949 00:05:04.949 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:04.949 11:47:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:05:04.950 11:47:02 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:05:04.950 11:47:02 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.950 11:47:02 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.950 11:47:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:04.950 ************************************ 00:05:04.950 START TEST denied 00:05:04.950 ************************************ 00:05:04.950 11:47:02 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:05:04.950 11:47:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:04.950 11:47:02 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:04.950 11:47:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:05:04.950 11:47:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.950 11:47:02 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:06.323 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:06.323 11:47:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:06.323 11:47:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:05:06.323 11:47:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:05:06.323 11:47:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:06.323 11:47:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:06.323 11:47:05 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:06.323 11:47:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:06.323 11:47:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:05:06.323 11:47:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.323 11:47:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:06.888 00:05:06.888 real 0m2.460s 00:05:06.888 user 0m0.510s 00:05:06.888 sys 0m1.987s 00:05:06.888 11:47:05 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:06.888 ************************************ 00:05:06.888 END TEST denied 00:05:06.888 ************************************ 00:05:06.888 11:47:05 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:05:06.888 11:47:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:06.888 11:47:05 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:06.888 11:47:05 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.888 11:47:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:06.888 ************************************ 00:05:06.888 START TEST allowed 00:05:06.888 ************************************ 00:05:06.888 11:47:05 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:05:06.888 11:47:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:06.888 11:47:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:05:06.888 11:47:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:06.888 11:47:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.888 11:47:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:08.292 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.292 11:47:07 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:05:08.292 11:47:07 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:05:08.292 11:47:07 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:05:08.292 11:47:07 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.292 11:47:07 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:08.860 00:05:08.860 real 0m1.966s 00:05:08.860 user 0m0.449s 00:05:08.860 sys 0m1.516s 00:05:08.860 ************************************ 00:05:08.860 END TEST allowed 00:05:08.860 ************************************ 00:05:08.860 11:47:07 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.860 11:47:07 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:05:08.860 00:05:08.860 real 0m5.647s 00:05:08.860 user 0m1.681s 00:05:08.860 sys 0m4.064s 00:05:08.860 11:47:07 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.860 ************************************ 00:05:08.860 END TEST acl 00:05:08.860 ************************************ 00:05:08.860 11:47:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:05:08.860 11:47:07 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:08.860 11:47:07 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.860 11:47:07 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.860 11:47:07 setup.sh -- 
common/autotest_common.sh@10 -- # set +x 00:05:08.860 ************************************ 00:05:08.860 START TEST hugepages 00:05:08.860 ************************************ 00:05:08.860 11:47:07 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:08.860 * Looking for test storage... 00:05:08.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.860 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 1478552 kB' 'MemAvailable: 7392204 kB' 'Buffers: 40128 kB' 'Cached: 5952356 kB' 'SwapCached: 0 kB' 'Active: 1538352 kB' 'Inactive: 4572824 kB' 'Active(anon): 1068 kB' 'Inactive(anon): 129288 kB' 'Active(file): 1537284 kB' 'Inactive(file): 4443536 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 624 kB' 'Writeback: 0 kB' 'AnonPages: 148060 kB' 'Mapped: 69528 kB' 'Shmem: 2596 kB' 'KReclaimable: 254348 kB' 'Slab: 325528 kB' 'SReclaimable: 254348 kB' 'SUnreclaim: 71180 kB' 'KernelStack: 4472 kB' 'PageTables: 3820 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4024328 kB' 'Committed_AS: 505284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 
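The long run of [[ ... == Hugepagesize ]] comparisons that follows is setup/common.sh scanning the /proc/meminfo snapshot it just printed, looking for the Hugepagesize field (2048 kB on this runner). A compact equivalent is sketched below; it is not the actual helper.

    # Sketch of a get_meminfo-style field lookup (illustrative only).
    get_meminfo() {                     # usage: get_meminfo Hugepagesize
        local field=$1
        while IFS=': ' read -r var val _; do
            [[ $var == "$field" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # The traced helper can also read /sys/devices/system/node/node<N>/meminfo,
    # stripping the leading "Node <N> " prefix before the same scan.

    get_meminfo Hugepagesize            # -> 2048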
11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable 
== \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.861 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:08.862 11:47:07 setup.sh.hugepages -- 
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:08.862 11:47:07 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:08.862 11:47:07 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:08.862 11:47:07 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.862 11:47:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:08.862 ************************************ 00:05:08.862 START TEST default_setup 00:05:08.862 ************************************ 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:05:08.862 11:47:07 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:09.428 0000:00:03.0 
(1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:09.428 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.000 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3560664 kB' 'MemAvailable: 9474448 kB' 'Buffers: 40128 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4589656 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 146080 kB' 'Active(file): 1537356 kB' 'Inactive(file): 4443576 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 732 kB' 'Writeback: 0 kB' 'AnonPages: 164920 kB' 'Mapped: 68448 kB' 'Shmem: 2596 kB' 'KReclaimable: 254368 kB' 'Slab: 325760 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 71392 kB' 'KernelStack: 4520 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
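The verify_nr_hugepages walk traced here is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "key: value" pair at a time with IFS=': ' and read -r, skipping every key that is not the requested field (AnonHugePages in this pass) via continue, and echoing the value once the key matches. A minimal sketch of the same idea, with a hypothetical helper name and plain /proc/meminfo (no per-node meminfo file), not the verbatim SPDK helper:

    # get_meminfo_field: print the value of one /proc/meminfo field.
    # Illustrative only; simplified from what the trace above suggests.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # skip lines until the requested key is found
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    # usage: anon=$(get_meminfo_field AnonHugePages)   # value in kB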
00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.001 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3560664 kB' 'MemAvailable: 9474448 kB' 'Buffers: 40128 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4589656 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 146080 kB' 'Active(file): 1537356 kB' 'Inactive(file): 4443576 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 732 kB' 'Writeback: 0 kB' 'AnonPages: 164920 kB' 'Mapped: 68448 kB' 'Shmem: 2596 kB' 'KReclaimable: 254368 kB' 'Slab: 325760 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 71392 kB' 'KernelStack: 4520 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.002 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.003 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 
11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
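Earlier in this trace the default_setup test called get_test_nr_hugepages 2097152 0, and with default_hugepages=2048 (the 2048 kB Hugepagesize read from /proc/meminfo) it settled on nr_hugepages=1024 assigned entirely to node 0. That target is consistent with dividing the requested size by the default hugepage size. A hedged sketch of that arithmetic, mirroring the values visible in the trace rather than the exact SPDK implementation:

    # Illustrative arithmetic only, using the numbers seen in this log.
    default_hugepages=2048                        # kB, from Hugepagesize
    size=2097152                                  # size requested by the test
    nr_hugepages=$((size / default_hugepages))    # 2097152 / 2048 = 1024
    declare -a nodes_test
    nodes_test[0]=$nr_hugepages                   # single requested node: node 0
    echo "node0 gets $nr_hugepages hugepages"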
00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3560916 kB' 'MemAvailable: 9474700 kB' 'Buffers: 40128 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4589556 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 145980 kB' 'Active(file): 1537356 kB' 'Inactive(file): 4443576 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 732 kB' 'Writeback: 0 kB' 'AnonPages: 164836 kB' 'Mapped: 68448 kB' 'Shmem: 2596 kB' 'KReclaimable: 254368 kB' 'Slab: 325752 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 71384 kB' 'KernelStack: 4424 kB' 'PageTables: 3368 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 519384 
kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:10.004 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
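Before this verification loop ran, clear_hp (traced near the top of this section) walked every /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages entry, wrote 0 to each, and exported CLEAR_HUGE=yes, so the 1024-page reservation being checked here starts from an empty per-node pool. A minimal sketch of that style of reset, assuming root privileges and the sysfs layout seen in the trace:

    # Sketch of a clear_hp-style reset, as suggested by the trace above.
    # Requires root; writes 0 to every per-node hugepage pool.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes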
00:05:10.004-00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # IFS=': ' / read -r var val _ / continue over the remaining /proc/meminfo fields, Active(anon) through FilePmdMapped (Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped) -- none of which match HugePages_Rsvd
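The block of IFS=': ' / read -r var val _ / continue lines condensed above is one pass of the small key-matching loop in setup/common.sh's get_meminfo helper (common.sh@31-33 in the trace). A minimal standalone sketch of that pattern -- a simplified re-implementation for illustration, not the verbatim SPDK helper -- looks like this:

    # Print the value of one field from a meminfo-style file, the way the
    # trace's IFS=': ' / read -r var val _ / continue loop walks the keys.
    get_meminfo_value() {
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each non-matching key is one 'continue' above
            echo "$val"
            return 0
        done < "$file"
        return 1
    }

Run as get_meminfo_value HugePages_Rsvd it reproduces the "echo 0" the trace reaches just below.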
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -- continue
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -- continue
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:10.006 nr_hugepages=1024
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:10.006 resv_hugepages=0
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:10.006 surplus_hugepages=0
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:10.006 anon_hugepages=0
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:10.006 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # local get=HugePages_Total; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:05:10.007 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3561700 kB' 'MemAvailable: 9475484 kB' 'Buffers: 40128 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4589036 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 145460 kB' 'Active(file): 1537356 kB' 'Inactive(file): 4443576 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 732 kB' 'Writeback: 0 kB' 'AnonPages: 164316 kB' 'Mapped: 68448 kB' 'Shmem: 2596 kB' 'KReclaimable: 254368 kB' 'Slab: 325752 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 71384 kB' 'KernelStack: 4424 kB' 'PageTables: 3628 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
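hugepages.sh@107-109 above is plain accounting against that dump: the HugePages_Total the kernel reports (1024) has to equal the nr_hugepages the test requested plus any surplus and reserved pages. A hedged standalone version of the same check, reusing the illustrative get_meminfo_value helper sketched earlier (not SPDK's own function):

    nr_hugepages=1024                                  # what default_setup requested
    resv=$(get_meminfo_value HugePages_Rsvd)           # 0 in the trace above
    surp=$(get_meminfo_value HugePages_Surp)           # 0 in the trace above
    total=$(get_meminfo_value HugePages_Total)         # 1024 in the dump above
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"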
00:05:10.007-00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read/continue scan of the dump above, field by field from MemTotal through FilePmdMapped, until the HugePages_Total entry is reached
00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27-33 -- # local node; for node in /sys/devices/system/node/node+([0-9]); nodes_sys[${node##*node}]=1024; no_nodes=1; (( no_nodes > 0 ))
00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115-116 -- # for node in "${!nodes_test[@]}"; (( nodes_test[node] += resv ))
00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:10.268 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # local get=HugePages_Surp; local node=0; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node0/meminfo ]]; mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3561960 kB' 'MemUsed: 8681004 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4589036 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 145460 kB' 'Active(file): 1537356 kB' 'Inactive(file): 4443576 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 732 kB' 'Writeback: 0 kB' 'FilePages: 5992596 kB' 'Mapped: 68448 kB' 'AnonPages: 164316 kB' 'Shmem: 2596 kB' 'KernelStack: 4424 kB' 'PageTables: 3368 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254368 kB' 'Slab: 325752 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 71384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
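Because this get_meminfo call was given node 0, common.sh@23-24 switched the input from /proc/meminfo to the node's own sysfs file, whose lines carry a "Node 0" prefix (stripped above by mem=("${mem[@]#Node +([0-9]) }")). A rough standalone equivalent of that per-node lookup -- illustrative only, not the SPDK helper:

    node=0
    node_meminfo=/sys/devices/system/node/node$node/meminfo
    if [[ -e $node_meminfo ]]; then
        # per-node lines look like "Node 0 HugePages_Surp: 0"
        awk '$3 == "HugePages_Surp:" {print $4}' "$node_meminfo"
    else
        awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo
    fi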
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read/continue scan of the node0 dump above, field by field from MemTotal through HugePages_Total, none of which match HugePages_Surp
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -- continue
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126-127 -- # for node in "${!nodes_test[@]}"; sorted_t[nodes_test[node]]=1; sorted_s[nodes_sys[node]]=1
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:10.269 node0=1024 expecting 1024
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:10.269 real	0m1.184s
00:05:10.269 user	0m0.339s
00:05:10.269 sys	0m0.822s
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:10.269 11:47:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:10.269 ************************************
00:05:10.269 END TEST default_setup
00:05:10.269 ************************************
00:05:10.269 11:47:08 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:10.269 11:47:08 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:10.269 11:47:08 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:10.269 11:47:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:10.269 ************************************
00:05:10.269 START TEST per_node_1G_alloc
00:05:10.269 ************************************
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49-55 -- # local size=1048576; (( 2 > 1 )); shift; node_ids=('0'); local node_ids; (( size >= default_hugepages ))
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62-73 -- # user_nodes=('0'); local user_nodes; local _nr_hugepages=512; local _no_nodes=1; nodes_test=(); local -g nodes_test; (( 1 > 0 )); for _no_nodes in "${user_nodes[@]}"; nodes_test[_no_nodes]=512; return 0
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 HUGENODE=0 setup output
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:10.269 11:47:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:10.528 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:05:10.528 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
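The NRHUGE=512 HUGENODE=0 invocation above delegates the actual page reservation to scripts/setup.sh. The kernel-side knob that a per-node request of this kind ultimately lands on is the node's sysfs hugepages counter; a hand-rolled sketch of asking node 0 for 512 default-size (2048 kB) pages directly -- an illustrative assumption about the mechanism, not what setup.sh literally runs -- would be:

    NRHUGE=512 HUGENODE=0
    page_kb=2048   # Hugepagesize reported in the meminfo dumps above
    sysfs=/sys/devices/system/node/node$HUGENODE/hugepages/hugepages-${page_kb}kB/nr_hugepages
    echo "$NRHUGE" | sudo tee "$sysfs"      # request 512 pages on node 0
    cat "$sysfs"                            # re-read to see how many the kernel granted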
00:05:11.098 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:11.098 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:11.098 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89-94 -- # local node; local sorted_t; local sorted_s; local surp; local resv; local anon
00:05:11.098 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:11.098 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:11.099 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-31 -- # local get=AnonHugePages; local node=; local var val; local mem_f mem; mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '
00:05:11.099 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4616000 kB' 'MemAvailable: 10529788 kB' 'Buffers: 40128 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4589252 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 145672 kB' 'Active(file): 1537356 kB' 'Inactive(file): 4443580 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 744 kB' 'Writeback: 0 kB' 'AnonPages: 164396 kB' 'Mapped: 68300 kB' 'Shmem: 2596 kB' 'KReclaimable: 254368 kB' 'Slab: 325296 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 70928 kB' 'KernelStack: 4428 kB' 'PageTables: 4104 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 521484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
00:05:11.099 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
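The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test a few lines up shows verify_nr_hugepages first checking that transparent hugepages are not set to [never] and then sampling AnonHugePages. A small standalone sketch of the same pair of steps (illustrative, not the SPDK function itself; the file path is the standard kernel THP knob that produces the "always [madvise] never" string seen in the trace):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *'[never]'* ]]; then
        anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon_kb} kB"
    fi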
00:05:11.099-00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # read/continue scan of the dump above, field by field from MemTotal through Bounce, none of which match AnonHugePages
setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4616244 kB' 'MemAvailable: 10530032 kB' 'Buffers: 40128 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538420 kB' 'Inactive: 4589200 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 145624 kB' 'Active(file): 1537360 kB' 'Inactive(file): 4443576 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 748 kB' 'Writeback: 0 kB' 'AnonPages: 164308 kB' 'Mapped: 68260 kB' 'Shmem: 2596 kB' 'KReclaimable: 254368 kB' 'Slab: 325296 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 70928 kB' 'KernelStack: 4356 kB' 'PageTables: 3680 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19460 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.100 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
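The same field-by-field scan continues below for HugePages_Surp and then HugePages_Rsvd, after which hugepages.sh echoes nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and asserts (hugepages.sh@107 and @109 in the trace) that the configured count matches what the kernel reports. Roughly, using the hypothetical meminfo_lookup sketch above and the values reported in this run:

    nr_hugepages=$(meminfo_lookup HugePages_Total)   # 512 in this run
    surp=$(meminfo_lookup HugePages_Surp)            # 0
    resv=$(meminfo_lookup HugePages_Rsvd)            # 0
    anon=$(meminfo_lookup AnonHugePages)             # 0 (kB)
    (( 512 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2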
00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.101 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4616020 kB' 'MemAvailable: 10529808 kB' 'Buffers: 40128 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538404 kB' 'Inactive: 4589184 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145608 kB' 'Active(file): 1537360 kB' 'Inactive(file): 4443576 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 748 kB' 'Writeback: 0 kB' 'AnonPages: 164244 kB' 'Mapped: 68188 kB' 'Shmem: 2596 kB' 'KReclaimable: 254368 kB' 'Slab: 325392 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 4400 kB' 'PageTables: 3608 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 
'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19476 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.102 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:11.103 nr_hugepages=512 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:11.103 resv_hugepages=0 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:11.103 surplus_hugepages=0 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:11.103 anon_hugepages=0 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.103 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.103 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4616020 kB' 'MemAvailable: 10529808 kB' 'Buffers: 40128 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538404 kB' 'Inactive: 4589184 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145608 kB' 'Active(file): 1537360 kB' 'Inactive(file): 4443576 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 748 kB' 'Writeback: 0 kB' 'AnonPages: 164244 kB' 'Mapped: 68188 kB' 'Shmem: 2596 kB' 'KReclaimable: 254368 kB' 'Slab: 325392 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 71024 kB' 'KernelStack: 4468 kB' 'PageTables: 3608 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19492 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.104 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 
11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:11.105 11:47:09 
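The trace that follows repeats the same lookup per NUMA node: when a node number is passed, the script switches mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0" prefix before parsing each "field: value" pair with IFS=': '. A simplified standalone sketch of that pattern, using sed in place of the extglob prefix strip seen in the trace; node_meminfo_field is an illustrative name, not the repo's get_meminfo:

    #!/usr/bin/env bash
    # Read one field from a node-local meminfo, falling back to /proc/meminfo.
    node_meminfo_field() {
        local node=$1 get=$2 mem_f=/proc/meminfo var val _
        [[ -e /sys/devices/system/node/node${node}/meminfo ]] && \
            mem_f=/sys/devices/system/node/node${node}/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    node_meminfo_field 0 HugePages_Surp   # prints 0 in this run
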
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4616020 kB' 'MemUsed: 7626944 kB' 'SwapCached: 0 kB' 'Active: 1538404 kB' 'Inactive: 4589040 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145464 kB' 'Active(file): 1537360 kB' 'Inactive(file): 4443576 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 748 kB' 'Writeback: 0 kB' 'FilePages: 5992596 kB' 'Mapped: 68188 kB' 'AnonPages: 164064 kB' 'Shmem: 2596 kB' 'KernelStack: 4420 kB' 'PageTables: 3748 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254368 kB' 'Slab: 325392 kB' 'SReclaimable: 254368 kB' 'SUnreclaim: 71024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.105 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:11.106 node0=512 expecting 512 00:05:11.106 11:47:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:11.106 00:05:11.106 real 0m0.924s 00:05:11.106 user 0m0.279s 00:05:11.106 sys 0m0.685s 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.106 11:47:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:11.106 ************************************ 00:05:11.106 END TEST per_node_1G_alloc 00:05:11.106 ************************************ 00:05:11.106 11:47:09 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:11.106 11:47:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:11.106 11:47:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.106 11:47:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:11.106 ************************************ 00:05:11.106 START TEST even_2G_alloc 00:05:11.106 ************************************ 00:05:11.106 11:47:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:05:11.106 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:11.106 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:11.106 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup 
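Here the even_2G_alloc test derives its page count: the requested size of 2097152 (kB, i.e. 2 GiB, matching the later "Hugetlb: 2097152 kB" readout) divided by the 2048 kB default hugepage size gives 1024 pages, which on this single-node VM all land on node 0 before setup.sh is re-run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A rough sketch of that sizing step; the variable names are taken from the trace, but this is a simplified reconstruction rather than the repo's get_test_nr_hugepages:

    #!/usr/bin/env bash
    # Sizing arithmetic mirrored from the trace: a request in kB divided by
    # the default hugepage size (also in kB) gives the page count.
    default_hugepages=2048                        # kB, matches "Hugepagesize: 2048 kB"
    size=2097152                                  # kB, the even_2G_alloc request (2 GiB)
    nr_hugepages=$(( size / default_hugepages ))  # -> 1024

    # This VM exposes a single NUMA node, so the whole count is assigned to node 0.
    no_nodes=1
    declare -a nodes_test
    nodes_test[no_nodes - 1]=$nr_hugepages
    echo "node0=${nodes_test[0]}"                 # prints node0=1024
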
output 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.107 11:47:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:11.364 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:11.622 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.193 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3565188 kB' 'MemAvailable: 9479008 kB' 'Buffers: 40136 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538408 kB' 'Inactive: 4589404 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145832 kB' 'Active(file): 1537364 kB' 'Inactive(file): 4443572 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 864 kB' 'Writeback: 0 kB' 'AnonPages: 164784 kB' 'Mapped: 68408 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325472 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71072 kB' 'KernelStack: 4572 kB' 'PageTables: 4140 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.194 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3565188 kB' 'MemAvailable: 9479008 kB' 'Buffers: 40136 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4589380 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 145808 kB' 'Active(file): 1537364 kB' 'Inactive(file): 4443572 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 164788 kB' 'Mapped: 68228 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325368 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70968 kB' 'KernelStack: 4492 kB' 'PageTables: 3960 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.195 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.196 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3565448 kB' 'MemAvailable: 9479268 kB' 'Buffers: 40136 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4589120 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 145548 kB' 'Active(file): 1537364 kB' 'Inactive(file): 4443572 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 
kB' 'AnonPages: 164528 kB' 'Mapped: 68228 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325368 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70968 kB' 'KernelStack: 4492 kB' 'PageTables: 3960 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.197 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.198 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:12.199 nr_hugepages=1024 00:05:12.199 resv_hugepages=0 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:12.199 surplus_hugepages=0 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:12.199 anon_hugepages=0 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 
11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3566228 kB' 'MemAvailable: 9480056 kB' 'Buffers: 40136 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4589084 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145512 kB' 'Active(file): 1537372 kB' 'Inactive(file): 4443572 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'AnonPages: 164164 kB' 'Mapped: 68228 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325392 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70992 kB' 'KernelStack: 4464 kB' 'PageTables: 3480 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 519384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19508 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.199 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.199 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.200 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3566488 kB' 'MemUsed: 8676476 kB' 'SwapCached: 0 kB' 'Active: 1538416 kB' 'Inactive: 4588824 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 145252 kB' 'Active(file): 1537372 kB' 'Inactive(file): 4443572 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 644 kB' 'Writeback: 0 kB' 'FilePages: 5992604 kB' 'Mapped: 68228 kB' 'AnonPages: 163904 kB' 'Shmem: 2596 kB' 'KernelStack: 4396 kB' 'PageTables: 3480 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254400 kB' 'Slab: 325392 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.201 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:12.202 node0=1024 expecting 1024 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:12.202 00:05:12.202 real 0m0.955s 00:05:12.202 user 0m0.252s 00:05:12.202 sys 0m0.743s 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:12.202 11:47:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:12.202 ************************************ 00:05:12.202 END TEST even_2G_alloc 00:05:12.202 ************************************ 00:05:12.202 11:47:10 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:12.202 11:47:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:12.202 11:47:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:12.202 11:47:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:12.202 ************************************ 00:05:12.202 START TEST odd_alloc 00:05:12.202 ************************************ 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:12.202 11:47:10 
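The repeated IFS/read/continue lines above are the xtrace of setup/common.sh's get_meminfo walking every /proc/meminfo (or per-node meminfo) key until it reaches the one it was asked for, then echoing that value. Reconstructed from the trace (the function and variable names follow the trace, but the real source may differ in detail), the helper looks roughly like this:

#!/usr/bin/env bash
# Sketch of get_meminfo as it appears in the setup/common.sh xtrace above.
shopt -s extglob   # needed for the +([0-9]) patterns visible in the trace

get_meminfo() {
    local get=$1        # key to look up, e.g. HugePages_Total or HugePages_Surp
    local node=${2:-}   # optional NUMA node number
    local var val _
    local mem_f=/proc/meminfo mem

    # Per-node queries read that node's own meminfo file when it exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    # Split each line on ': ' and skip ("continue") until the requested key
    # matches, then print its value (a kB figure or a bare page count).
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}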
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.202 11:47:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:12.461 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:12.461 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:13.405 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:13.405 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:05:13.405 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:13.405 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.406 
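A side note on the odd_alloc sizing traced above: get_test_nr_hugepages is called with 2098176 kB, which with the 2048 kB default hugepage size deliberately falls between 1024 and 1025 pages, so the test ends up with the odd count nr_hugepages=1025 and HUGEMEM=2049 (MB). The exact rounding done by hugepages.sh is not visible in the trace, only its result; a quick back-of-the-envelope check of those figures (my arithmetic, not output from the log):

# Back-of-the-envelope check of the odd_alloc sizing seen in the trace.
size_kb=2098176     # argument to get_test_nr_hugepages
hugepage_kb=2048    # 'Hugepagesize: 2048 kB' from the meminfo dumps
echo $(( size_kb / hugepage_kb ))                      # 1024 (integer division rounds down)
echo $(( (size_kb + hugepage_kb - 1) / hugepage_kb ))  # 1025 = nr_hugepages in the trace
echo $(( 1025 * hugepage_kb ))                         # 2099200, matches 'Hugetlb: 2099200 kB'
echo $(( size_kb / 1024 ))                             # 2049, matches HUGEMEM=2049 (MB)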
11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3567744 kB' 'MemAvailable: 9481572 kB' 'Buffers: 40136 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538472 kB' 'Inactive: 4584588 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 141056 kB' 'Active(file): 1537412 kB' 'Inactive(file): 4443532 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 112 kB' 'Writeback: 0 kB' 'AnonPages: 159696 kB' 'Mapped: 67328 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325408 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71008 kB' 'KernelStack: 4264 kB' 'PageTables: 3036 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.406 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc 
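The odd_alloc verification traced here follows the same verify_nr_hugepages flow as the even_2G_alloc run above: the transparent-hugepage setting is checked, AnonHugePages, HugePages_Surp and HugePages_Rsvd are read through get_meminfo, the system-wide total is compared against the requested count, and each NUMA node is accounted for. Pieced together from the hugepages.sh line numbers in the trace, the flow is roughly as follows; the sysfs path for the THP check and the exact per-node comparisons are my assumptions, since the log only shows the resulting strings and numbers:

# Rough reconstruction of the verify_nr_hugepages flow seen in the trace.
# Assumes nr_hugepages, nodes_test and nodes_sys were set up by the calling
# test, and that the string "always [madvise] never" in the trace comes from
# /sys/kernel/mm/transparent_hugepage/enabled (an assumption).
verify_nr_hugepages() {
    local node surp resv anon

    # When THP is not pinned to [never], anonymous hugepage usage must be zero.
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)
        (( anon == 0 )) || return 1
    fi

    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)

    # System-wide: allocated total must equal requested + surplus + reserved.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

    # Per node: fold surplus/reserved into the expected count, then compare
    # against what the kernel actually placed on that node
    # ("node0=1024 expecting 1024" in the even_2G_alloc output above).
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]] || return 1
    done
}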
-- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3567744 kB' 'MemAvailable: 9481572 kB' 'Buffers: 40136 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538476 kB' 'Inactive: 4584528 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 141000 kB' 'Active(file): 1537416 kB' 'Inactive(file): 4443528 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 168 kB' 'Writeback: 0 kB' 'AnonPages: 159600 kB' 'Mapped: 67124 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325408 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71008 kB' 'KernelStack: 4280 kB' 'PageTables: 3108 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.407 11:47:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.408 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3567520 kB' 'MemAvailable: 9481348 kB' 'Buffers: 40136 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538460 kB' 'Inactive: 4584296 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 140768 kB' 'Active(file): 1537416 kB' 'Inactive(file): 4443528 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 168 kB' 'Writeback: 0 kB' 'AnonPages: 159384 kB' 'Mapped: 67124 kB' 'Shmem: 2596 kB' 
'KReclaimable: 254400 kB' 'Slab: 325416 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71016 kB' 'KernelStack: 4244 kB' 'PageTables: 3156 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.409 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:13.410 nr_hugepages=1025 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:13.410 resv_hugepages=0 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:13.410 surplus_hugepages=0 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:13.410 anon_hugepages=0 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.410 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3567736 kB' 'MemAvailable: 9481564 kB' 'Buffers: 40136 kB' 'Cached: 5952468 kB' 'SwapCached: 0 kB' 'Active: 1538452 kB' 'Inactive: 4584496 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 140968 kB' 'Active(file): 1537416 kB' 'Inactive(file): 4443528 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 168 kB' 'Writeback: 0 kB' 'AnonPages: 159548 kB' 'Mapped: 67164 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325412 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71012 kB' 'KernelStack: 4304 kB' 'PageTables: 3056 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5071880 kB' 
'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.411 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.412 11:47:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@112 -- # get_nodes 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3568492 kB' 'MemUsed: 8674472 kB' 'SwapCached: 0 kB' 'Active: 1538452 kB' 'Inactive: 4584340 kB' 'Active(anon): 1036 kB' 'Inactive(anon): 140812 kB' 'Active(file): 1537416 kB' 'Inactive(file): 4443528 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 168 kB' 'Writeback: 0 kB' 'FilePages: 5992604 kB' 'Mapped: 67164 kB' 'AnonPages: 159388 kB' 'Shmem: 2596 kB' 'KernelStack: 4224 kB' 'PageTables: 3092 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254400 kB' 'Slab: 325412 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.412 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:13.413 node0=1025 expecting 1025 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:13.413 00:05:13.413 real 0m1.226s 00:05:13.413 user 0m0.316s 00:05:13.413 sys 0m0.944s 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.413 11:47:12 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:13.413 ************************************ 
00:05:13.413 END TEST odd_alloc 00:05:13.413 ************************************ 00:05:13.413 11:47:12 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:13.413 11:47:12 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:13.413 11:47:12 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.413 11:47:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:13.413 ************************************ 00:05:13.413 START TEST custom_alloc 00:05:13.413 ************************************ 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:13.413 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:05:13.414 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:13.414 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:05:13.414 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.414 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:13.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:13.673 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4623684 kB' 'MemAvailable: 10537516 kB' 'Buffers: 40136 kB' 'Cached: 5952472 kB' 'SwapCached: 0 kB' 'Active: 1538476 kB' 'Inactive: 4584616 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 141084 kB' 'Active(file): 1537416 kB' 'Inactive(file): 4443532 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 172 kB' 'Writeback: 0 kB' 'AnonPages: 159656 kB' 'Mapped: 67160 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325148 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70748 kB' 'KernelStack: 4416 kB' 'PageTables: 3204 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 505632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
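The custom_alloc setup traced above (hugepages.sh@174-@187) turns the requested pool size into a per-node hugepage count and packs it into HUGENODE before re-running scripts/setup.sh. A minimal sketch of that bookkeeping, assuming kB units and using illustrative variable names where the trace does not show the script source:

  # Sketch only: approximates what hugepages.sh@174-@187 appears to do in the trace above.
  size=1048576                                   # requested pool size, assumed kB
  default_hugepages=2048                         # Hugepagesize reported in the meminfo dumps, kB
  nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512 pages
  declare -a nodes_hp HUGENODE
  nodes_hp[0]=$nr_hugepages                      # single NUMA node in this VM, so all 512 pages land on node 0
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
  done
  echo "HUGENODE=${HUGENODE[*]}"                 # -> HUGENODE=nodes_hp[0]=512, as @187 echoes above

The trace then exports HUGENODE='nodes_hp[0]=512' and re-runs /home/vagrant/spdk_repo/spdk/scripts/setup.sh with that layout, which is what produces the PCI binding lines that follow.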
00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.248 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # 
local var val 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4623684 kB' 'MemAvailable: 10537516 kB' 'Buffers: 40136 kB' 'Cached: 5952472 kB' 'SwapCached: 0 kB' 'Active: 1538468 kB' 'Inactive: 4584796 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141264 kB' 'Active(file): 1537416 kB' 'Inactive(file): 4443532 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 172 kB' 'Writeback: 0 kB' 'AnonPages: 159796 kB' 'Mapped: 67160 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325148 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70748 kB' 'KernelStack: 4384 kB' 'PageTables: 3116 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 505632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.249 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
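The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] checks above and below are get_meminfo scanning every /proc/meminfo line under xtrace while looking for a single field, which is why each key produces a trace entry. A minimal stand-alone sketch of that lookup (per-node "Node N" prefix handling and the script's mapfile plumbing omitted):

  # Sketch: print the numeric value of one /proc/meminfo field, or 0 if it is absent.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done < /proc/meminfo
      echo 0
  }

  get_meminfo HugePages_Surp   # -> 0 in the dumps above, so the trace sets surp=0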
00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # 
local var val 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4623684 kB' 'MemAvailable: 10537580 kB' 'Buffers: 40136 kB' 'Cached: 5952528 kB' 'SwapCached: 0 kB' 'Active: 1538468 kB' 'Inactive: 4584636 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141040 kB' 'Active(file): 1537416 kB' 'Inactive(file): 4443596 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 159624 kB' 'Mapped: 67164 kB' 'Shmem: 2588 kB' 'KReclaimable: 254400 kB' 'Slab: 325148 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70748 kB' 'KernelStack: 4364 kB' 'PageTables: 3260 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 505632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19340 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.250 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:12 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.251 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:14.252 nr_hugepages=512 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:14.252 resv_hugepages=0 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:14.252 surplus_hugepages=0 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:14.252 anon_hugepages=0 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:14.252 
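At this point the trace has derived resv=0 and is asserting that the 512 pages requested for this test equal the kernel's HugePages_Total once surplus and reserved pages are counted. A minimal stand-alone sketch of that accounting check follows; it reads the same HugePages_* fields straight from /proc/meminfo, but the awk-based lookup and the hard-coded 512-page target are assumptions for illustration, not the repo's get_meminfo helper.

# Minimal sketch (not the repo's helper): re-derive the accounting that
# hugepages.sh asserts above, using only /proc/meminfo. The 512-page
# target is taken from this run; the awk lookup is an assumption.
get_hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr=$(get_hp HugePages_Total)
surp=$(get_hp HugePages_Surp)
resv=$(get_hp HugePages_Rsvd)

# Passes when the configured page count matches the kernel's total,
# with surplus and reserved pages counted explicitly (both 0 here).
(( 512 == nr + surp + resv )) && echo 'hugepage accounting consistent'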
11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.252 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4623684 kB' 'MemAvailable: 10537576 kB' 'Buffers: 40136 kB' 'Cached: 5952524 kB' 'SwapCached: 0 kB' 'Active: 1538468 kB' 'Inactive: 4584440 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 140848 kB' 'Active(file): 1537416 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'AnonPages: 159416 kB' 'Mapped: 67164 kB' 'Shmem: 2588 kB' 'KReclaimable: 254400 kB' 'Slab: 325148 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70748 kB' 'KernelStack: 4304 kB' 'PageTables: 2968 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5597192 kB' 'Committed_AS: 505632 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19356 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.253 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
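The scan running through these lines is the same parse loop each time: every /proc/meminfo line is split on ': ', the field name is compared against the key being looked up (here HugePages_Total), and the heavily back-slashed pattern on the right of == only serves to force a literal, non-glob match. A condensed sketch of that loop, assuming the plain /proc/meminfo path used in this run:

# Condensed sketch of the per-field scan traced above (not verbatim from
# setup/common.sh): split each meminfo line on ': ' and keep the value of
# the requested key. Quoting "$get" gives the same literal match that the
# escaped pattern in the trace achieves.
get=HugePages_Total
while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue
    echo "$val"
    break
done < /proc/meminfo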
00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.254 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:14.255 11:47:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 4623684 kB' 'MemUsed: 7619280 kB' 'SwapCached: 0 kB' 'Active: 1538468 kB' 'Inactive: 4584180 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 140588 kB' 'Active(file): 1537416 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 312 kB' 'Writeback: 0 kB' 'FilePages: 5992660 kB' 'Mapped: 67164 kB' 'AnonPages: 159416 kB' 'Shmem: 2588 kB' 'KernelStack: 4372 kB' 'PageTables: 3228 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254400 kB' 'Slab: 325148 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70748 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.255 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 
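Just above, get_nodes found a single NUMA node, recorded 512 pages for it, and the trace switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo to read HugePages_Surp for node 0. A sketch of that per-node lookup; the node id 0 comes from this run, and the sed-based prefix strip is an illustrative stand-in for the parameter expansion the script uses.

# Sketch of the per-node variant traced above: when a node id is given,
# the same fields are read from the sysfs copy of meminfo. Per-node files
# prefix every line with "Node <id> ", which has to be stripped first.
node=0
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node${node}/meminfo ]]; then
    mem_f=/sys/devices/system/node/node${node}/meminfo
fi
sed "s/^Node ${node} //" "$mem_f" | awk '$1 == "HugePages_Surp:" {print $2}'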
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.256 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:14.257 node0=512 expecting 512 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:14.257 00:05:14.257 real 0m0.846s 00:05:14.257 user 0m0.306s 00:05:14.257 sys 0m0.576s 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.257 ************************************ 00:05:14.257 END TEST custom_alloc 00:05:14.257 11:47:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:14.257 ************************************ 00:05:14.257 11:47:13 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:14.257 11:47:13 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:14.257 11:47:13 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.257 11:47:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:14.529 ************************************ 00:05:14.529 START TEST 
no_shrink_alloc 00:05:14.529 ************************************ 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.529 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:14.795 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:14.795 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never 
!= *\[\n\e\v\e\r\]* ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3571124 kB' 'MemAvailable: 9485020 kB' 'Buffers: 40136 kB' 'Cached: 5952536 kB' 'SwapCached: 0 kB' 'Active: 1538472 kB' 'Inactive: 4584684 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141092 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 159724 kB' 'Mapped: 67568 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325072 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70672 kB' 'KernelStack: 4352 kB' 'PageTables: 3216 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.364 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.365 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3571124 kB' 'MemAvailable: 9485020 kB' 'Buffers: 40136 kB' 'Cached: 5952536 kB' 'SwapCached: 0 kB' 'Active: 1538472 kB' 'Inactive: 4584580 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 140988 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 159564 kB' 'Mapped: 67568 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325072 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70672 kB' 'KernelStack: 4320 kB' 'PageTables: 3136 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 
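The trace above is setup/common.sh's get_meminfo helper walking every field of /proc/meminfo: AnonHugePages is looked up first (anon=0), and the same scan is then repeated for HugePages_Surp, with each non-matching key falling through to `continue` and the matching key's value echoed. A minimal self-contained sketch of that lookup pattern, reconstructed from the trace rather than copied from the SPDK script (the name get_meminfo_sketch and the here-string loop are illustrative assumptions), looks like this:

shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=${2:-}          # field name, optional NUMA node index
    local var val _ line
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node counters come from sysfs when a node index is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on per-node files

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # skip every field except the requested one
        echo "$val"                        # kB for sizes, a bare count for HugePages_*
        return 0
    done
    return 1
}

On the host dumped above, get_meminfo_sketch HugePages_Surp would print 0 and get_meminfo_sketch MemTotal would print 12242964.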
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.366 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 
11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3571364 kB' 'MemAvailable: 9485264 kB' 'Buffers: 40136 kB' 'Cached: 5952540 kB' 'SwapCached: 0 kB' 'Active: 1538480 kB' 'Inactive: 4584776 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 141180 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443596 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 159572 kB' 'Mapped: 67528 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325256 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70856 kB' 'KernelStack: 4300 kB' 'PageTables: 3108 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19396 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 
kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.367 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 
11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.368 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:15.369 nr_hugepages=1024 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:15.369 resv_hugepages=0 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:15.369 surplus_hugepages=0 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:15.369 anon_hugepages=0 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3571604 
kB' 'MemAvailable: 9485504 kB' 'Buffers: 40136 kB' 'Cached: 5952540 kB' 'SwapCached: 0 kB' 'Active: 1538480 kB' 'Inactive: 4584556 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 140960 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443596 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 159612 kB' 'Mapped: 67528 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325160 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70760 kB' 'KernelStack: 4304 kB' 'PageTables: 3204 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.369 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.369 
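At this point the test has resolved anon=0, surp=0 and resv=0, echoed nr_hugepages=1024, and run the two arithmetic guards at hugepages.sh@107 and @109: (( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages )). A short standalone sketch of that invariant follows; variable names mirror the echoed output, and the script itself is illustrative rather than the SPDK source:

#!/usr/bin/env bash
# Hugepage pool accounting check mirroring the trace above.
nr_hugepages=1024   # requested pool size (HugePages_Total)
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
anon=0              # AnonHugePages (THP usage), expected to stay 0 in this test

if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
    echo "pool is exactly the requested 1024 pages, with no surplus or reserved pages"
else
    echo "unexpected hugepage accounting: total=$nr_hugepages surp=$surp resv=$resv anon=$anon" >&2
    exit 1
fi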
11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
(the IFS=': ' / read -r var val _ / [[ $var == HugePages_Total ]] / continue cycle repeats against every remaining field of the dump above, Active through ShmemHugePages)
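What this stretch of trace is exercising is the get_meminfo helper from setup/common.sh: dump a meminfo file, then walk it field by field until the requested key turns up. A minimal sketch of that helper, reconstructed only from the @16-@33 entries visible in this log, is below; the names get, node, var, val, mem_f and mem are taken from the trace, while the exact control flow and the example calls at the end are assumptions and may differ from the real script.

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) patterns seen in the trace

    get_meminfo() {
        local get=$1        # field to report, e.g. HugePages_Total
        local node=${2:-}   # optional NUMA node number; empty means system-wide
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # @22-@24: switch to the per-node meminfo file when it exists; with node
        # empty, node/meminfo does not exist and /proc/meminfo is kept (@25).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"             # @28
        mem=("${mem[@]#Node +([0-9]) }")      # @29: strip the "Node N " prefix

        # @31-@33: scan field by field; print the value of the first match.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")   # @16
        return 1
    }

    get_meminfo HugePages_Total      # e.g. 1024 on this host
    get_meminfo HugePages_Surp 0     # per-node form, as called at hugepages.sh@117

The linear scan is why the log shows one compare-and-continue pair per meminfo field: nothing is cached, and every lookup re-dumps and re-walks the whole file.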
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3571856 kB' 'MemUsed: 8671108 kB' 'SwapCached: 0 kB' 'Active: 1538480 kB' 'Inactive: 4584696 kB' 'Active(anon): 1060 kB' 'Inactive(anon): 141100 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443596 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 480 kB' 'Writeback: 0 kB' 'FilePages: 5992676 kB' 'Mapped: 67524 kB' 'AnonPages: 159520 kB' 'Shmem: 2596 kB' 'KernelStack: 4316 kB' 'PageTables: 3196 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254400 kB' 'Slab: 325160 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 70760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.370 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.370 
11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
(the same field-by-field compare/continue cycle runs against the node0 dump above, Active(anon) through FileHugePages, this time looking for HugePages_Surp)
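This per-node lookup feeds the accounting in setup/hugepages.sh (@110-@130, visible just above and in the trace that follows), which is what eventually prints the "node0=1024 expecting 1024" line. A rough sketch of that accounting is below; nodes_sys, nodes_test, resv and the hugepages.sh line references come from the trace, while the check_nodes wrapper name, the way nodes_test is seeded and the overall control flow are assumptions (the real script derives nodes_test from how the pages were distributed), and get_meminfo is the helper sketched earlier.

    #!/usr/bin/env bash
    shopt -s extglob
    # assumes get_meminfo from the sketch above is defined in the same shell

    check_nodes() {
        local node
        local -a nodes_sys nodes_test
        local nr_hugepages=1024 surp=0 resv=0

        # @110: the global HugePages_Total has to match what was requested.
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || return 1

        # @27-@33 (get_nodes): record the expected allocation for every NUMA node.
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=$nr_hugepages
            nodes_test[${node##*node}]=$nr_hugepages
        done

        # @115-@117: fold reserved pages and each node's surplus into the test count.
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))
            (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
        done

        # @126-@130: report and compare per node.
        for node in "${!nodes_test[@]}"; do
            echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
            [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]] || return 1
        done
    }

On this single-node VM the trace sets no_nodes=1, so the loops run once and only node0 shows up in the output further down.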
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.371 node0=1024 expecting 1024 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.371 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:15.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:15.631 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:15.631 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:15.631 11:47:14 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3567440 kB' 'MemAvailable: 9481336 kB' 'Buffers: 40136 kB' 'Cached: 5952536 kB' 'SwapCached: 0 kB' 'Active: 1538472 kB' 'Inactive: 4585636 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 142044 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 160428 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325748 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71348 kB' 'KernelStack: 4408 kB' 'PageTables: 3672 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:15.631 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
(the compare/continue cycle runs against the dump above, MemAvailable through VmallocChunk, this time looking for AnonHugePages)
00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.632 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3567932 kB' 'MemAvailable: 9481828 kB' 'Buffers: 40136 kB' 'Cached: 5952536 kB' 'SwapCached: 0 kB' 'Active: 1538472 kB' 'Inactive: 4585144 kB' 'Active(anon): 1052 kB' 'Inactive(anon): 141552 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 159920 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325748 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71348 kB' 'KernelStack: 4348 kB' 'PageTables: 3644 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB' 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.633 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
(the compare/continue cycle runs against the dump above, Active(anon) through VmallocTotal, again looking for HugePages_Surp)
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3568424 kB' 'MemAvailable: 9482320 kB' 'Buffers: 40136 kB' 'Cached: 5952536 kB' 'SwapCached: 0 kB' 'Active: 1538464 kB' 'Inactive: 4584656 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141064 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 159876 kB' 
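For readers following the trace: get_meminfo in setup/common.sh walks a meminfo-style file with IFS=': ' and read -r var val _, skips every key that is not the one requested (each skip shows up above as one compare/continue pair), and echoes the value of the matching key; setup/hugepages.sh then captures that value, here as surp=0. A minimal stand-alone sketch of the same lookup pattern in bash (the function name meminfo_lookup and its argument handling are illustrative, not the exact setup/common.sh code):

  #!/usr/bin/env bash
  # Sketch: resolve one field from a meminfo-style file the way the traced
  # loop does -- split on ': ', skip non-matching keys, echo the value.
  meminfo_lookup() {
      local key=$1                      # e.g. HugePages_Surp
      local file=${2:-/proc/meminfo}    # system-wide file by default
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] || continue   # each skipped field is one compare+continue in the xtrace
          echo "$val"
          return 0
      done < "$file"
      return 1                          # requested key not present
  }

  meminfo_lookup HugePages_Surp    # prints 0 on the machine traced above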
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3568424 kB' 'MemAvailable: 9482320 kB' 'Buffers: 40136 kB' 'Cached: 5952536 kB' 'SwapCached: 0 kB' 'Active: 1538464 kB' 'Inactive: 4584656 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 141064 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 159876 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325700 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71300 kB' 'KernelStack: 4300 kB' 'PageTables: 3516 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19412 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:15.634 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... compare/continue trace repeats for every remaining /proc/meminfo field from MemFree through HugePages_Free; none match HugePages_Rsvd ...]
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:15.896 nr_hugepages=1024
11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:15.896 resv_hugepages=0
11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:15.896 surplus_hugepages=0
11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:15.896 anon_hugepages=0
11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
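The assignments and arithmetic tests just above are the consistency check this no_shrink_alloc case is driving at: HugePages_Rsvd and HugePages_Surp read back from /proc/meminfo must still be 0, and HugePages_Total must still equal the 1024 pages the test requested. A hedged sketch of that check (awk here is only a stand-in for the traced lookup; variable names mirror the trace):

  # Sketch of the accounting check performed by the traced hugepages.sh logic.
  nr_hugepages=1024                                             # pages the test requested
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)     # 0 in the trace above
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)     # 0 in the trace above
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024 in the trace above

  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"

  # Mirrors the two (( ... )) tests in the trace: the pool still accounts for
  # exactly the requested pages, so nothing was shrunk or grown behind the test's back.
  (( total == nr_hugepages + surp + resv )) || exit 1
  (( total == nr_hugepages )) || exit 1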
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3568424 kB' 'MemAvailable: 9482320 kB' 'Buffers: 40136 kB' 'Cached: 5952536 kB' 'SwapCached: 0 kB' 'Active: 1538464 kB' 'Inactive: 4584360 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 140768 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'AnonPages: 159596 kB' 'Mapped: 67288 kB' 'Shmem: 2596 kB' 'KReclaimable: 254400 kB' 'Slab: 325748 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71348 kB' 'KernelStack: 4356 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5072904 kB' 'Committed_AS: 506424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19444 kB' 'VmallocChunk: 0 kB' 'Percpu: 8256 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 145260 kB' 'DirectMap2M: 4048896 kB' 'DirectMap1G: 10485760 kB'
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:15.896 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... compare/continue trace repeats for every remaining /proc/meminfo field from MemFree through FilePmdMapped; none match HugePages_Total ...]
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
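get_nodes above finds a single NUMA node (no_nodes=1) and records its 1024 pages, and the next get_meminfo call repeats the lookup per node: with a node argument of 0 the helper switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips before the key comparison. A minimal sketch of that per-node variant (again illustrative, not the setup/common.sh helper itself):

  # Sketch: per-node lookup. Lines in the per-node file look like
  # "Node 0 HugePages_Surp:     0", so the "Node <n> " prefix is dropped first.
  node_meminfo_lookup() {
      local key=$1 node=$2
      local file=/sys/devices/system/node/node${node}/meminfo
      local line var val _
      while read -r line; do
          line=${line#"Node $node "}             # strip the "Node 0 " prefix
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done < "$file"
      return 1
  }

  node_meminfo_lookup HugePages_Surp 0    # 0 for node0 in the snapshot below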
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12242964 kB' 'MemFree: 3568424 kB' 'MemUsed: 8674540 kB' 'SwapCached: 0 kB' 'Active: 1538464 kB' 'Inactive: 4584540 kB' 'Active(anon): 1044 kB' 'Inactive(anon): 140948 kB' 'Active(file): 1537420 kB' 'Inactive(file): 4443592 kB' 'Unevictable: 29172 kB' 'Mlocked: 27636 kB' 'Dirty: 484 kB' 'Writeback: 0 kB' 'FilePages: 5992672 kB' 'Mapped: 67288 kB' 'AnonPages: 159800 kB' 'Shmem: 2596 kB' 'KernelStack: 4388 kB' 'PageTables: 3580 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254400 kB' 'Slab: 325748 kB' 'SReclaimable: 254400 kB' 'SUnreclaim: 71348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:15.898 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... compare/continue trace repeats for the node0 fields MemFree through NFS_Unstable; none match HugePages_Surp ...]
00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:15.899 node0=1024 expecting 1024 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:15.899 00:05:15.899 real 0m1.456s 00:05:15.899 user 0m0.595s 00:05:15.899 sys 0m0.939s 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.899 11:47:14 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:15.899 ************************************ 00:05:15.899 END TEST no_shrink_alloc 00:05:15.899 ************************************ 00:05:15.899 11:47:14 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:15.899 11:47:14 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:15.899 
11:47:14 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:15.899 11:47:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:15.899 11:47:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:15.899 11:47:14 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:15.899 11:47:14 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:15.899 11:47:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:15.899 11:47:14 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:15.899 00:05:15.899 real 0m7.038s 00:05:15.899 user 0m2.309s 00:05:15.900 sys 0m4.924s 00:05:15.900 11:47:14 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:15.900 ************************************ 00:05:15.900 END TEST hugepages 00:05:15.900 ************************************ 00:05:15.900 11:47:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:15.900 11:47:14 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:15.900 11:47:14 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:15.900 11:47:14 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:15.900 11:47:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:15.900 ************************************ 00:05:15.900 START TEST driver 00:05:15.900 ************************************ 00:05:15.900 11:47:14 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:15.900 * Looking for test storage... 
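The long xtrace run above is setup/common.sh's get_meminfo helper scanning every meminfo field until it reaches the one it was asked for (HugePages_Total first, then HugePages_Surp for node 0). A minimal stand-alone version of that lookup is sketched here for reference; it assumes the same /proc/meminfo and per-node meminfo layout seen in the dump, and the function name is illustrative rather than the repo's exact code:

    # Sketch: return one field from /proc/meminfo, or from a node's meminfo file
    # when a node number is given (per-node files prefix each line with "Node N ").
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"        # e.g. 1024 for HugePages_Total in the run above
                return 0
            fi
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")
        return 1
    }
    # Usage: get_meminfo_sketch HugePages_Surp 0   -> prints 0 for node0 in this run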
00:05:15.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:15.900 11:47:14 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:15.900 11:47:14 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.900 11:47:14 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.464 11:47:15 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:16.464 11:47:15 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.464 11:47:15 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.464 11:47:15 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:16.464 ************************************ 00:05:16.464 START TEST guess_driver 00:05:16.464 ************************************ 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ N == Y ]] 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio.ko 00:05:16.464 insmod /lib/modules/5.15.0-101-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:16.464 Looking for driver=uio_pci_generic 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:16.464 11:47:15 
setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:16.464 11:47:15 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:17.031 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:17.031 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:17.031 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.031 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:17.031 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:17.031 11:47:15 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:17.978 11:47:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:17.978 11:47:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:17.978 11:47:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:17.978 11:47:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:18.543 00:05:18.543 real 0m1.957s 00:05:18.543 user 0m0.476s 00:05:18.543 sys 0m1.494s 00:05:18.543 11:47:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.543 11:47:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:18.543 ************************************ 00:05:18.543 END TEST guess_driver 00:05:18.543 ************************************ 00:05:18.543 00:05:18.543 real 0m2.529s 00:05:18.543 user 0m0.795s 00:05:18.543 sys 0m1.765s 00:05:18.543 11:47:17 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.543 ************************************ 00:05:18.543 END TEST driver 00:05:18.543 11:47:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:18.543 ************************************ 00:05:18.543 11:47:17 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:18.543 11:47:17 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:18.543 11:47:17 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.543 11:47:17 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:18.543 ************************************ 00:05:18.543 START TEST devices 00:05:18.543 ************************************ 00:05:18.543 11:47:17 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:18.543 * Looking for test storage... 
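The guess_driver test above first tries vfio (it counts the entries under /sys/kernel/iommu_groups and reads the enable_unsafe_noiommu_mode parameter) and, since both checks fail on this VM, falls back to uio_pci_generic after confirming the module resolves to a .ko via modprobe --show-depends. A condensed sketch of that decision follows; the helper name and messages are illustrative, only the probed paths and commands come from the trace:

    # Sketch: pick a userspace I/O driver the way the guess_driver trace does:
    # prefer vfio-pci when the IOMMU is usable, otherwise fall back to uio_pci_generic.
    pick_driver_sketch() {
        local unsafe_vfio=N n_groups
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        n_groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)
        if (( n_groups > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
            return 0
        fi
        # uio_pci_generic is only usable if modprobe can resolve it to a .ko file.
        if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
            return 0
        fi
        echo 'No valid driver found' >&2
        return 1
    }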
00:05:18.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:18.543 11:47:17 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:18.543 11:47:17 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:18.543 11:47:17 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:18.543 11:47:17 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:19.108 11:47:17 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:19.108 11:47:17 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:19.108 No valid GPT data, bailing 00:05:19.108 11:47:17 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:19.108 11:47:17 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:19.108 11:47:17 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:19.108 11:47:17 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:19.108 11:47:17 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:19.108 11:47:17 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:19.108 11:47:17 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:19.108 11:47:17 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:19.108 11:47:17 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:19.108 ************************************ 00:05:19.108 START TEST nvme_mount 00:05:19.108 ************************************ 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:19.108 11:47:17 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:20.039 Creating new GPT entries in memory. 00:05:20.039 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:20.039 other utilities. 00:05:20.039 11:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:20.039 11:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.039 11:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:20.039 11:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:20.039 11:47:18 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:21.412 Creating new GPT entries in memory. 
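Partitioning in this test is two sgdisk calls: --zap-all to destroy any existing label, then --new=1:2048:264191 for a single small partition, with the repo's sync_dev_uevents.sh helper waiting for the partition uevent before the next step. The sketch below reproduces that sequence under the assumption of a disposable scratch disk; udevadm settle stands in for the uevent helper, and DISK is a placeholder:

    #!/usr/bin/env bash
    # Sketch of the partition_drive step from the trace. Destroys all data on $DISK,
    # so only point it at a scratch device.
    set -euo pipefail
    DISK=${1:?usage: $0 /dev/nvme0n1}
    part_start=2048
    part_sectors=$(( 1073741824 / 4096 ))            # same arithmetic as the trace -> 262144
    part_end=$(( part_start + part_sectors - 1 ))    # -> 264191
    sgdisk "$DISK" --zap-all                         # wipe any existing GPT/MBR
    flock "$DISK" sgdisk "$DISK" --new=1:${part_start}:${part_end}
    udevadm settle                                   # stand-in for sync_dev_uevents.sh
    if [[ -b ${DISK}p1 || -b ${DISK}1 ]]; then
        echo "partition 1 created on $DISK"
    fi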
00:05:21.412 The operation has completed successfully. 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 116084 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:10.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.412 11:47:19 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:21.412 11:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.412 11:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:21.412 11:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:21.412 11:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.412 11:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.412 11:47:20 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.670 11:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:21.670 11:47:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.568 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.568 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.569 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.569 11:47:21 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.569 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.569 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:23.569 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.569 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:10.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:23.569 11:47:22 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:10.0 data@nvme0n1 '' '' 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 
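Each mount pass follows the same recipe the trace shows twice for nvme_mount (and again later for dm_mount): create the mount point, mkfs.ext4 -qF the target, mount it, drop a dummy file on it, and then verify both the mountpoint and the file before cleaning up. A minimal sketch of that format-and-verify half, with DEV and MNT as placeholders for the trace's nvme0n1p1 and test/setup/nvme_mount:

    #!/usr/bin/env bash
    # Sketch: format, mount and sanity-check a block device as the verify steps do.
    set -euo pipefail
    DEV=${1:?block device, e.g. /dev/nvme0n1p1}
    MNT=${2:?mount point}
    mkdir -p "$MNT"
    mkfs.ext4 -qF "$DEV"            # quiet, and overwrite any existing filesystem
    mount "$DEV" "$MNT"
    touch "$MNT/test_nvme"          # the dummy file the test later checks and removes
    mountpoint -q "$MNT"            # fails (and aborts, via set -e) if the mount is missing
    [[ -e $MNT/test_nvme ]]         # ...and confirm the file landed on the new filesystem
    echo "$DEV mounted and verified on $MNT"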
00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:25.464 11:47:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:25.464 11:47:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:25.464 11:47:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:25.464 11:47:24 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:25.464 11:47:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.464 11:47:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:25.464 11:47:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:25.721 11:47:24 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:25.721 11:47:24 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:26.652 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.652 00:05:26.652 real 0m7.542s 00:05:26.652 user 0m0.687s 00:05:26.652 sys 0m4.860s 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:26.652 11:47:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:26.652 ************************************ 00:05:26.652 END TEST nvme_mount 00:05:26.652 ************************************ 00:05:26.652 11:47:25 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:26.652 11:47:25 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:26.652 11:47:25 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:26.652 11:47:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:26.652 ************************************ 00:05:26.652 START TEST dm_mount 00:05:26.652 ************************************ 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- 
common/autotest_common.sh@1121 -- # dm_mount 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:26.652 11:47:25 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:28.023 Creating new GPT entries in memory. 00:05:28.023 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:28.023 other utilities. 00:05:28.023 11:47:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:28.023 11:47:26 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.023 11:47:26 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:28.023 11:47:26 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.023 11:47:26 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:28.957 Creating new GPT entries in memory. 00:05:28.957 The operation has completed successfully. 00:05:28.957 11:47:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:28.957 11:47:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:28.957 11:47:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:28.957 11:47:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:28.957 11:47:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:29.890 The operation has completed successfully. 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 116591 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.890 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:10.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local 
test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.891 11:47:28 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:30.148 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:30.148 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:30.148 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:30.148 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.148 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:30.148 11:47:28 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.404 11:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:30.404 11:47:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:10.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:10.0 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:31.373 
11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:10.0 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.373 11:47:30 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.630 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:31.630 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:31.630 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:31.630 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.630 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:31.630 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.630 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:05:31.630 11:47:30 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:33.000 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:33.000 00:05:33.000 real 0m6.065s 00:05:33.000 user 0m0.466s 00:05:33.000 sys 0m2.443s 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.000 ************************************ 00:05:33.000 END TEST dm_mount 00:05:33.000 11:47:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:33.000 ************************************ 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 
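At this point the dm_mount test has already built the nvme_dm_test device over nvme0n1p1 and nvme0n1p2 and is re-checking, with no mount point left, that both partitions still show up as holders of dm-0 before cleanup_dm tears it down. That holder check can be reproduced on its own as below; it assumes the device-mapper node already exists, and DM_NAME simply defaults to the name used in the trace:

    # Sketch: resolve a device-mapper node and list the partitions holding it,
    # mirroring the readlink and /sys/class/block/*/holders checks in the trace.
    dm_holders_sketch() {
        local dm_name=${1:-nvme_dm_test}
        local dm part leg
        [[ -e /dev/mapper/$dm_name ]] || { echo "$dm_name: no such dm device" >&2; return 1; }
        dm=$(readlink -f "/dev/mapper/$dm_name")   # e.g. /dev/dm-0
        dm=${dm##*/}                               # -> dm-0
        echo "$dm_name is $dm"
        for part in /sys/class/block/*/holders/"$dm"; do
            [[ -e $part ]] || continue       # skip the unexpanded glob if nothing matches
            leg=${part#/sys/class/block/}
            echo "held by ${leg%%/*}"        # e.g. nvme0n1p1, nvme0n1p2
        done
    }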
00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.000 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.000 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.000 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:33.000 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.000 11:47:31 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:33.000 00:05:33.000 real 0m14.383s 00:05:33.000 user 0m1.536s 00:05:33.000 sys 0m7.687s 00:05:33.000 11:47:31 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.000 11:47:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:33.000 ************************************ 00:05:33.000 END TEST devices 00:05:33.000 ************************************ 00:05:33.000 00:05:33.000 real 0m29.900s 00:05:33.000 user 0m6.500s 00:05:33.000 sys 0m18.549s 00:05:33.000 11:47:31 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:33.000 ************************************ 00:05:33.000 END TEST setup.sh 00:05:33.000 11:47:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:33.000 ************************************ 00:05:33.000 11:47:31 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:33.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:33.258 Hugepages 00:05:33.258 node hugesize free / total 00:05:33.258 node0 1048576kB 0 / 0 00:05:33.258 node0 2048kB 2048 / 2048 00:05:33.258 00:05:33.258 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.515 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:33.515 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:33.515 11:47:32 -- spdk/autotest.sh@130 -- # uname -s 00:05:33.515 11:47:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:33.515 11:47:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:33.515 11:47:32 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.090 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:34.090 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.461 11:47:33 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:36.392 11:47:34 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:36.392 11:47:34 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:36.393 11:47:34 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.393 11:47:34 
-- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:36.393 11:47:34 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:36.393 11:47:34 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:36.393 11:47:34 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.393 11:47:34 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.393 11:47:34 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:36.393 11:47:34 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:36.393 11:47:34 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:05:36.393 11:47:34 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.393 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:05:36.393 Waiting for block devices as requested 00:05:36.650 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.650 11:47:35 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:05:36.650 11:47:35 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:36.650 11:47:35 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:05:36.650 11:47:35 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:05:36.650 11:47:35 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:36.650 11:47:35 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]] 00:05:36.650 11:47:35 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 00:05:36.650 11:47:35 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:05:36.650 11:47:35 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:05:36.650 11:47:35 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:05:36.650 11:47:35 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:05:36.650 11:47:35 -- common/autotest_common.sh@1541 -- # grep oacs 00:05:36.650 11:47:35 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:05:36.650 11:47:35 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:05:36.650 11:47:35 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:05:36.650 11:47:35 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:05:36.650 11:47:35 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:05:36.650 11:47:35 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:05:36.650 11:47:35 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:05:36.650 11:47:35 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:05:36.650 11:47:35 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:05:36.650 11:47:35 -- common/autotest_common.sh@1553 -- # continue 00:05:36.650 11:47:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:36.650 11:47:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:36.650 11:47:35 -- common/autotest_common.sh@10 -- # set +x 00:05:36.650 11:47:35 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:36.650 11:47:35 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:36.650 11:47:35 -- common/autotest_common.sh@10 -- # set +x 00:05:36.650 11:47:35 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 
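The nvme_namespace_revert preamble above resolves each PCI address to its /dev/nvmeX controller node through sysfs and then reads the OACS field from nvme id-ctrl to decide whether the controller supports namespace management (bit 3, which is set in the 0x12a value reported here). A condensed sketch of the same two checks, assuming nvme-cli is installed; the loop is a restatement of get_nvme_ctrlr_from_bdf, not the script itself.

bdf=0000:00:10.0                  # PCI address reported by this run
ctrlr=
for link in /sys/class/nvme/nvme*; do
    # the resolved sysfs path contains ".../<bdf>/nvme/nvmeN" for the matching controller
    if readlink -f "$link" | grep -q "$bdf/nvme/nvme"; then
        ctrlr=/dev/$(basename "$link")
        break
    fi
done
[[ -n $ctrlr ]] || exit 1

# OACS bit 3 indicates Namespace Management / Attachment support (0x12a & 0x8 is non-zero)
oacs=$(nvme id-ctrl "$ctrlr" | awk -F: '/^oacs/ {print $2}')
if (( oacs & 0x8 )); then
    echo "$ctrlr supports namespace management"
fi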
00:05:37.215 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.586 11:47:37 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:38.586 11:47:37 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.586 11:47:37 -- common/autotest_common.sh@10 -- # set +x 00:05:38.586 11:47:37 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:38.586 11:47:37 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:05:38.586 11:47:37 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:05:38.586 11:47:37 -- common/autotest_common.sh@1573 -- # bdfs=() 00:05:38.586 11:47:37 -- common/autotest_common.sh@1573 -- # local bdfs 00:05:38.586 11:47:37 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:05:38.586 11:47:37 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:38.586 11:47:37 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:38.586 11:47:37 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.586 11:47:37 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.586 11:47:37 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:38.586 11:47:37 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:05:38.586 11:47:37 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:05:38.586 11:47:37 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:05:38.586 11:47:37 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:38.586 11:47:37 -- common/autotest_common.sh@1576 -- # device=0x0010 00:05:38.586 11:47:37 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.586 11:47:37 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:05:38.586 11:47:37 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:05:38.586 11:47:37 -- common/autotest_common.sh@1589 -- # return 0 00:05:38.586 11:47:37 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:05:38.587 11:47:37 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:38.587 11:47:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.587 11:47:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.587 11:47:37 -- common/autotest_common.sh@10 -- # set +x 00:05:38.587 ************************************ 00:05:38.587 START TEST unittest 00:05:38.587 ************************************ 00:05:38.587 11:47:37 unittest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:38.587 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:38.587 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:05:38.587 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:05:38.587 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:05:38.587 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
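The opal_revert_cleanup step above selects controllers by reading the PCI device ID straight from sysfs and comparing it with the one ID it cares about (0x0a54); the emulated controller in this run reports 0x0010, so nothing is selected and the step returns immediately. A small sketch of that filter; the wanted ID is simply the value used in the trace above.

wanted=0x0a54                      # device ID the cleanup step looks for
for dev in /sys/bus/pci/devices/*; do
    [[ -e $dev/device ]] || continue
    id=$(cat "$dev/device")        # e.g. 0x0010 for the emulated NVMe controller here
    if [[ $id == "$wanted" ]]; then
        echo "match: $(basename "$dev")"
    fi
done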
00:05:38.587 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:38.587 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:05:38.587 ++ rpc_py=rpc_cmd 00:05:38.587 ++ set -e 00:05:38.587 ++ shopt -s nullglob 00:05:38.587 ++ shopt -s extglob 00:05:38.587 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:05:38.587 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:38.587 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:38.587 +++ CONFIG_WPDK_DIR= 00:05:38.587 +++ CONFIG_ASAN=y 00:05:38.587 +++ CONFIG_VBDEV_COMPRESS=n 00:05:38.587 +++ CONFIG_HAVE_EXECINFO_H=y 00:05:38.587 +++ CONFIG_USDT=n 00:05:38.587 +++ CONFIG_CUSTOMOCF=n 00:05:38.587 +++ CONFIG_PREFIX=/usr/local 00:05:38.587 +++ CONFIG_RBD=n 00:05:38.587 +++ CONFIG_LIBDIR= 00:05:38.587 +++ CONFIG_IDXD=y 00:05:38.587 +++ CONFIG_NVME_CUSE=y 00:05:38.587 +++ CONFIG_SMA=n 00:05:38.587 +++ CONFIG_VTUNE=n 00:05:38.587 +++ CONFIG_TSAN=n 00:05:38.587 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:38.587 +++ CONFIG_VFIO_USER_DIR= 00:05:38.587 +++ CONFIG_PGO_CAPTURE=n 00:05:38.587 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:38.587 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:38.587 +++ CONFIG_LTO=n 00:05:38.587 +++ CONFIG_ISCSI_INITIATOR=y 00:05:38.587 +++ CONFIG_CET=n 00:05:38.587 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:38.587 +++ CONFIG_OCF_PATH= 00:05:38.587 +++ CONFIG_RDMA_SET_TOS=y 00:05:38.587 +++ CONFIG_HAVE_ARC4RANDOM=n 00:05:38.587 +++ CONFIG_HAVE_LIBARCHIVE=n 00:05:38.587 +++ CONFIG_UBLK=n 00:05:38.587 +++ CONFIG_ISAL_CRYPTO=y 00:05:38.587 +++ CONFIG_OPENSSL_PATH= 00:05:38.587 +++ CONFIG_OCF=n 00:05:38.587 +++ CONFIG_FUSE=n 00:05:38.587 +++ CONFIG_VTUNE_DIR= 00:05:38.587 +++ CONFIG_FUZZER_LIB= 00:05:38.587 +++ CONFIG_FUZZER=n 00:05:38.587 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:05:38.587 +++ CONFIG_CRYPTO=n 00:05:38.587 +++ CONFIG_PGO_USE=n 00:05:38.587 +++ CONFIG_VHOST=y 00:05:38.587 +++ CONFIG_DAOS=n 00:05:38.587 +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:05:38.587 +++ CONFIG_DAOS_DIR= 00:05:38.587 +++ CONFIG_UNIT_TESTS=y 00:05:38.587 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:38.587 +++ CONFIG_VIRTIO=y 00:05:38.587 +++ CONFIG_DPDK_UADK=n 00:05:38.587 +++ CONFIG_COVERAGE=y 00:05:38.587 +++ CONFIG_RDMA=y 00:05:38.587 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:38.587 +++ CONFIG_URING_PATH= 00:05:38.587 +++ CONFIG_XNVME=n 00:05:38.587 +++ CONFIG_VFIO_USER=n 00:05:38.587 +++ CONFIG_ARCH=native 00:05:38.587 +++ CONFIG_HAVE_EVP_MAC=y 00:05:38.587 +++ CONFIG_URING_ZNS=n 00:05:38.587 +++ CONFIG_WERROR=y 00:05:38.587 +++ CONFIG_HAVE_LIBBSD=n 00:05:38.587 +++ CONFIG_UBSAN=y 00:05:38.587 +++ CONFIG_IPSEC_MB_DIR= 00:05:38.587 +++ CONFIG_GOLANG=n 00:05:38.587 +++ CONFIG_ISAL=y 00:05:38.587 +++ CONFIG_IDXD_KERNEL=n 00:05:38.587 +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:38.587 +++ CONFIG_RDMA_PROV=verbs 00:05:38.587 +++ CONFIG_APPS=y 00:05:38.587 +++ CONFIG_SHARED=n 00:05:38.587 +++ CONFIG_HAVE_KEYUTILS=y 00:05:38.587 +++ CONFIG_FC_PATH= 00:05:38.587 +++ CONFIG_DPDK_PKG_CONFIG=n 00:05:38.587 +++ CONFIG_FC=n 00:05:38.587 +++ CONFIG_AVAHI=n 00:05:38.587 +++ CONFIG_FIO_PLUGIN=y 00:05:38.587 +++ CONFIG_RAID5F=y 00:05:38.587 +++ CONFIG_EXAMPLES=y 00:05:38.587 +++ CONFIG_TESTS=y 00:05:38.587 +++ CONFIG_CRYPTO_MLX5=n 00:05:38.587 +++ CONFIG_MAX_LCORES= 00:05:38.587 +++ CONFIG_IPSEC_MB=n 00:05:38.587 +++ CONFIG_PGO_DIR= 00:05:38.587 +++ CONFIG_DEBUG=y 00:05:38.587 +++ CONFIG_DPDK_COMPRESSDEV=n 00:05:38.587 +++ 
CONFIG_CROSS_PREFIX= 00:05:38.587 +++ CONFIG_URING=n 00:05:38.587 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:38.587 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:05:38.587 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:05:38.587 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:05:38.587 +++ _root=/home/vagrant/spdk_repo/spdk 00:05:38.587 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:05:38.587 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:05:38.587 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:05:38.587 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:38.587 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:38.587 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:38.587 +++ VHOST_APP=("$_app_dir/vhost") 00:05:38.587 +++ DD_APP=("$_app_dir/spdk_dd") 00:05:38.587 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:05:38.587 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:05:38.587 +++ [[ #ifndef SPDK_CONFIG_H 00:05:38.587 #define SPDK_CONFIG_H 00:05:38.587 #define SPDK_CONFIG_APPS 1 00:05:38.587 #define SPDK_CONFIG_ARCH native 00:05:38.587 #define SPDK_CONFIG_ASAN 1 00:05:38.587 #undef SPDK_CONFIG_AVAHI 00:05:38.587 #undef SPDK_CONFIG_CET 00:05:38.587 #define SPDK_CONFIG_COVERAGE 1 00:05:38.587 #define SPDK_CONFIG_CROSS_PREFIX 00:05:38.587 #undef SPDK_CONFIG_CRYPTO 00:05:38.587 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:38.587 #undef SPDK_CONFIG_CUSTOMOCF 00:05:38.587 #undef SPDK_CONFIG_DAOS 00:05:38.587 #define SPDK_CONFIG_DAOS_DIR 00:05:38.587 #define SPDK_CONFIG_DEBUG 1 00:05:38.587 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:38.587 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:05:38.587 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:05:38.587 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:05:38.587 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:38.587 #undef SPDK_CONFIG_DPDK_UADK 00:05:38.587 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:38.587 #define SPDK_CONFIG_EXAMPLES 1 00:05:38.587 #undef SPDK_CONFIG_FC 00:05:38.587 #define SPDK_CONFIG_FC_PATH 00:05:38.587 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:38.587 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:38.587 #undef SPDK_CONFIG_FUSE 00:05:38.587 #undef SPDK_CONFIG_FUZZER 00:05:38.587 #define SPDK_CONFIG_FUZZER_LIB 00:05:38.587 #undef SPDK_CONFIG_GOLANG 00:05:38.587 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:05:38.587 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:38.587 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:38.587 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:05:38.587 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:38.587 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:38.587 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:38.587 #define SPDK_CONFIG_IDXD 1 00:05:38.587 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:38.587 #undef SPDK_CONFIG_IPSEC_MB 00:05:38.587 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:38.587 #define SPDK_CONFIG_ISAL 1 00:05:38.587 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:38.587 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:38.587 #define SPDK_CONFIG_LIBDIR 00:05:38.587 #undef SPDK_CONFIG_LTO 00:05:38.587 #define SPDK_CONFIG_MAX_LCORES 00:05:38.587 #define SPDK_CONFIG_NVME_CUSE 1 00:05:38.587 #undef SPDK_CONFIG_OCF 00:05:38.587 #define SPDK_CONFIG_OCF_PATH 00:05:38.587 #define SPDK_CONFIG_OPENSSL_PATH 00:05:38.587 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:38.587 #define SPDK_CONFIG_PGO_DIR 00:05:38.587 #undef SPDK_CONFIG_PGO_USE 
00:05:38.587 #define SPDK_CONFIG_PREFIX /usr/local 00:05:38.587 #define SPDK_CONFIG_RAID5F 1 00:05:38.587 #undef SPDK_CONFIG_RBD 00:05:38.587 #define SPDK_CONFIG_RDMA 1 00:05:38.587 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:38.587 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:38.587 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:38.587 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:38.587 #undef SPDK_CONFIG_SHARED 00:05:38.587 #undef SPDK_CONFIG_SMA 00:05:38.587 #define SPDK_CONFIG_TESTS 1 00:05:38.587 #undef SPDK_CONFIG_TSAN 00:05:38.587 #undef SPDK_CONFIG_UBLK 00:05:38.587 #define SPDK_CONFIG_UBSAN 1 00:05:38.587 #define SPDK_CONFIG_UNIT_TESTS 1 00:05:38.587 #undef SPDK_CONFIG_URING 00:05:38.587 #define SPDK_CONFIG_URING_PATH 00:05:38.587 #undef SPDK_CONFIG_URING_ZNS 00:05:38.587 #undef SPDK_CONFIG_USDT 00:05:38.587 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:38.587 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:38.587 #undef SPDK_CONFIG_VFIO_USER 00:05:38.587 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:38.587 #define SPDK_CONFIG_VHOST 1 00:05:38.587 #define SPDK_CONFIG_VIRTIO 1 00:05:38.587 #undef SPDK_CONFIG_VTUNE 00:05:38.587 #define SPDK_CONFIG_VTUNE_DIR 00:05:38.587 #define SPDK_CONFIG_WERROR 1 00:05:38.587 #define SPDK_CONFIG_WPDK_DIR 00:05:38.587 #undef SPDK_CONFIG_XNVME 00:05:38.587 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:38.587 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:38.587 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.587 +++ [[ -e /bin/wpdk_common.sh ]] 00:05:38.587 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.587 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.587 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:38.587 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:38.587 ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:38.587 ++++ export PATH 00:05:38.587 ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:05:38.587 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:38.588 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:05:38.588 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:38.588 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:05:38.588 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:05:38.588 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:05:38.588 +++ TEST_TAG=N/A 00:05:38.588 +++ 
TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:05:38.588 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:05:38.588 ++++ uname -s 00:05:38.588 +++ PM_OS=Linux 00:05:38.588 +++ MONITOR_RESOURCES_SUDO=() 00:05:38.588 +++ declare -A MONITOR_RESOURCES_SUDO 00:05:38.588 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:38.588 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:38.588 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:38.588 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:38.588 +++ SUDO[0]= 00:05:38.588 +++ SUDO[1]='sudo -E' 00:05:38.588 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:38.588 +++ [[ Linux == FreeBSD ]] 00:05:38.588 +++ [[ Linux == Linux ]] 00:05:38.588 +++ [[ QEMU != QEMU ]] 00:05:38.588 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:05:38.588 ++ : 1 00:05:38.588 ++ export RUN_NIGHTLY 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_RUN_VALGRIND 00:05:38.588 ++ : 1 00:05:38.588 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:05:38.588 ++ : 1 00:05:38.588 ++ export SPDK_TEST_UNITTEST 00:05:38.588 ++ : 00:05:38.588 ++ export SPDK_TEST_AUTOBUILD 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_RELEASE_BUILD 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_ISAL 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_ISCSI 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_ISCSI_INITIATOR 00:05:38.588 ++ : 1 00:05:38.588 ++ export SPDK_TEST_NVME 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_NVME_PMR 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_NVME_BP 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_NVME_CLI 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_NVME_CUSE 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_NVME_FDP 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_NVMF 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_VFIOUSER 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_VFIOUSER_QEMU 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_FUZZER 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_FUZZER_SHORT 00:05:38.588 ++ : rdma 00:05:38.588 ++ export SPDK_TEST_NVMF_TRANSPORT 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_RBD 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_VHOST 00:05:38.588 ++ : 1 00:05:38.588 ++ export SPDK_TEST_BLOCKDEV 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_IOAT 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_BLOBFS 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_VHOST_INIT 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_LVOL 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_VBDEV_COMPRESS 00:05:38.588 ++ : 1 00:05:38.588 ++ export SPDK_RUN_ASAN 00:05:38.588 ++ : 1 00:05:38.588 ++ export SPDK_RUN_UBSAN 00:05:38.588 ++ : /home/vagrant/spdk_repo/dpdk/build 00:05:38.588 ++ export SPDK_RUN_EXTERNAL_DPDK 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_RUN_NON_ROOT 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_CRYPTO 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_FTL 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_OCF 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_VMD 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_OPAL 00:05:38.588 ++ : v23.11 00:05:38.588 ++ export SPDK_TEST_NATIVE_DPDK 00:05:38.588 ++ : true 00:05:38.588 ++ export SPDK_AUTOTEST_X 00:05:38.588 ++ : 1 00:05:38.588 ++ export SPDK_TEST_RAID5 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_URING 00:05:38.588 ++ : 0 
00:05:38.588 ++ export SPDK_TEST_USDT 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_USE_IGB_UIO 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_SCHEDULER 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_SCANBUILD 00:05:38.588 ++ : 00:05:38.588 ++ export SPDK_TEST_NVMF_NICS 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_SMA 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_DAOS 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_XNVME 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_ACCEL_DSA 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_ACCEL_IAA 00:05:38.588 ++ : 00:05:38.588 ++ export SPDK_TEST_FUZZER_TARGET 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_TEST_NVMF_MDNS 00:05:38.588 ++ : 0 00:05:38.588 ++ export SPDK_JSONRPC_GO_CLIENT 00:05:38.588 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:38.588 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:05:38.588 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:38.588 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:05:38.588 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:38.588 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:38.588 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:38.588 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:05:38.588 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:38.588 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:05:38.588 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:38.588 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:38.588 ++ export PYTHONDONTWRITEBYTECODE=1 00:05:38.588 ++ PYTHONDONTWRITEBYTECODE=1 00:05:38.588 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:38.588 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:38.588 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:38.588 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:38.588 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:05:38.588 ++ rm -rf /var/tmp/asan_suppression_file 00:05:38.588 ++ cat 00:05:38.588 ++ echo leak:libfuse3.so 00:05:38.588 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:38.588 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:38.588 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:38.588 ++ 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:38.588 ++ '[' -z /var/spdk/dependencies ']' 00:05:38.588 ++ export DEPENDENCY_DIR 00:05:38.588 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:38.588 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:05:38.588 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:38.588 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:05:38.588 ++ export QEMU_BIN= 00:05:38.588 ++ QEMU_BIN= 00:05:38.588 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:38.588 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:05:38.588 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:38.588 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:05:38.588 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:38.588 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:38.588 ++ '[' 0 -eq 0 ']' 00:05:38.588 ++ export valgrind= 00:05:38.588 ++ valgrind= 00:05:38.588 +++ uname -s 00:05:38.588 ++ '[' Linux = Linux ']' 00:05:38.588 ++ HUGEMEM=4096 00:05:38.588 ++ export CLEAR_HUGE=yes 00:05:38.588 ++ CLEAR_HUGE=yes 00:05:38.588 ++ [[ 0 -eq 1 ]] 00:05:38.588 ++ [[ 0 -eq 1 ]] 00:05:38.588 ++ MAKE=make 00:05:38.588 +++ nproc 00:05:38.588 ++ MAKEFLAGS=-j10 00:05:38.588 ++ export HUGEMEM=4096 00:05:38.588 ++ HUGEMEM=4096 00:05:38.588 ++ NO_HUGE=() 00:05:38.588 ++ TEST_MODE= 00:05:38.588 ++ [[ -z '' ]] 00:05:38.588 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:38.588 ++ exec 00:05:38.588 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:05:38.588 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:05:38.588 ++ set_test_storage 2147483648 00:05:38.588 ++ [[ -v testdir ]] 00:05:38.588 ++ local requested_size=2147483648 00:05:38.588 ++ local mount target_dir 00:05:38.588 ++ local -A mounts fss sizes avails uses 00:05:38.588 ++ local source fs size avail mount use 00:05:38.588 ++ local storage_fallback storage_candidates 00:05:38.588 +++ mktemp -udt spdk.XXXXXX 00:05:38.588 ++ storage_fallback=/tmp/spdk.SU1AK5 00:05:38.588 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:38.588 ++ [[ -n '' ]] 00:05:38.588 ++ [[ -n '' ]] 00:05:38.588 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.SU1AK5/tests/unit /tmp/spdk.SU1AK5 00:05:38.588 ++ requested_size=2214592512 00:05:38.588 ++ read -r source fs size use avail _ mount 00:05:38.588 +++ df -T 00:05:38.588 +++ grep -v Filesystem 00:05:38.588 ++ mounts["$mount"]=tmpfs 00:05:38.588 ++ fss["$mount"]=tmpfs 00:05:38.588 ++ avails["$mount"]=1252601856 00:05:38.588 ++ sizes["$mount"]=1253683200 00:05:38.588 ++ uses["$mount"]=1081344 00:05:38.588 ++ read -r source fs size use avail _ mount 00:05:38.588 ++ mounts["$mount"]=/dev/vda1 00:05:38.588 ++ fss["$mount"]=ext4 00:05:38.588 ++ avails["$mount"]=9011146752 00:05:38.588 ++ sizes["$mount"]=20616794112 00:05:38.588 ++ uses["$mount"]=11588870144 00:05:38.588 ++ read -r source fs size use avail _ mount 00:05:38.588 ++ mounts["$mount"]=tmpfs 00:05:38.588 ++ fss["$mount"]=tmpfs 00:05:38.589 ++ avails["$mount"]=6268395520 00:05:38.589 ++ sizes["$mount"]=6268395520 00:05:38.589 ++ uses["$mount"]=0 00:05:38.589 ++ read -r source fs size use avail _ mount 00:05:38.589 ++ mounts["$mount"]=tmpfs 00:05:38.589 ++ fss["$mount"]=tmpfs 00:05:38.589 ++ 
avails["$mount"]=5242880 00:05:38.589 ++ sizes["$mount"]=5242880 00:05:38.589 ++ uses["$mount"]=0 00:05:38.589 ++ read -r source fs size use avail _ mount 00:05:38.589 ++ mounts["$mount"]=/dev/vda15 00:05:38.589 ++ fss["$mount"]=vfat 00:05:38.589 ++ avails["$mount"]=103061504 00:05:38.589 ++ sizes["$mount"]=109395968 00:05:38.589 ++ uses["$mount"]=6334464 00:05:38.589 ++ read -r source fs size use avail _ mount 00:05:38.589 ++ mounts["$mount"]=tmpfs 00:05:38.589 ++ fss["$mount"]=tmpfs 00:05:38.589 ++ avails["$mount"]=1253675008 00:05:38.589 ++ sizes["$mount"]=1253679104 00:05:38.589 ++ uses["$mount"]=4096 00:05:38.589 ++ read -r source fs size use avail _ mount 00:05:38.589 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:05:38.589 ++ fss["$mount"]=fuse.sshfs 00:05:38.589 ++ avails["$mount"]=96194584576 00:05:38.589 ++ sizes["$mount"]=105088212992 00:05:38.589 ++ uses["$mount"]=3508195328 00:05:38.589 ++ read -r source fs size use avail _ mount 00:05:38.589 ++ printf '* Looking for test storage...\n' 00:05:38.589 * Looking for test storage... 00:05:38.589 ++ local target_space new_size 00:05:38.589 ++ for target_dir in "${storage_candidates[@]}" 00:05:38.589 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:05:38.589 +++ awk '$1 !~ /Filesystem/{print $6}' 00:05:38.589 ++ mount=/ 00:05:38.589 ++ target_space=9011146752 00:05:38.589 ++ (( target_space == 0 || target_space < requested_size )) 00:05:38.589 ++ (( target_space >= requested_size )) 00:05:38.589 ++ [[ ext4 == tmpfs ]] 00:05:38.589 ++ [[ ext4 == ramfs ]] 00:05:38.589 ++ [[ / == / ]] 00:05:38.589 ++ new_size=13803462656 00:05:38.589 ++ (( new_size * 100 / sizes[/] > 95 )) 00:05:38.589 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:38.589 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:05:38.589 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:05:38.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:05:38.589 ++ return 0 00:05:38.589 ++ set -o errtrace 00:05:38.589 ++ shopt -s extdebug 00:05:38.589 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:05:38.589 ++ PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:38.589 11:47:37 unittest -- common/autotest_common.sh@1683 -- # true 00:05:38.589 11:47:37 unittest -- common/autotest_common.sh@1685 -- # xtrace_fd 00:05:38.589 11:47:37 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:05:38.589 11:47:37 unittest -- common/autotest_common.sh@29 -- # exec 00:05:38.589 11:47:37 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:38.589 11:47:37 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:05:38.589 11:47:37 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:38.589 11:47:37 unittest -- common/autotest_common.sh@18 -- # set -x 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=gcc 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@181 -- # hash lcov 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@181 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@181 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@182 -- # cov_avail=yes 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@186 -- # '[' yes = yes ']' 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@188 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@191 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@193 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@201 -- # export 'LCOV_OPTS= 00:05:38.589 --rc lcov_branch_coverage=1 00:05:38.589 --rc lcov_function_coverage=1 00:05:38.589 --rc genhtml_branch_coverage=1 00:05:38.589 --rc genhtml_function_coverage=1 00:05:38.589 --rc genhtml_legend=1 00:05:38.589 --rc geninfo_all_blocks=1 00:05:38.589 ' 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@201 -- # LCOV_OPTS=' 00:05:38.589 --rc lcov_branch_coverage=1 00:05:38.589 --rc lcov_function_coverage=1 00:05:38.589 --rc genhtml_branch_coverage=1 00:05:38.589 --rc genhtml_function_coverage=1 00:05:38.589 --rc genhtml_legend=1 00:05:38.589 --rc geninfo_all_blocks=1 00:05:38.589 ' 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@202 -- # export 'LCOV=lcov 00:05:38.589 --rc lcov_branch_coverage=1 00:05:38.589 --rc lcov_function_coverage=1 00:05:38.589 --rc genhtml_branch_coverage=1 00:05:38.589 --rc genhtml_function_coverage=1 00:05:38.589 --rc genhtml_legend=1 00:05:38.589 --rc geninfo_all_blocks=1 00:05:38.589 --no-external' 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@202 -- # LCOV='lcov 00:05:38.589 --rc lcov_branch_coverage=1 00:05:38.589 --rc lcov_function_coverage=1 00:05:38.589 --rc genhtml_branch_coverage=1 00:05:38.589 --rc genhtml_function_coverage=1 00:05:38.589 --rc genhtml_legend=1 00:05:38.589 --rc geninfo_all_blocks=1 00:05:38.589 --no-external' 00:05:38.589 11:47:37 unittest -- unit/unittest.sh@204 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . 
-t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:05:45.146 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:45.146 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:06:31.814 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV 
did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:06:31.814 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:06:31.814 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:06:31.815 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:06:31.815 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:06:31.815 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:06:31.815 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:06:31.815 11:48:29 unittest -- unit/unittest.sh@208 -- # uname -m 00:06:31.815 11:48:29 unittest -- unit/unittest.sh@208 -- # '[' x86_64 = aarch64 ']' 00:06:31.815 11:48:29 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:31.815 11:48:29 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.815 11:48:29 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.815 11:48:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:31.815 ************************************ 00:06:31.815 START TEST unittest_pci_event 00:06:31.815 ************************************ 00:06:31.815 11:48:29 unittest.unittest_pci_event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:06:31.815 00:06:31.815 00:06:31.815 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.815 http://cunit.sourceforge.net/ 00:06:31.815 00:06:31.815 00:06:31.815 Suite: pci_event 00:06:31.815 Test: test_pci_parse_event ...[2024-07-21 11:48:29.077282] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:06:31.815 [2024-07-21 11:48:29.078808] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:06:31.815 passed 00:06:31.815 00:06:31.815 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.815 suites 1 1 n/a 0 0 00:06:31.815 tests 1 1 1 0 0 00:06:31.815 asserts 15 15 15 0 n/a 00:06:31.815 00:06:31.815 Elapsed time = 0.001 seconds 00:06:31.815 00:06:31.815 real 0m0.038s 00:06:31.815 user 0m0.015s 00:06:31.815 sys 0m0.016s 00:06:31.815 11:48:29 unittest.unittest_pci_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.815 11:48:29 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:06:31.816 ************************************ 00:06:31.816 END TEST unittest_pci_event 
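The coverage setup earlier in this run captures an initial (-i) baseline before any unit test executes, which is why geninfo warns about every .gcno file that has produced no data yet: the point of the baseline is to record zero counts so untested files still appear in the final report. Only the baseline capture is shown verbatim in this log; the rest of the sequence below is the standard lcov workflow that such a baseline feeds into, with illustrative paths.

OUT=/tmp/ut_coverage              # illustrative output directory
mkdir -p "$OUT"
# 1. baseline: record zero counts for every instrumented file before tests run
lcov -q -c -i -d . -t Baseline -o "$OUT/ut_cov_base.info"
# 2. run the unit tests, which populate the .gcda counters
# ./run_unit_tests.sh
# 3. capture the counters produced by the tests
lcov -q -c -d . -t Tests -o "$OUT/ut_cov_test.info"
# 4. merge baseline and test data so untested files keep their zero-count entries
lcov -a "$OUT/ut_cov_base.info" -a "$OUT/ut_cov_test.info" -o "$OUT/ut_cov_total.info"
# 5. render an HTML report
genhtml "$OUT/ut_cov_total.info" -o "$OUT/html"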
00:06:31.816 ************************************ 00:06:31.816 11:48:29 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:31.816 11:48:29 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.816 11:48:29 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.816 11:48:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:31.816 ************************************ 00:06:31.816 START TEST unittest_include 00:06:31.816 ************************************ 00:06:31.816 11:48:29 unittest.unittest_include -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:06:31.816 00:06:31.816 00:06:31.816 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.816 http://cunit.sourceforge.net/ 00:06:31.816 00:06:31.816 00:06:31.816 Suite: histogram 00:06:31.816 Test: histogram_test ...passed 00:06:31.816 Test: histogram_merge ...passed 00:06:31.816 00:06:31.816 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.816 suites 1 1 n/a 0 0 00:06:31.816 tests 2 2 2 0 0 00:06:31.816 asserts 50 50 50 0 n/a 00:06:31.816 00:06:31.816 Elapsed time = 0.006 seconds 00:06:31.816 00:06:31.816 real 0m0.035s 00:06:31.816 user 0m0.022s 00:06:31.816 sys 0m0.014s 00:06:31.816 11:48:29 unittest.unittest_include -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:31.816 11:48:29 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:06:31.816 ************************************ 00:06:31.816 END TEST unittest_include 00:06:31.816 ************************************ 00:06:31.816 11:48:29 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:06:31.816 11:48:29 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:31.816 11:48:29 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:31.816 11:48:29 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:31.816 ************************************ 00:06:31.816 START TEST unittest_bdev 00:06:31.816 ************************************ 00:06:31.816 11:48:29 unittest.unittest_bdev -- common/autotest_common.sh@1121 -- # unittest_bdev 00:06:31.816 11:48:29 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:06:31.816 00:06:31.816 00:06:31.816 CUnit - A unit testing framework for C - Version 2.1-3 00:06:31.816 http://cunit.sourceforge.net/ 00:06:31.816 00:06:31.816 00:06:31.816 Suite: bdev 00:06:31.816 Test: bytes_to_blocks_test ...passed 00:06:31.816 Test: num_blocks_test ...passed 00:06:31.816 Test: io_valid_test ...passed 00:06:31.816 Test: open_write_test ...[2024-07-21 11:48:29.338917] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:06:31.816 [2024-07-21 11:48:29.339392] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:06:31.816 [2024-07-21 11:48:29.339773] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:06:31.816 passed 00:06:31.816 Test: claim_test ...passed 00:06:31.816 Test: alias_add_del_test ...[2024-07-21 11:48:29.443486] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 
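The unittest_pci_event and unittest_include stages above (and the unittest_bdev stage that follows) are plain CUnit 2.1-3 binaries invoked through run_test; each prints the banner, per-suite test lines and Run Summary seen in this log and signals failure through its exit code. As a rough sketch only (the suite and test names below are hypothetical, and SPDK's real *_ut programs add their own stubs and setup), such a binary is structured roughly like this:

    #include <CUnit/Basic.h>

    /* Hypothetical test case; a real *_ut file registers many of these. */
    static void
    example_case(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int
    main(void)
    {
        CU_pSuite suite;
        unsigned int failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "example_case", example_case) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();    /* prints the per-test lines and the Run Summary */
        failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures == 0 ? 0 : 1;
    }

A non-zero exit status is presumably what makes run_test mark the stage as failed, which is why each suite below is followed by both a CUnit Run Summary and the shell-level real/user/sys timing block.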
00:06:31.816 [2024-07-21 11:48:29.443646] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4610:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:06:31.816 [2024-07-21 11:48:29.443779] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:06:31.816 passed 00:06:31.816 Test: get_device_stat_test ...passed 00:06:31.816 Test: bdev_io_types_test ...passed 00:06:31.816 Test: bdev_io_wait_test ...passed 00:06:31.816 Test: bdev_io_spans_split_test ...passed 00:06:31.816 Test: bdev_io_boundary_split_test ...passed 00:06:31.816 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-21 11:48:29.628565] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:06:31.816 passed 00:06:31.816 Test: bdev_io_mix_split_test ...passed 00:06:31.816 Test: bdev_io_split_with_io_wait ...passed 00:06:31.816 Test: bdev_io_write_unit_split_test ...[2024-07-21 11:48:29.753250] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:31.816 [2024-07-21 11:48:29.753354] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:06:31.816 [2024-07-21 11:48:29.753386] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:06:31.816 [2024-07-21 11:48:29.753431] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2759:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:06:31.816 passed 00:06:31.816 Test: bdev_io_alignment_with_boundary ...passed 00:06:31.816 Test: bdev_io_alignment ...passed 00:06:31.816 Test: bdev_histograms ...passed 00:06:31.816 Test: bdev_write_zeroes ...passed 00:06:31.816 Test: bdev_compare_and_write ...passed 00:06:31.816 Test: bdev_compare ...passed 00:06:31.816 Test: bdev_compare_emulated ...passed 00:06:31.816 Test: bdev_zcopy_write ...passed 00:06:31.816 Test: bdev_zcopy_read ...passed 00:06:31.816 Test: bdev_open_while_hotremove ...passed 00:06:31.816 Test: bdev_close_while_hotremove ...passed 00:06:31.816 Test: bdev_open_ext_test ...passed 00:06:31.816 Test: bdev_open_ext_unregister ...passed 00:06:31.816 Test: bdev_set_io_timeout ...[2024-07-21 11:48:30.309002] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8141:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:31.816 [2024-07-21 11:48:30.309493] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8141:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:06:31.816 passed 00:06:31.816 Test: bdev_set_qd_sampling ...passed 00:06:31.816 Test: lba_range_overlap ...passed 00:06:31.816 Test: lock_lba_range_check_ranges ...passed 00:06:31.816 Test: lock_lba_range_with_io_outstanding ...passed 00:06:31.816 Test: lock_lba_range_overlapped ...passed 00:06:31.816 Test: bdev_quiesce ...[2024-07-21 11:48:30.584237] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10064:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
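The bdev_open_ext_test and bdev_open_ext_unregister cases above drive the "Missing event callback function" error out of spdk_bdev_open_ext(), which refuses to open a bdev without a removal/resize event callback. A hedged sketch of the intended call follows; the signature matches recent SPDK releases but should be verified against the tree under test, and the function and context names are made up for illustration:

    #include "spdk/bdev.h"

    /* Hypothetical event callback. Passing NULL here instead is what the
     * test does to provoke the error logged above. */
    static void
    example_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
                          void *event_ctx)
    {
        if (type == SPDK_BDEV_EVENT_REMOVE) {
            /* A real consumer would close the descriptor stashed in event_ctx. */
        }
    }

    static int
    example_open_bdev(const char *name, struct spdk_bdev_desc **desc)
    {
        return spdk_bdev_open_ext(name, true /* writable */, example_bdev_event_cb,
                                  NULL, desc);
    }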
00:06:31.816 passed 00:06:31.816 Test: bdev_io_abort ...passed 00:06:32.075 Test: bdev_unmap ...passed 00:06:32.075 Test: bdev_write_zeroes_split_test ...passed 00:06:32.075 Test: bdev_set_options_test ...[2024-07-21 11:48:30.723857] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:06:32.075 passed 00:06:32.075 Test: bdev_get_memory_domains ...passed 00:06:32.075 Test: bdev_io_ext ...passed 00:06:32.075 Test: bdev_io_ext_no_opts ...passed 00:06:32.075 Test: bdev_io_ext_invalid_opts ...passed 00:06:32.075 Test: bdev_io_ext_split ...passed 00:06:32.075 Test: bdev_io_ext_bounce_buffer ...passed 00:06:32.334 Test: bdev_register_uuid_alias ...[2024-07-21 11:48:30.946860] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name 65897c8a-ed03-40cc-97d3-8ae70ef35a4a already exists 00:06:32.334 [2024-07-21 11:48:30.946958] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7696:bdev_register: *ERROR*: Unable to add uuid:65897c8a-ed03-40cc-97d3-8ae70ef35a4a alias for bdev bdev0 00:06:32.334 passed 00:06:32.334 Test: bdev_unregister_by_name ...[2024-07-21 11:48:30.968661] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7931:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:06:32.334 [2024-07-21 11:48:30.968832] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7939:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:06:32.334 passed 00:06:32.334 Test: for_each_bdev_test ...passed 00:06:32.334 Test: bdev_seek_test ...passed 00:06:32.334 Test: bdev_copy ...passed 00:06:32.334 Test: bdev_copy_split_test ...passed 00:06:32.334 Test: examine_locks ...passed 00:06:32.334 Test: claim_v2_rwo ...[2024-07-21 11:48:31.091333] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:32.334 [2024-07-21 11:48:31.091738] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8665:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:32.334 [2024-07-21 11:48:31.091775] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.091848] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.091868] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.092170] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8660:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:06:32.335 passed 00:06:32.335 Test: claim_v2_rom ...[2024-07-21 11:48:31.092627] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.092704] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.092729] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module 
bdev_ut 00:06:32.335 [2024-07-21 11:48:31.092755] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.092797] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8703:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:06:32.335 [2024-07-21 11:48:31.092940] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:32.335 passed 00:06:32.335 Test: claim_v2_rwm ...[2024-07-21 11:48:31.093392] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8733:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:32.335 [2024-07-21 11:48:31.093580] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8035:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.093685] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.093713] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.093811] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.093854] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8753:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.093975] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8733:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:06:32.335 passed 00:06:32.335 Test: claim_v2_existing_writer ...[2024-07-21 11:48:31.094368] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:32.335 [2024-07-21 11:48:31.094402] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:06:32.335 passed 00:06:32.335 Test: claim_v2_existing_v1 ...[2024-07-21 11:48:31.094726] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.094952] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.094986] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:06:32.335 passed 00:06:32.335 Test: claim_v1_existing_v2 ...[2024-07-21 11:48:31.095339] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:06:32.335 [2024-07-21 11:48:31.095421] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:06:32.335 [2024-07-21 
11:48:31.095459] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8502:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:06:32.335 passed 00:06:32.335 Test: examine_claimed ...[2024-07-21 11:48:31.096139] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8830:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:06:32.335 passed 00:06:32.335 00:06:32.335 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.335 suites 1 1 n/a 0 0 00:06:32.335 tests 59 59 59 0 0 00:06:32.335 asserts 4599 4599 4599 0 n/a 00:06:32.335 00:06:32.335 Elapsed time = 1.837 seconds 00:06:32.335 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:06:32.335 00:06:32.335 00:06:32.335 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.335 http://cunit.sourceforge.net/ 00:06:32.335 00:06:32.335 00:06:32.335 Suite: nvme 00:06:32.335 Test: test_create_ctrlr ...passed 00:06:32.335 Test: test_reset_ctrlr ...[2024-07-21 11:48:31.153820] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 passed 00:06:32.335 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:06:32.335 Test: test_failover_ctrlr ...passed 00:06:32.335 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-21 11:48:31.156563] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.156808] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.157031] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 passed 00:06:32.335 Test: test_pending_reset ...[2024-07-21 11:48:31.158807] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.159071] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 passed 00:06:32.335 Test: test_attach_ctrlr ...[2024-07-21 11:48:31.160301] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4314:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:06:32.335 passed 00:06:32.335 Test: test_aer_cb ...passed 00:06:32.335 Test: test_submit_nvme_cmd ...passed 00:06:32.335 Test: test_add_remove_trid ...passed 00:06:32.335 Test: test_abort ...[2024-07-21 11:48:31.163943] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7453:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:06:32.335 passed 00:06:32.335 Test: test_get_io_qpair ...passed 00:06:32.335 Test: test_bdev_unregister ...passed 00:06:32.335 Test: test_compare_ns ...passed 00:06:32.335 Test: test_init_ana_log_page ...passed 00:06:32.335 Test: test_get_memory_domains ...passed 00:06:32.335 Test: test_reconnect_qpair ...[2024-07-21 11:48:31.166788] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
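The claim_v2_* and claim_v1_* cases earlier in this bdev suite step through the v2 claim types named in the errors above: read_many_write_one, read_many_write_none and read_many_write_many, plus the legacy exclusive_write claim, and check rules such as shared_claim_key being required for read-write-may claims but rejected for read-write-once claims. A loosely hedged sketch of how a module asks for such a claim is below; the type and function names are as they appear in recent SPDK headers, but the argument order, the opts fields and the hypothetical my_module should be checked against the actual tree:

    #include "spdk/bdev.h"
    #include "spdk/bdev_module.h"

    extern struct spdk_bdev_module my_module;   /* hypothetical bdev module */

    static int
    example_claim_read_only(struct spdk_bdev_desc *desc)
    {
        struct spdk_bdev_claim_opts opts;

        spdk_bdev_claim_opts_init(&opts, sizeof(opts));
        /* read_many_write_none: many readers, no writers. The log above shows
         * this claim being refused when the descriptor was opened writable. */
        return spdk_bdev_module_claim_bdev_desc(desc,
                                                SPDK_BDEV_CLAIM_READ_MANY_WRITE_NONE,
                                                &opts, &my_module);
    }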
00:06:32.335 passed 00:06:32.335 Test: test_create_bdev_ctrlr ...[2024-07-21 11:48:31.167365] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5379:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:06:32.335 passed 00:06:32.335 Test: test_add_multi_ns_to_bdev ...[2024-07-21 11:48:31.168796] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4570:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:06:32.335 passed 00:06:32.335 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:06:32.335 Test: test_admin_path ...passed 00:06:32.335 Test: test_reset_bdev_ctrlr ...passed 00:06:32.335 Test: test_find_io_path ...passed 00:06:32.335 Test: test_retry_io_if_ana_state_is_updating ...passed 00:06:32.335 Test: test_retry_io_for_io_path_error ...passed 00:06:32.335 Test: test_retry_io_count ...passed 00:06:32.335 Test: test_concurrent_read_ana_log_page ...passed 00:06:32.335 Test: test_retry_io_for_ana_error ...passed 00:06:32.335 Test: test_check_io_error_resiliency_params ...[2024-07-21 11:48:31.176654] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6073:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:06:32.335 [2024-07-21 11:48:31.176736] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6077:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:32.335 [2024-07-21 11:48:31.176778] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6086:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:06:32.335 [2024-07-21 11:48:31.176808] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:06:32.335 [2024-07-21 11:48:31.176829] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6101:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:32.335 [2024-07-21 11:48:31.176860] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6101:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:06:32.335 [2024-07-21 11:48:31.176887] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6081:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:06:32.335 [2024-07-21 11:48:31.176940] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:06:32.335 [2024-07-21 11:48:31.176974] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6093:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:06:32.335 passed 00:06:32.335 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:06:32.335 Test: test_reconnect_ctrlr ...[2024-07-21 11:48:31.177833] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.177985] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
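The test_check_io_error_resiliency_params case above spells out, through its error messages, the constraints tying ctrlr_loss_timeout_sec, reconnect_delay_sec and fast_io_fail_timeout_sec together. The helper below is only a reconstruction of those messages, not the module's own code (the real checks live in bdev_nvme_check_io_error_resiliency_params(), and -1 for ctrlr_loss_timeout_sec means the controller is retried indefinitely):

    #include <stdbool.h>
    #include <stdint.h>

    static bool
    example_resiliency_params_valid(int32_t ctrlr_loss_timeout_sec,
                                    uint32_t reconnect_delay_sec,
                                    uint32_t fast_io_fail_timeout_sec)
    {
        if (ctrlr_loss_timeout_sec < -1) {
            return false;                   /* "can't be less than -1" */
        }
        if (ctrlr_loss_timeout_sec == 0) {
            /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
             * if ctrlr_loss_timeout_sec is 0." */
            return reconnect_delay_sec == 0 && fast_io_fail_timeout_sec == 0;
        }
        if (reconnect_delay_sec == 0) {
            return false;                   /* "can't be 0 if ctrlr_loss_timeout_sec is not 0" */
        }
        if (ctrlr_loss_timeout_sec > 0 &&
            reconnect_delay_sec > (uint32_t)ctrlr_loss_timeout_sec) {
            return false;                   /* delay can't exceed the loss timeout */
        }
        if (fast_io_fail_timeout_sec != 0) {
            if (reconnect_delay_sec > fast_io_fail_timeout_sec) {
                return false;               /* delay can't exceed fast_io_fail */
            }
            if (ctrlr_loss_timeout_sec > 0 &&
                fast_io_fail_timeout_sec > (uint32_t)ctrlr_loss_timeout_sec) {
                return false;               /* fast_io_fail can't exceed the loss timeout */
            }
        }
        return true;
    }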
00:06:32.335 [2024-07-21 11:48:31.178334] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.178478] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.178642] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 passed 00:06:32.335 Test: test_retry_failover_ctrlr ...[2024-07-21 11:48:31.179014] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 passed 00:06:32.335 Test: test_fail_path ...[2024-07-21 11:48:31.179645] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.179801] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.179925] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.180029] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 [2024-07-21 11:48:31.180181] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.335 passed 00:06:32.336 Test: test_nvme_ns_cmp ...passed 00:06:32.336 Test: test_ana_transition ...passed 00:06:32.336 Test: test_set_preferred_path ...passed 00:06:32.336 Test: test_find_next_io_path ...passed 00:06:32.336 Test: test_find_io_path_min_qd ...passed 00:06:32.336 Test: test_disable_auto_failback ...[2024-07-21 11:48:31.181967] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.336 passed 00:06:32.336 Test: test_set_multipath_policy ...passed 00:06:32.336 Test: test_uuid_generation ...passed 00:06:32.336 Test: test_retry_io_to_same_path ...passed 00:06:32.336 Test: test_race_between_reset_and_disconnected ...passed 00:06:32.336 Test: test_ctrlr_op_rpc ...passed 00:06:32.336 Test: test_bdev_ctrlr_op_rpc ...passed 00:06:32.336 Test: test_disable_enable_ctrlr ...passed 00:06:32.336 Test: test_delete_ctrlr_done ...[2024-07-21 11:48:31.186225] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:06:32.336 [2024-07-21 11:48:31.186446] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2062:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:32.336 passed 00:06:32.336 Test: test_ns_remove_during_reset ...passed 00:06:32.336 00:06:32.336 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.336 suites 1 1 n/a 0 0 00:06:32.336 tests 48 48 48 0 0 00:06:32.336 asserts 3565 3565 3565 0 n/a 00:06:32.336 00:06:32.336 Elapsed time = 0.035 seconds 00:06:32.594 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:06:32.594 00:06:32.594 00:06:32.594 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.594 http://cunit.sourceforge.net/ 00:06:32.594 00:06:32.594 Test Options 00:06:32.594 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:06:32.594 00:06:32.594 Suite: raid 00:06:32.594 Test: test_create_raid ...passed 00:06:32.594 Test: test_create_raid_superblock ...passed 00:06:32.594 Test: test_delete_raid ...passed 00:06:32.594 Test: test_create_raid_invalid_args ...[2024-07-21 11:48:31.235565] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:06:32.594 [2024-07-21 11:48:31.236096] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:06:32.594 [2024-07-21 11:48:31.236825] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:06:32.594 [2024-07-21 11:48:31.237126] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:32.594 [2024-07-21 11:48:31.237221] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:32.594 [2024-07-21 11:48:31.238270] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:06:32.594 [2024-07-21 11:48:31.238332] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:06:32.594 passed 00:06:32.594 Test: test_delete_raid_invalid_args ...passed 00:06:32.594 Test: test_io_channel ...passed 00:06:32.594 Test: test_reset_io ...passed 00:06:32.594 Test: test_multi_raid ...passed 00:06:32.594 Test: test_io_type_supported ...passed 00:06:32.594 Test: test_raid_json_dump_info ...passed 00:06:32.594 Test: test_context_size ...passed 00:06:32.594 Test: test_raid_level_conversions ...passed 00:06:32.594 Test: test_raid_io_split ...passed 00:06:32.594 Test: test_raid_process ...passed 00:06:32.594 00:06:32.594 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.594 suites 1 1 n/a 0 0 00:06:32.594 tests 14 14 14 0 0 00:06:32.594 asserts 6183 6183 6183 0 n/a 00:06:32.594 00:06:32.594 Elapsed time = 0.025 seconds 00:06:32.594 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:06:32.594 00:06:32.594 00:06:32.594 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.594 http://cunit.sourceforge.net/ 00:06:32.594 00:06:32.594 00:06:32.594 Suite: raid_sb 00:06:32.594 Test: test_raid_bdev_write_superblock ...passed 00:06:32.594 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:32.594 Test: test_raid_bdev_parse_superblock ...[2024-07-21 11:48:31.299035] 
/home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:32.594 passed 00:06:32.594 Suite: raid_sb_md 00:06:32.594 Test: test_raid_bdev_write_superblock ...passed 00:06:32.594 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:32.594 Test: test_raid_bdev_parse_superblock ...passed 00:06:32.594 Suite: raid_sb_md_interleaved 00:06:32.594 Test: test_raid_bdev_write_superblock ...passed 00:06:32.594 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:06:32.594 Test: test_raid_bdev_parse_superblock ...passed 00:06:32.594 00:06:32.594 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.594 suites 3 3 n/a 0 0 00:06:32.594 tests 9 9 9 0 0 00:06:32.594 asserts 139 139 139 0 n/a 00:06:32.594 00:06:32.594 Elapsed time = 0.001 seconds 00:06:32.594 [2024-07-21 11:48:31.299503] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:32.594 [2024-07-21 11:48:31.299772] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:06:32.594 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:06:32.594 00:06:32.595 00:06:32.595 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.595 http://cunit.sourceforge.net/ 00:06:32.595 00:06:32.595 00:06:32.595 Suite: concat 00:06:32.595 Test: test_concat_start ...passed 00:06:32.595 Test: test_concat_rw ...passed 00:06:32.595 Test: test_concat_null_payload ...passed 00:06:32.595 00:06:32.595 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.595 suites 1 1 n/a 0 0 00:06:32.595 tests 3 3 3 0 0 00:06:32.595 asserts 8460 8460 8460 0 n/a 00:06:32.595 00:06:32.595 Elapsed time = 0.008 seconds 00:06:32.595 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:06:32.595 00:06:32.595 00:06:32.595 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.595 http://cunit.sourceforge.net/ 00:06:32.595 00:06:32.595 00:06:32.595 Suite: raid0 00:06:32.595 Test: test_write_io ...passed 00:06:32.595 Test: test_read_io ...passed 00:06:32.595 Test: test_unmap_io ...passed 00:06:32.595 Test: test_io_failure ...passed 00:06:32.595 Suite: raid0_dif 00:06:32.595 Test: test_write_io ...passed 00:06:32.595 Test: test_read_io ...passed 00:06:32.853 Test: test_unmap_io ...passed 00:06:32.853 Test: test_io_failure ...passed 00:06:32.853 00:06:32.853 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.853 suites 2 2 n/a 0 0 00:06:32.853 tests 8 8 8 0 0 00:06:32.853 asserts 368291 368291 368291 0 n/a 00:06:32.853 00:06:32.853 Elapsed time = 0.148 seconds 00:06:32.853 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:06:32.853 00:06:32.853 00:06:32.853 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.853 http://cunit.sourceforge.net/ 00:06:32.853 00:06:32.853 00:06:32.853 Suite: raid1 00:06:32.853 Test: test_raid1_start ...passed 00:06:32.853 Test: test_raid1_read_balancing ...passed 00:06:32.853 Test: test_raid1_write_error ...passed 00:06:32.853 Test: test_raid1_read_error ...passed 00:06:32.853 00:06:32.853 Run Summary: Type Total Ran Passed 
Failed Inactive 00:06:32.853 suites 1 1 n/a 0 0 00:06:32.853 tests 4 4 4 0 0 00:06:32.853 asserts 4374 4374 4374 0 n/a 00:06:32.853 00:06:32.853 Elapsed time = 0.005 seconds 00:06:32.853 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:06:32.854 00:06:32.854 00:06:32.854 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.854 http://cunit.sourceforge.net/ 00:06:32.854 00:06:32.854 00:06:32.854 Suite: zone 00:06:32.854 Test: test_zone_get_operation ...passed 00:06:32.854 Test: test_bdev_zone_get_info ...passed 00:06:32.854 Test: test_bdev_zone_management ...passed 00:06:32.854 Test: test_bdev_zone_append ...passed 00:06:32.854 Test: test_bdev_zone_append_with_md ...passed 00:06:32.854 Test: test_bdev_zone_appendv ...passed 00:06:32.854 Test: test_bdev_zone_appendv_with_md ...passed 00:06:32.854 Test: test_bdev_io_get_append_location ...passed 00:06:32.854 00:06:32.854 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.854 suites 1 1 n/a 0 0 00:06:32.854 tests 8 8 8 0 0 00:06:32.854 asserts 94 94 94 0 n/a 00:06:32.854 00:06:32.854 Elapsed time = 0.000 seconds 00:06:32.854 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:06:32.854 00:06:32.854 00:06:32.854 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.854 http://cunit.sourceforge.net/ 00:06:32.854 00:06:32.854 00:06:32.854 Suite: gpt_parse 00:06:32.854 Test: test_parse_mbr_and_primary ...[2024-07-21 11:48:31.643754] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:32.854 [2024-07-21 11:48:31.644187] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:32.854 [2024-07-21 11:48:31.644268] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:32.854 [2024-07-21 11:48:31.644381] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:32.854 [2024-07-21 11:48:31.644431] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:32.854 [2024-07-21 11:48:31.644538] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:32.854 passed 00:06:32.854 Test: test_parse_secondary ...[2024-07-21 11:48:31.645314] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:06:32.854 [2024-07-21 11:48:31.645379] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:06:32.854 [2024-07-21 11:48:31.645425] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:06:32.854 [2024-07-21 11:48:31.645494] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:06:32.854 passed 00:06:32.854 Test: test_check_mbr ...[2024-07-21 11:48:31.646258] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:32.854 passed 00:06:32.854 Test: test_read_header ...passed 00:06:32.854 Test: test_read_partitions 
...[2024-07-21 11:48:31.646316] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:06:32.854 [2024-07-21 11:48:31.646379] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:06:32.854 [2024-07-21 11:48:31.646484] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:06:32.854 [2024-07-21 11:48:31.646607] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:06:32.854 [2024-07-21 11:48:31.646671] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:06:32.854 [2024-07-21 11:48:31.646721] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:06:32.854 [2024-07-21 11:48:31.646765] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:06:32.854 [2024-07-21 11:48:31.646833] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:06:32.854 [2024-07-21 11:48:31.646892] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:06:32.854 [2024-07-21 11:48:31.646928] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:06:32.854 [2024-07-21 11:48:31.646969] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:06:32.854 [2024-07-21 11:48:31.647374] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:06:32.854 passed 00:06:32.854 00:06:32.854 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.854 suites 1 1 n/a 0 0 00:06:32.854 tests 5 5 5 0 0 00:06:32.854 asserts 33 33 33 0 n/a 00:06:32.854 00:06:32.854 Elapsed time = 0.004 seconds 00:06:32.854 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:06:32.854 00:06:32.854 00:06:32.854 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.854 http://cunit.sourceforge.net/ 00:06:32.854 00:06:32.854 00:06:32.854 Suite: bdev_part 00:06:32.854 Test: part_test ...[2024-07-21 11:48:31.685548] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4580:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:06:32.854 passed 00:06:32.854 Test: part_free_test ...passed 00:06:33.113 Test: part_get_io_channel_test ...passed 00:06:33.113 Test: part_construct_ext ...passed 00:06:33.113 00:06:33.113 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.113 suites 1 1 n/a 0 0 00:06:33.113 tests 4 4 4 0 0 00:06:33.113 asserts 48 48 48 0 n/a 00:06:33.113 00:06:33.113 Elapsed time = 0.053 seconds 00:06:33.113 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:06:33.113 00:06:33.113 00:06:33.113 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.113 http://cunit.sourceforge.net/ 00:06:33.113 00:06:33.113 00:06:33.113 Suite: scsi_nvme_suite 00:06:33.113 Test: scsi_nvme_translate_test ...passed 00:06:33.113 00:06:33.113 Run Summary: Type 
Total Ran Passed Failed Inactive 00:06:33.113 suites 1 1 n/a 0 0 00:06:33.113 tests 1 1 1 0 0 00:06:33.113 asserts 104 104 104 0 n/a 00:06:33.113 00:06:33.113 Elapsed time = 0.000 seconds 00:06:33.113 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:06:33.113 00:06:33.113 00:06:33.113 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.113 http://cunit.sourceforge.net/ 00:06:33.113 00:06:33.113 00:06:33.113 Suite: lvol 00:06:33.113 Test: ut_lvs_init ...[2024-07-21 11:48:31.814141] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:06:33.113 [2024-07-21 11:48:31.814643] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:06:33.113 passed 00:06:33.113 Test: ut_lvol_init ...passed 00:06:33.113 Test: ut_lvol_snapshot ...passed 00:06:33.113 Test: ut_lvol_clone ...passed 00:06:33.113 Test: ut_lvs_destroy ...passed 00:06:33.113 Test: ut_lvs_unload ...passed 00:06:33.113 Test: ut_lvol_resize ...[2024-07-21 11:48:31.816238] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:06:33.113 passed 00:06:33.113 Test: ut_lvol_set_read_only ...passed 00:06:33.113 Test: ut_lvol_hotremove ...passed 00:06:33.113 Test: ut_vbdev_lvol_get_io_channel ...passed 00:06:33.113 Test: ut_vbdev_lvol_io_type_supported ...passed 00:06:33.113 Test: ut_lvol_read_write ...passed 00:06:33.113 Test: ut_vbdev_lvol_submit_request ...passed 00:06:33.113 Test: ut_lvol_examine_config ...passed 00:06:33.114 Test: ut_lvol_examine_disk ...[2024-07-21 11:48:31.816983] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:06:33.114 passed 00:06:33.114 Test: ut_lvol_rename ...[2024-07-21 11:48:31.818103] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:06:33.114 [2024-07-21 11:48:31.818233] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:06:33.114 passed 00:06:33.114 Test: ut_bdev_finish ...passed 00:06:33.114 Test: ut_lvs_rename ...passed 00:06:33.114 Test: ut_lvol_seek ...passed 00:06:33.114 Test: ut_esnap_dev_create ...[2024-07-21 11:48:31.819018] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:06:33.114 [2024-07-21 11:48:31.819105] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:06:33.114 [2024-07-21 11:48:31.819145] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:06:33.114 passed 00:06:33.114 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-21 11:48:31.819197] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1911:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:06:33.114 [2024-07-21 11:48:31.819388] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:06:33.114 [2024-07-21 11:48:31.819439] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:33.114 passed 00:06:33.114 Test: ut_lvol_shallow_copy ...[2024-07-21 11:48:31.819794] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:06:33.114 [2024-07-21 11:48:31.819858] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:06:33.114 passed 00:06:33.114 Test: ut_lvol_set_external_parent ...passed 00:06:33.114 00:06:33.114 [2024-07-21 11:48:31.819958] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:06:33.114 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.114 suites 1 1 n/a 0 0 00:06:33.114 tests 23 23 23 0 0 00:06:33.114 asserts 798 798 798 0 n/a 00:06:33.114 00:06:33.114 Elapsed time = 0.006 seconds 00:06:33.114 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:06:33.114 00:06:33.114 00:06:33.114 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.114 http://cunit.sourceforge.net/ 00:06:33.114 00:06:33.114 00:06:33.114 Suite: zone_block 00:06:33.114 Test: test_zone_block_create ...passed 00:06:33.114 Test: test_zone_block_create_invalid ...[2024-07-21 11:48:31.874565] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:06:33.114 [2024-07-21 11:48:31.874920] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-21 11:48:31.875111] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:06:33.114 [2024-07-21 11:48:31.875196] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-21 11:48:31.875354] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:06:33.114 [2024-07-21 11:48:31.875409] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-21 11:48:31.875506] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:06:33.114 [2024-07-21 11:48:31.875558] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:06:33.114 Test: test_get_zone_info ...[2024-07-21 11:48:31.876126] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.876211] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:33.114 [2024-07-21 11:48:31.876277] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 passed 00:06:33.114 Test: test_supported_io_types ...passed 00:06:33.114 Test: test_reset_zone ...[2024-07-21 11:48:31.877155] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.877242] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 passed 00:06:33.114 Test: test_open_zone ...[2024-07-21 11:48:31.877723] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.878446] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.878539] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 passed 00:06:33.114 Test: test_zone_write ...[2024-07-21 11:48:31.879086] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:33.114 [2024-07-21 11:48:31.879164] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.879240] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:33.114 [2024-07-21 11:48:31.879302] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.884998] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:06:33.114 [2024-07-21 11:48:31.885063] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.885148] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:06:33.114 [2024-07-21 11:48:31.885193] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 passed 00:06:33.114 Test: test_zone_read ...[2024-07-21 11:48:31.890871] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:33.114 [2024-07-21 11:48:31.890970] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
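The test_zone_write errors above encode the write rules for the zone_block vbdev: a write has to land in a valid zone, start exactly at that zone's current write pointer, and stay within the zone capacity. The struct and helper below are illustrative only (the field and function names are not the module's own); they merely restate the checks behind the "invalid address (lba 0x407, wp 0x405)" and "Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)" messages:

    #include <stdbool.h>
    #include <stdint.h>

    struct example_zone {
        uint64_t start_lba;        /* first LBA of the zone        */
        uint64_t write_pointer;    /* next LBA that may be written */
        uint64_t capacity;         /* writable blocks in the zone  */
    };

    static bool
    example_zone_write_ok(const struct example_zone *zone, uint64_t lba, uint64_t len)
    {
        if (lba != zone->write_pointer) {
            return false;          /* write must start at the write pointer */
        }
        if (lba + len > zone->start_lba + zone->capacity) {
            return false;          /* write must not exceed the zone capacity */
        }
        return true;
    }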
00:06:33.114 [2024-07-21 11:48:31.891470] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:06:33.114 [2024-07-21 11:48:31.891522] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.891601] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:06:33.114 [2024-07-21 11:48:31.891640] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.892074] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:06:33.114 [2024-07-21 11:48:31.892155] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 passed 00:06:33.114 Test: test_close_zone ...[2024-07-21 11:48:31.892538] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.892635] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.892871] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 passed 00:06:33.114 Test: test_finish_zone ...[2024-07-21 11:48:31.892944] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.893566] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.893653] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 passed 00:06:33.114 Test: test_append_zone ...[2024-07-21 11:48:31.894039] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:06:33.114 [2024-07-21 11:48:31.894093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.894160] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:06:33.114 [2024-07-21 11:48:31.894184] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:06:33.114 [2024-07-21 11:48:31.905451] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:06:33.114 [2024-07-21 11:48:31.905522] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:06:33.114 passed 00:06:33.114 00:06:33.114 Run Summary: Type Total Ran Passed Failed Inactive 00:06:33.114 suites 1 1 n/a 0 0 00:06:33.114 tests 11 11 11 0 0 00:06:33.114 asserts 3437 3437 3437 0 n/a 00:06:33.114 00:06:33.114 Elapsed time = 0.032 seconds 00:06:33.114 11:48:31 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:06:33.373 00:06:33.373 00:06:33.373 CUnit - A unit testing framework for C - Version 2.1-3 00:06:33.373 http://cunit.sourceforge.net/ 00:06:33.373 00:06:33.373 00:06:33.373 Suite: bdev 00:06:33.373 Test: basic ...[2024-07-21 11:48:32.010561] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x562e4f61a8a1): Operation not permitted (rc=-1) 00:06:33.373 [2024-07-21 11:48:32.010946] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x562e4f61a860): Operation not permitted (rc=-1) 00:06:33.373 [2024-07-21 11:48:32.011003] thread.c:2369:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x562e4f61a8a1): Operation not permitted (rc=-1) 00:06:33.373 passed 00:06:33.373 Test: unregister_and_close ...passed 00:06:33.373 Test: unregister_and_close_different_threads ...passed 00:06:33.373 Test: basic_qos ...passed 00:06:33.631 Test: put_channel_during_reset ...passed 00:06:33.631 Test: aborted_reset ...passed 00:06:33.631 Test: aborted_reset_no_outstanding_io ...passed 00:06:33.631 Test: io_during_reset ...passed 00:06:33.631 Test: reset_completions ...passed 00:06:33.631 Test: io_during_qos_queue ...passed 00:06:33.890 Test: io_during_qos_reset ...passed 00:06:33.890 Test: enomem ...passed 00:06:33.890 Test: enomem_multi_bdev ...passed 00:06:33.890 Test: enomem_multi_bdev_unregister ...passed 00:06:33.890 Test: enomem_multi_io_target ...passed 00:06:33.890 Test: qos_dynamic_enable ...passed 00:06:33.890 Test: bdev_histograms_mt ...passed 00:06:34.148 Test: bdev_set_io_timeout_mt ...[2024-07-21 11:48:32.772200] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:06:34.149 passed 00:06:34.149 Test: lock_lba_range_then_submit_io ...[2024-07-21 11:48:32.792210] thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x562e4f61a820 already registered (old:0x6130000003c0 new:0x613000000c80) 00:06:34.149 passed 00:06:34.149 Test: unregister_during_reset ...passed 00:06:34.149 Test: event_notify_and_close ...passed 00:06:34.149 Test: unregister_and_qos_poller ...passed 00:06:34.149 Suite: bdev_wrong_thread 00:06:34.149 Test: spdk_bdev_register_wt ...[2024-07-21 11:48:32.948803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8459:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x618000001480 (0x618000001480) 00:06:34.149 passed 00:06:34.149 Test: spdk_bdev_examine_wt ...[2024-07-21 11:48:32.949154] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 810:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000001480 (0x618000001480) 00:06:34.149 passed 00:06:34.149 00:06:34.149 Run Summary: Type Total Ran Passed Failed Inactive 00:06:34.149 suites 2 2 n/a 0 0 00:06:34.149 tests 24 24 24 0 0 00:06:34.149 asserts 621 621 621 0 n/a 00:06:34.149 00:06:34.149 Elapsed time = 0.968 seconds 00:06:34.149 00:06:34.149 real 0m3.736s 00:06:34.149 user 0m1.801s 00:06:34.149 sys 0m1.930s 00:06:34.149 11:48:32 unittest.unittest_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:34.149 11:48:32 
unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:34.149 ************************************ 00:06:34.149 END TEST unittest_bdev 00:06:34.149 ************************************ 00:06:34.149 11:48:33 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:34.406 11:48:33 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:34.406 11:48:33 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:34.407 11:48:33 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:34.407 11:48:33 unittest -- unit/unittest.sh@230 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:34.407 11:48:33 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:34.407 11:48:33 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:34.407 11:48:33 unittest -- common/autotest_common.sh@10 -- # set +x 00:06:34.407 ************************************ 00:06:34.407 START TEST unittest_bdev_raid5f 00:06:34.407 ************************************ 00:06:34.407 11:48:33 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:06:34.407 00:06:34.407 00:06:34.407 CUnit - A unit testing framework for C - Version 2.1-3 00:06:34.407 http://cunit.sourceforge.net/ 00:06:34.407 00:06:34.407 00:06:34.407 Suite: raid5f 00:06:34.407 Test: test_raid5f_start ...passed 00:06:34.972 Test: test_raid5f_submit_read_request ...passed 00:06:35.230 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:06:40.493 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:07:02.408 Test: test_raid5f_chunk_write_error ...passed 00:07:14.614 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:07:18.793 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:08:05.456 Test: test_raid5f_submit_read_request_degraded ...passed 00:08:05.456 00:08:05.456 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.456 suites 1 1 n/a 0 0 00:08:05.456 tests 8 8 8 0 0 00:08:05.456 asserts 518158 518158 518158 0 n/a 00:08:05.456 00:08:05.456 Elapsed time = 86.791 seconds 00:08:05.456 00:08:05.456 real 1m26.885s 00:08:05.456 user 1m22.524s 00:08:05.456 sys 0m4.356s 00:08:05.456 11:49:59 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:05.456 11:49:59 unittest.unittest_bdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:08:05.456 ************************************ 00:08:05.456 END TEST unittest_bdev_raid5f 00:08:05.456 ************************************ 00:08:05.456 11:49:59 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:08:05.456 11:49:59 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:05.456 11:49:59 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:05.456 11:49:59 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:05.456 ************************************ 00:08:05.456 START TEST unittest_blob_blobfs 00:08:05.456 ************************************ 00:08:05.456 11:49:59 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1121 -- # unittest_blob 00:08:05.456 
11:49:59 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:08:05.456 11:49:59 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:08:05.456 00:08:05.456 00:08:05.456 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.456 http://cunit.sourceforge.net/ 00:08:05.456 00:08:05.456 00:08:05.456 Suite: blob_nocopy_noextent 00:08:05.456 Test: blob_init ...[2024-07-21 11:50:00.004329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:05.456 passed 00:08:05.456 Test: blob_thin_provision ...passed 00:08:05.456 Test: blob_read_only ...passed 00:08:05.456 Test: bs_load ...[2024-07-21 11:50:00.107825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:05.456 passed 00:08:05.456 Test: bs_load_custom_cluster_size ...passed 00:08:05.456 Test: bs_load_after_failed_grow ...passed 00:08:05.456 Test: bs_cluster_sz ...[2024-07-21 11:50:00.145467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:05.456 [2024-07-21 11:50:00.145976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:08:05.456 [2024-07-21 11:50:00.146197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:05.456 passed 00:08:05.456 Test: bs_resize_md ...passed 00:08:05.456 Test: bs_destroy ...passed 00:08:05.456 Test: bs_type ...passed 00:08:05.456 Test: bs_super_block ...passed 00:08:05.456 Test: bs_test_recover_cluster_count ...passed 00:08:05.456 Test: bs_grow_live ...passed 00:08:05.456 Test: bs_grow_live_no_space ...passed 00:08:05.456 Test: bs_test_grow ...passed 00:08:05.456 Test: blob_serialize_test ...passed 00:08:05.456 Test: super_block_crc ...passed 00:08:05.456 Test: blob_thin_prov_write_count_io ...passed 00:08:05.456 Test: blob_thin_prov_unmap_cluster ...passed 00:08:05.456 Test: bs_load_iter_test ...passed 00:08:05.456 Test: blob_relations ...[2024-07-21 11:50:00.379971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.456 [2024-07-21 11:50:00.380113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.456 [2024-07-21 11:50:00.381124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.456 [2024-07-21 11:50:00.381192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.456 passed 00:08:05.456 Test: blob_relations2 ...[2024-07-21 11:50:00.397806] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.456 [2024-07-21 11:50:00.397920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.456 [2024-07-21 11:50:00.397958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with 
more than one clone 00:08:05.456 [2024-07-21 11:50:00.397986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.456 [2024-07-21 11:50:00.399477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.456 [2024-07-21 11:50:00.399541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 [2024-07-21 11:50:00.399951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.457 [2024-07-21 11:50:00.400012] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 passed 00:08:05.457 Test: blob_relations3 ...passed 00:08:05.457 Test: blobstore_clean_power_failure ...passed 00:08:05.457 Test: blob_delete_snapshot_power_failure ...[2024-07-21 11:50:00.588387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:05.457 [2024-07-21 11:50:00.603006] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:05.457 [2024-07-21 11:50:00.603123] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:05.457 [2024-07-21 11:50:00.603172] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 [2024-07-21 11:50:00.617675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:05.457 [2024-07-21 11:50:00.617785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:05.457 [2024-07-21 11:50:00.617820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:05.457 [2024-07-21 11:50:00.617880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 [2024-07-21 11:50:00.632372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:05.457 [2024-07-21 11:50:00.632517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 [2024-07-21 11:50:00.647128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:05.457 [2024-07-21 11:50:00.647290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 [2024-07-21 11:50:00.662037] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:05.457 [2024-07-21 11:50:00.662174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 passed 00:08:05.457 Test: blob_create_snapshot_power_failure ...[2024-07-21 11:50:00.706101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:05.457 [2024-07-21 11:50:00.734876] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:05.457 [2024-07-21 11:50:00.749444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:05.457 passed 00:08:05.457 Test: blob_io_unit ...passed 00:08:05.457 Test: blob_io_unit_compatibility ...passed 00:08:05.457 Test: blob_ext_md_pages ...passed 00:08:05.457 Test: blob_esnap_io_4096_4096 ...passed 00:08:05.457 Test: blob_esnap_io_512_512 ...passed 00:08:05.457 Test: blob_esnap_io_4096_512 ...passed 00:08:05.457 Test: blob_esnap_io_512_4096 ...passed 00:08:05.457 Test: blob_esnap_clone_resize ...passed 00:08:05.457 Suite: blob_bs_nocopy_noextent 00:08:05.457 Test: blob_open ...passed 00:08:05.457 Test: blob_create ...[2024-07-21 11:50:01.076479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:05.457 passed 00:08:05.457 Test: blob_create_loop ...passed 00:08:05.457 Test: blob_create_fail ...[2024-07-21 11:50:01.187897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:05.457 passed 00:08:05.457 Test: blob_create_internal ...passed 00:08:05.457 Test: blob_create_zero_extent ...passed 00:08:05.457 Test: blob_snapshot ...passed 00:08:05.457 Test: blob_clone ...passed 00:08:05.457 Test: blob_inflate ...[2024-07-21 11:50:01.406718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:05.457 passed 00:08:05.457 Test: blob_delete ...passed 00:08:05.457 Test: blob_resize_test ...[2024-07-21 11:50:01.486182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:05.457 passed 00:08:05.457 Test: blob_resize_thin_test ...passed 00:08:05.457 Test: channel_ops ...passed 00:08:05.457 Test: blob_super ...passed 00:08:05.457 Test: blob_rw_verify_iov ...passed 00:08:05.457 Test: blob_unmap ...passed 00:08:05.457 Test: blob_iter ...passed 00:08:05.457 Test: blob_parse_md ...passed 00:08:05.457 Test: bs_load_pending_removal ...passed 00:08:05.457 Test: bs_unload ...[2024-07-21 11:50:01.849146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:05.457 passed 00:08:05.457 Test: bs_usable_clusters ...passed 00:08:05.457 Test: blob_crc ...[2024-07-21 11:50:01.929425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:05.457 [2024-07-21 11:50:01.929580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:05.457 passed 00:08:05.457 Test: blob_flags ...passed 00:08:05.457 Test: bs_version ...passed 00:08:05.457 Test: blob_set_xattrs_test ...[2024-07-21 11:50:02.052794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:05.457 [2024-07-21 11:50:02.052924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:05.457 passed 00:08:05.457 Test: blob_thin_prov_alloc ...passed 
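Several of the *ERROR* lines logged in this suite are deliberate negative-path checks on blobstore options rather than real failures: blob_init hands the store a device with an unsupported 500-byte block length, and bs_cluster_sz asks for a 4095-byte cluster, smaller than the 4096-byte page size, so the init path logs the errors above and completes with a nonzero bserrno. A minimal sketch of the option being exercised, assuming a caller-supplied spdk_bs_dev and a running SPDK application; names follow the public blobstore API, but treat exact signatures as version-dependent assumptions:

    #include "spdk/blob.h"

    /* Completion callback: records the rc the way the tests above check it. */
    static void
    init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
    {
        int *rc = cb_arg;
        *rc = bserrno;   /* negative errno when options are rejected, as in the log */
    }

    /* Minimal sketch: pick a cluster size before initializing the blobstore. */
    static void
    init_blobstore_with_cluster_size(struct spdk_bs_dev *bs_dev, int *rc)
    {
        struct spdk_bs_opts opts;

        spdk_bs_opts_init(&opts, sizeof(opts));
        opts.cluster_sz = 4096;              /* must be at least the 4096-byte page size */
        spdk_bs_init(bs_dev, &opts, init_done, rc);
    }
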
00:08:05.457 Test: blob_insert_cluster_msg_test ...passed 00:08:05.457 Test: blob_thin_prov_rw ...passed 00:08:05.457 Test: blob_thin_prov_rle ...passed 00:08:05.457 Test: blob_thin_prov_rw_iov ...passed 00:08:05.457 Test: blob_snapshot_rw ...passed 00:08:05.457 Test: blob_snapshot_rw_iov ...passed 00:08:05.457 Test: blob_inflate_rw ...passed 00:08:05.457 Test: blob_snapshot_freeze_io ...passed 00:08:05.457 Test: blob_operation_split_rw ...passed 00:08:05.457 Test: blob_operation_split_rw_iov ...passed 00:08:05.457 Test: blob_simultaneous_operations ...[2024-07-21 11:50:03.137920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:05.457 [2024-07-21 11:50:03.138069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 [2024-07-21 11:50:03.139327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:05.457 [2024-07-21 11:50:03.139399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 [2024-07-21 11:50:03.151600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:05.457 [2024-07-21 11:50:03.151705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 [2024-07-21 11:50:03.151858] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:05.457 [2024-07-21 11:50:03.151898] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.457 passed 00:08:05.457 Test: blob_persist_test ...passed 00:08:05.457 Test: blob_decouple_snapshot ...passed 00:08:05.457 Test: blob_seek_io_unit ...passed 00:08:05.457 Test: blob_nested_freezes ...passed 00:08:05.457 Test: blob_clone_resize ...passed 00:08:05.457 Test: blob_shallow_copy ...[2024-07-21 11:50:03.466220] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:05.457 [2024-07-21 11:50:03.466614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:05.457 [2024-07-21 11:50:03.466865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:05.457 passed 00:08:05.457 Suite: blob_blob_nocopy_noextent 00:08:05.457 Test: blob_write ...passed 00:08:05.457 Test: blob_read ...passed 00:08:05.457 Test: blob_rw_verify ...passed 00:08:05.457 Test: blob_rw_verify_iov_nomem ...passed 00:08:05.457 Test: blob_rw_iov_read_only ...passed 00:08:05.457 Test: blob_xattr ...passed 00:08:05.457 Test: blob_dirty_shutdown ...passed 00:08:05.457 Test: blob_is_degraded ...passed 00:08:05.457 Suite: blob_esnap_bs_nocopy_noextent 00:08:05.457 Test: blob_esnap_create ...passed 00:08:05.457 Test: blob_esnap_thread_add_remove ...passed 00:08:05.457 Test: blob_esnap_clone_snapshot ...passed 00:08:05.457 Test: blob_esnap_clone_inflate ...passed 00:08:05.457 Test: blob_esnap_clone_decouple ...passed 00:08:05.457 Test: blob_esnap_clone_reload 
...passed 00:08:05.457 Test: blob_esnap_hotplug ...passed 00:08:05.457 Test: blob_set_parent ...[2024-07-21 11:50:04.130698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:05.457 [2024-07-21 11:50:04.130808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:05.457 [2024-07-21 11:50:04.130976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:05.457 [2024-07-21 11:50:04.131036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:05.457 [2024-07-21 11:50:04.131626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:05.457 passed 00:08:05.457 Test: blob_set_external_parent ...[2024-07-21 11:50:04.172165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:05.457 [2024-07-21 11:50:04.172286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:05.457 [2024-07-21 11:50:04.172321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:05.457 [2024-07-21 11:50:04.172800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:05.457 passed 00:08:05.457 Suite: blob_nocopy_extent 00:08:05.457 Test: blob_init ...[2024-07-21 11:50:04.186447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:05.457 passed 00:08:05.457 Test: blob_thin_provision ...passed 00:08:05.457 Test: blob_read_only ...passed 00:08:05.457 Test: bs_load ...[2024-07-21 11:50:04.242194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:05.457 passed 00:08:05.457 Test: bs_load_custom_cluster_size ...passed 00:08:05.457 Test: bs_load_after_failed_grow ...passed 00:08:05.457 Test: bs_cluster_sz ...[2024-07-21 11:50:04.272626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:05.457 [2024-07-21 11:50:04.272918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:05.457 [2024-07-21 11:50:04.272989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:05.457 passed 00:08:05.457 Test: bs_resize_md ...passed 00:08:05.457 Test: bs_destroy ...passed 00:08:05.716 Test: bs_type ...passed 00:08:05.716 Test: bs_super_block ...passed 00:08:05.716 Test: bs_test_recover_cluster_count ...passed 00:08:05.716 Test: bs_grow_live ...passed 00:08:05.716 Test: bs_grow_live_no_space ...passed 00:08:05.716 Test: bs_test_grow ...passed 00:08:05.716 Test: blob_serialize_test ...passed 00:08:05.716 Test: super_block_crc ...passed 00:08:05.716 Test: blob_thin_prov_write_count_io ...passed 00:08:05.716 Test: blob_thin_prov_unmap_cluster ...passed 00:08:05.716 Test: bs_load_iter_test ...passed 00:08:05.716 Test: blob_relations ...[2024-07-21 11:50:04.485016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.716 [2024-07-21 11:50:04.485167] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.716 [2024-07-21 11:50:04.486146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.716 [2024-07-21 11:50:04.486211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.716 passed 00:08:05.716 Test: blob_relations2 ...[2024-07-21 11:50:04.502647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.716 [2024-07-21 11:50:04.502775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.716 [2024-07-21 11:50:04.502810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.716 [2024-07-21 11:50:04.502841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.716 [2024-07-21 11:50:04.504297] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.716 [2024-07-21 11:50:04.504382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.716 [2024-07-21 11:50:04.504814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:05.716 [2024-07-21 11:50:04.504879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.716 passed 00:08:05.716 Test: blob_relations3 ...passed 00:08:05.975 Test: blobstore_clean_power_failure ...passed 00:08:05.975 Test: blob_delete_snapshot_power_failure ...[2024-07-21 11:50:04.695281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:05.975 [2024-07-21 11:50:04.710090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:05.975 [2024-07-21 11:50:04.724937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:05.975 [2024-07-21 11:50:04.725044] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:05.975 [2024-07-21 11:50:04.725089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.975 [2024-07-21 11:50:04.739681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:05.975 [2024-07-21 11:50:04.739814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:05.975 [2024-07-21 11:50:04.739847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:05.975 [2024-07-21 11:50:04.739889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.975 [2024-07-21 11:50:04.754723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:05.975 [2024-07-21 11:50:04.754852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:05.975 [2024-07-21 11:50:04.754892] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:05.975 [2024-07-21 11:50:04.754942] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.975 [2024-07-21 11:50:04.770272] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:05.975 [2024-07-21 11:50:04.770412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.975 [2024-07-21 11:50:04.785042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:05.975 [2024-07-21 11:50:04.785193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.975 [2024-07-21 11:50:04.800128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:05.975 [2024-07-21 11:50:04.800259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:05.975 passed 00:08:06.233 Test: blob_create_snapshot_power_failure ...[2024-07-21 11:50:04.844168] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:06.233 [2024-07-21 11:50:04.858429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:06.233 [2024-07-21 11:50:04.886812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:06.233 [2024-07-21 11:50:04.901449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:06.233 passed 00:08:06.233 Test: blob_io_unit ...passed 00:08:06.233 Test: blob_io_unit_compatibility ...passed 00:08:06.233 Test: blob_ext_md_pages ...passed 00:08:06.233 Test: blob_esnap_io_4096_4096 ...passed 00:08:06.233 Test: blob_esnap_io_512_512 ...passed 00:08:06.233 Test: blob_esnap_io_4096_512 ...passed 00:08:06.491 Test: 
blob_esnap_io_512_4096 ...passed 00:08:06.491 Test: blob_esnap_clone_resize ...passed 00:08:06.491 Suite: blob_bs_nocopy_extent 00:08:06.491 Test: blob_open ...passed 00:08:06.491 Test: blob_create ...[2024-07-21 11:50:05.226118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:06.491 passed 00:08:06.491 Test: blob_create_loop ...passed 00:08:06.491 Test: blob_create_fail ...[2024-07-21 11:50:05.345230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:06.748 passed 00:08:06.748 Test: blob_create_internal ...passed 00:08:06.748 Test: blob_create_zero_extent ...passed 00:08:06.748 Test: blob_snapshot ...passed 00:08:06.748 Test: blob_clone ...passed 00:08:06.748 Test: blob_inflate ...[2024-07-21 11:50:05.565804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:06.748 passed 00:08:07.007 Test: blob_delete ...passed 00:08:07.007 Test: blob_resize_test ...[2024-07-21 11:50:05.644692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:07.007 passed 00:08:07.007 Test: blob_resize_thin_test ...passed 00:08:07.007 Test: channel_ops ...passed 00:08:07.007 Test: blob_super ...passed 00:08:07.007 Test: blob_rw_verify_iov ...passed 00:08:07.007 Test: blob_unmap ...passed 00:08:07.265 Test: blob_iter ...passed 00:08:07.265 Test: blob_parse_md ...passed 00:08:07.265 Test: bs_load_pending_removal ...passed 00:08:07.265 Test: bs_unload ...[2024-07-21 11:50:06.011661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:07.265 passed 00:08:07.265 Test: bs_usable_clusters ...passed 00:08:07.265 Test: blob_crc ...[2024-07-21 11:50:06.092880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:07.265 [2024-07-21 11:50:06.093034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:07.265 passed 00:08:07.524 Test: blob_flags ...passed 00:08:07.524 Test: bs_version ...passed 00:08:07.524 Test: blob_set_xattrs_test ...[2024-07-21 11:50:06.216247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:07.524 [2024-07-21 11:50:06.216367] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:07.524 passed 00:08:07.524 Test: blob_thin_prov_alloc ...passed 00:08:07.782 Test: blob_insert_cluster_msg_test ...passed 00:08:07.782 Test: blob_thin_prov_rw ...passed 00:08:07.782 Test: blob_thin_prov_rle ...passed 00:08:07.782 Test: blob_thin_prov_rw_iov ...passed 00:08:07.782 Test: blob_snapshot_rw ...passed 00:08:07.782 Test: blob_snapshot_rw_iov ...passed 00:08:08.040 Test: blob_inflate_rw ...passed 00:08:08.299 Test: blob_snapshot_freeze_io ...passed 00:08:08.299 Test: blob_operation_split_rw ...passed 00:08:08.558 Test: blob_operation_split_rw_iov ...passed 00:08:08.558 Test: blob_simultaneous_operations ...[2024-07-21 11:50:07.305341] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:08.558 [2024-07-21 11:50:07.305449] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:08.558 [2024-07-21 11:50:07.306634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:08.558 [2024-07-21 11:50:07.306686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:08.558 [2024-07-21 11:50:07.318471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:08.558 [2024-07-21 11:50:07.318546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:08.558 [2024-07-21 11:50:07.318697] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:08.558 [2024-07-21 11:50:07.318723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:08.558 passed 00:08:08.558 Test: blob_persist_test ...passed 00:08:08.817 Test: blob_decouple_snapshot ...passed 00:08:08.817 Test: blob_seek_io_unit ...passed 00:08:08.817 Test: blob_nested_freezes ...passed 00:08:08.817 Test: blob_clone_resize ...passed 00:08:08.817 Test: blob_shallow_copy ...[2024-07-21 11:50:07.639570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:08.817 [2024-07-21 11:50:07.639902] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:08.817 [2024-07-21 11:50:07.640127] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:08.817 passed 00:08:08.817 Suite: blob_blob_nocopy_extent 00:08:09.076 Test: blob_write ...passed 00:08:09.076 Test: blob_read ...passed 00:08:09.076 Test: blob_rw_verify ...passed 00:08:09.076 Test: blob_rw_verify_iov_nomem ...passed 00:08:09.076 Test: blob_rw_iov_read_only ...passed 00:08:09.076 Test: blob_xattr ...passed 00:08:09.334 Test: blob_dirty_shutdown ...passed 00:08:09.334 Test: blob_is_degraded ...passed 00:08:09.334 Suite: blob_esnap_bs_nocopy_extent 00:08:09.334 Test: blob_esnap_create ...passed 00:08:09.334 Test: blob_esnap_thread_add_remove ...passed 00:08:09.334 Test: blob_esnap_clone_snapshot ...passed 00:08:09.334 Test: blob_esnap_clone_inflate ...passed 00:08:09.334 Test: blob_esnap_clone_decouple ...passed 00:08:09.592 Test: blob_esnap_clone_reload ...passed 00:08:09.592 Test: blob_esnap_hotplug ...passed 00:08:09.592 Test: blob_set_parent ...[2024-07-21 11:50:08.291954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:09.592 [2024-07-21 11:50:08.292091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:09.592 [2024-07-21 11:50:08.292216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:09.592 
[2024-07-21 11:50:08.292259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:09.592 [2024-07-21 11:50:08.292718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:09.592 passed 00:08:09.592 Test: blob_set_external_parent ...[2024-07-21 11:50:08.332849] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:09.592 [2024-07-21 11:50:08.332929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:09.592 [2024-07-21 11:50:08.332956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:09.592 [2024-07-21 11:50:08.333343] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:09.592 passed 00:08:09.592 Suite: blob_copy_noextent 00:08:09.592 Test: blob_init ...[2024-07-21 11:50:08.346820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:09.592 passed 00:08:09.592 Test: blob_thin_provision ...passed 00:08:09.592 Test: blob_read_only ...passed 00:08:09.592 Test: bs_load ...[2024-07-21 11:50:08.400423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:09.592 passed 00:08:09.592 Test: bs_load_custom_cluster_size ...passed 00:08:09.592 Test: bs_load_after_failed_grow ...passed 00:08:09.592 Test: bs_cluster_sz ...[2024-07-21 11:50:08.429193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:09.592 [2024-07-21 11:50:08.429463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:08:09.592 [2024-07-21 11:50:08.429511] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:09.592 passed 00:08:09.851 Test: bs_resize_md ...passed 00:08:09.851 Test: bs_destroy ...passed 00:08:09.851 Test: bs_type ...passed 00:08:09.851 Test: bs_super_block ...passed 00:08:09.851 Test: bs_test_recover_cluster_count ...passed 00:08:09.851 Test: bs_grow_live ...passed 00:08:09.851 Test: bs_grow_live_no_space ...passed 00:08:09.851 Test: bs_test_grow ...passed 00:08:09.851 Test: blob_serialize_test ...passed 00:08:09.851 Test: super_block_crc ...passed 00:08:09.851 Test: blob_thin_prov_write_count_io ...passed 00:08:09.851 Test: blob_thin_prov_unmap_cluster ...passed 00:08:09.851 Test: bs_load_iter_test ...passed 00:08:09.851 Test: blob_relations ...[2024-07-21 11:50:08.650243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:09.851 [2024-07-21 11:50:08.650375] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:09.851 [2024-07-21 11:50:08.651044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:09.851 [2024-07-21 11:50:08.651094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:09.851 passed 00:08:09.851 Test: blob_relations2 ...[2024-07-21 11:50:08.666843] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:09.851 [2024-07-21 11:50:08.666945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:09.851 [2024-07-21 11:50:08.666980] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:09.851 [2024-07-21 11:50:08.666997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:09.851 [2024-07-21 11:50:08.667988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:09.851 [2024-07-21 11:50:08.668042] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:09.851 [2024-07-21 11:50:08.668352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:09.851 [2024-07-21 11:50:08.668394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:09.851 passed 00:08:09.851 Test: blob_relations3 ...passed 00:08:10.109 Test: blobstore_clean_power_failure ...passed 00:08:10.109 Test: blob_delete_snapshot_power_failure ...[2024-07-21 11:50:08.858082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:10.109 [2024-07-21 11:50:08.872403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:10.109 [2024-07-21 11:50:08.872518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:10.109 [2024-07-21 11:50:08.872550] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:10.109 [2024-07-21 11:50:08.886690] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:10.109 [2024-07-21 11:50:08.886789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:10.109 [2024-07-21 11:50:08.886816] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:10.109 [2024-07-21 11:50:08.886859] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:10.109 [2024-07-21 11:50:08.901144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:10.109 [2024-07-21 11:50:08.901277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:10.109 [2024-07-21 11:50:08.915632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:10.109 [2024-07-21 11:50:08.915786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:10.109 [2024-07-21 11:50:08.930043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:10.109 [2024-07-21 11:50:08.930163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:10.109 passed 00:08:10.109 Test: blob_create_snapshot_power_failure ...[2024-07-21 11:50:08.973014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:10.368 [2024-07-21 11:50:09.001040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:08:10.368 [2024-07-21 11:50:09.015412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:10.368 passed 00:08:10.368 Test: blob_io_unit ...passed 00:08:10.368 Test: blob_io_unit_compatibility ...passed 00:08:10.368 Test: blob_ext_md_pages ...passed 00:08:10.368 Test: blob_esnap_io_4096_4096 ...passed 00:08:10.368 Test: blob_esnap_io_512_512 ...passed 00:08:10.368 Test: blob_esnap_io_4096_512 ...passed 00:08:10.368 Test: blob_esnap_io_512_4096 ...passed 00:08:10.627 Test: blob_esnap_clone_resize ...passed 00:08:10.627 Suite: blob_bs_copy_noextent 00:08:10.627 Test: blob_open ...passed 00:08:10.627 Test: blob_create ...[2024-07-21 11:50:09.332265] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:10.627 passed 00:08:10.627 Test: blob_create_loop ...passed 00:08:10.627 Test: blob_create_fail ...[2024-07-21 11:50:09.440605] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:10.627 passed 00:08:10.885 Test: blob_create_internal ...passed 00:08:10.885 Test: blob_create_zero_extent ...passed 00:08:10.885 Test: blob_snapshot ...passed 00:08:10.885 Test: blob_clone ...passed 00:08:10.885 Test: blob_inflate 
...[2024-07-21 11:50:09.644039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:08:10.885 passed 00:08:10.885 Test: blob_delete ...passed 00:08:10.885 Test: blob_resize_test ...[2024-07-21 11:50:09.721082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:10.885 passed 00:08:11.144 Test: blob_resize_thin_test ...passed 00:08:11.144 Test: channel_ops ...passed 00:08:11.144 Test: blob_super ...passed 00:08:11.144 Test: blob_rw_verify_iov ...passed 00:08:11.144 Test: blob_unmap ...passed 00:08:11.144 Test: blob_iter ...passed 00:08:11.402 Test: blob_parse_md ...passed 00:08:11.402 Test: bs_load_pending_removal ...passed 00:08:11.402 Test: bs_unload ...[2024-07-21 11:50:10.082844] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:11.402 passed 00:08:11.402 Test: bs_usable_clusters ...passed 00:08:11.402 Test: blob_crc ...[2024-07-21 11:50:10.163426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:11.402 [2024-07-21 11:50:10.163579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:11.402 passed 00:08:11.402 Test: blob_flags ...passed 00:08:11.402 Test: bs_version ...passed 00:08:11.661 Test: blob_set_xattrs_test ...[2024-07-21 11:50:10.286692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:11.661 [2024-07-21 11:50:10.286851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:11.661 passed 00:08:11.661 Test: blob_thin_prov_alloc ...passed 00:08:11.661 Test: blob_insert_cluster_msg_test ...passed 00:08:11.920 Test: blob_thin_prov_rw ...passed 00:08:11.920 Test: blob_thin_prov_rle ...passed 00:08:11.920 Test: blob_thin_prov_rw_iov ...passed 00:08:11.920 Test: blob_snapshot_rw ...passed 00:08:11.920 Test: blob_snapshot_rw_iov ...passed 00:08:12.178 Test: blob_inflate_rw ...passed 00:08:12.178 Test: blob_snapshot_freeze_io ...passed 00:08:12.436 Test: blob_operation_split_rw ...passed 00:08:12.697 Test: blob_operation_split_rw_iov ...passed 00:08:12.697 Test: blob_simultaneous_operations ...[2024-07-21 11:50:11.328327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:12.697 [2024-07-21 11:50:11.328502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.697 [2024-07-21 11:50:11.329195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:12.697 [2024-07-21 11:50:11.329245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.697 [2024-07-21 11:50:11.332537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:12.697 [2024-07-21 11:50:11.332602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.697 [2024-07-21 11:50:11.332734] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:12.697 [2024-07-21 11:50:11.332757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:12.697 passed 00:08:12.697 Test: blob_persist_test ...passed 00:08:12.697 Test: blob_decouple_snapshot ...passed 00:08:12.697 Test: blob_seek_io_unit ...passed 00:08:12.697 Test: blob_nested_freezes ...passed 00:08:12.961 Test: blob_clone_resize ...passed 00:08:12.961 Test: blob_shallow_copy ...[2024-07-21 11:50:11.595737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:12.961 [2024-07-21 11:50:11.596111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:12.961 [2024-07-21 11:50:11.596330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:12.961 passed 00:08:12.961 Suite: blob_blob_copy_noextent 00:08:12.962 Test: blob_write ...passed 00:08:12.962 Test: blob_read ...passed 00:08:12.962 Test: blob_rw_verify ...passed 00:08:12.962 Test: blob_rw_verify_iov_nomem ...passed 00:08:12.962 Test: blob_rw_iov_read_only ...passed 00:08:12.962 Test: blob_xattr ...passed 00:08:13.221 Test: blob_dirty_shutdown ...passed 00:08:13.221 Test: blob_is_degraded ...passed 00:08:13.221 Suite: blob_esnap_bs_copy_noextent 00:08:13.221 Test: blob_esnap_create ...passed 00:08:13.221 Test: blob_esnap_thread_add_remove ...passed 00:08:13.221 Test: blob_esnap_clone_snapshot ...passed 00:08:13.221 Test: blob_esnap_clone_inflate ...passed 00:08:13.221 Test: blob_esnap_clone_decouple ...passed 00:08:13.479 Test: blob_esnap_clone_reload ...passed 00:08:13.479 Test: blob_esnap_hotplug ...passed 00:08:13.479 Test: blob_set_parent ...[2024-07-21 11:50:12.158540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:13.479 [2024-07-21 11:50:12.158670] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:13.479 [2024-07-21 11:50:12.158787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:13.479 [2024-07-21 11:50:12.158833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:13.479 [2024-07-21 11:50:12.159273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:13.479 passed 00:08:13.479 Test: blob_set_external_parent ...[2024-07-21 11:50:12.193645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:13.479 [2024-07-21 11:50:12.193770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:13.479 [2024-07-21 11:50:12.193812] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:08:13.479 [2024-07-21 11:50:12.194179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:13.479 passed 00:08:13.479 Suite: blob_copy_extent 00:08:13.479 Test: blob_init ...[2024-07-21 11:50:12.205840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5490:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:08:13.479 passed 00:08:13.479 Test: blob_thin_provision ...passed 00:08:13.479 Test: blob_read_only ...passed 00:08:13.479 Test: bs_load ...[2024-07-21 11:50:12.252802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 965:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:08:13.479 passed 00:08:13.479 Test: bs_load_custom_cluster_size ...passed 00:08:13.479 Test: bs_load_after_failed_grow ...passed 00:08:13.479 Test: bs_cluster_sz ...[2024-07-21 11:50:12.278728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:08:13.479 [2024-07-21 11:50:12.278938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5621:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:08:13.479 [2024-07-21 11:50:12.278979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3883:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:08:13.479 passed 00:08:13.479 Test: bs_resize_md ...passed 00:08:13.479 Test: bs_destroy ...passed 00:08:13.479 Test: bs_type ...passed 00:08:13.737 Test: bs_super_block ...passed 00:08:13.737 Test: bs_test_recover_cluster_count ...passed 00:08:13.737 Test: bs_grow_live ...passed 00:08:13.737 Test: bs_grow_live_no_space ...passed 00:08:13.737 Test: bs_test_grow ...passed 00:08:13.737 Test: blob_serialize_test ...passed 00:08:13.737 Test: super_block_crc ...passed 00:08:13.737 Test: blob_thin_prov_write_count_io ...passed 00:08:13.737 Test: blob_thin_prov_unmap_cluster ...passed 00:08:13.737 Test: bs_load_iter_test ...passed 00:08:13.737 Test: blob_relations ...[2024-07-21 11:50:12.464443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:13.737 [2024-07-21 11:50:12.464594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.737 [2024-07-21 11:50:12.465260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:13.737 [2024-07-21 11:50:12.465338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.737 passed 00:08:13.737 Test: blob_relations2 ...[2024-07-21 11:50:12.479934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:13.737 [2024-07-21 11:50:12.480032] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.737 [2024-07-21 11:50:12.480090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:13.737 [2024-07-21 11:50:12.480126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.737 [2024-07-21 
11:50:12.481251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:13.737 [2024-07-21 11:50:12.481311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.737 [2024-07-21 11:50:12.481672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:08:13.737 [2024-07-21 11:50:12.481721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.737 passed 00:08:13.737 Test: blob_relations3 ...passed 00:08:13.995 Test: blobstore_clean_power_failure ...passed 00:08:13.995 Test: blob_delete_snapshot_power_failure ...[2024-07-21 11:50:12.665311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:13.995 [2024-07-21 11:50:12.678462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:13.995 [2024-07-21 11:50:12.691833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:13.995 [2024-07-21 11:50:12.691951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:13.995 [2024-07-21 11:50:12.691995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.995 [2024-07-21 11:50:12.705080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:13.995 [2024-07-21 11:50:12.705191] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:13.995 [2024-07-21 11:50:12.705230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:13.995 [2024-07-21 11:50:12.705257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.995 [2024-07-21 11:50:12.718189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:13.995 [2024-07-21 11:50:12.721355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:08:13.995 [2024-07-21 11:50:12.721414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:08:13.995 [2024-07-21 11:50:12.721446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.995 [2024-07-21 11:50:12.734683] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:08:13.995 [2024-07-21 11:50:12.734802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.995 [2024-07-21 11:50:12.748176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:08:13.995 [2024-07-21 11:50:12.748311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.995 [2024-07-21 11:50:12.761520] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:08:13.995 [2024-07-21 11:50:12.761648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:13.995 passed 00:08:13.995 Test: blob_create_snapshot_power_failure ...[2024-07-21 11:50:12.799998] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:08:13.995 [2024-07-21 11:50:12.812836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:08:13.995 [2024-07-21 11:50:12.838091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1669:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:08:13.995 [2024-07-21 11:50:12.851325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:08:14.252 passed 00:08:14.252 Test: blob_io_unit ...passed 00:08:14.252 Test: blob_io_unit_compatibility ...passed 00:08:14.252 Test: blob_ext_md_pages ...passed 00:08:14.252 Test: blob_esnap_io_4096_4096 ...passed 00:08:14.252 Test: blob_esnap_io_512_512 ...passed 00:08:14.252 Test: blob_esnap_io_4096_512 ...passed 00:08:14.252 Test: blob_esnap_io_512_4096 ...passed 00:08:14.252 Test: blob_esnap_clone_resize ...passed 00:08:14.252 Suite: blob_bs_copy_extent 00:08:14.252 Test: blob_open ...passed 00:08:14.509 Test: blob_create ...[2024-07-21 11:50:13.127020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:08:14.509 passed 00:08:14.509 Test: blob_create_loop ...passed 00:08:14.509 Test: blob_create_fail ...[2024-07-21 11:50:13.232636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:14.509 passed 00:08:14.509 Test: blob_create_internal ...passed 00:08:14.509 Test: blob_create_zero_extent ...passed 00:08:14.509 Test: blob_snapshot ...passed 00:08:14.766 Test: blob_clone ...passed 00:08:14.766 Test: blob_inflate ...[2024-07-21 11:50:13.406126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:08:14.766 passed 00:08:14.766 Test: blob_delete ...passed 00:08:14.766 Test: blob_resize_test ...[2024-07-21 11:50:13.475035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:08:14.766 passed 00:08:14.766 Test: blob_resize_thin_test ...passed 00:08:14.766 Test: channel_ops ...passed 00:08:14.766 Test: blob_super ...passed 00:08:15.022 Test: blob_rw_verify_iov ...passed 00:08:15.022 Test: blob_unmap ...passed 00:08:15.022 Test: blob_iter ...passed 00:08:15.022 Test: blob_parse_md ...passed 00:08:15.022 Test: bs_load_pending_removal ...passed 00:08:15.022 Test: bs_unload ...[2024-07-21 11:50:13.810235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:08:15.022 passed 00:08:15.022 Test: bs_usable_clusters ...passed 00:08:15.022 Test: blob_crc ...[2024-07-21 11:50:13.878719] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:15.022 [2024-07-21 11:50:13.878881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:08:15.278 passed 00:08:15.278 Test: blob_flags ...passed 00:08:15.278 Test: bs_version ...passed 00:08:15.278 Test: blob_set_xattrs_test ...[2024-07-21 11:50:13.991421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:15.278 [2024-07-21 11:50:13.991576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6327:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:08:15.278 passed 00:08:15.548 Test: blob_thin_prov_alloc ...passed 00:08:15.548 Test: blob_insert_cluster_msg_test ...passed 00:08:15.548 Test: blob_thin_prov_rw ...passed 00:08:15.548 Test: blob_thin_prov_rle ...passed 00:08:15.548 Test: blob_thin_prov_rw_iov ...passed 00:08:15.548 Test: blob_snapshot_rw ...passed 00:08:15.548 Test: blob_snapshot_rw_iov ...passed 00:08:15.804 Test: blob_inflate_rw ...passed 00:08:16.060 Test: blob_snapshot_freeze_io ...passed 00:08:16.060 Test: blob_operation_split_rw ...passed 00:08:16.315 Test: blob_operation_split_rw_iov ...passed 00:08:16.315 Test: blob_simultaneous_operations ...[2024-07-21 11:50:15.070523] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.315 [2024-07-21 11:50:15.070741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.315 [2024-07-21 11:50:15.071532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.315 [2024-07-21 11:50:15.071581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.315 [2024-07-21 11:50:15.075090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.315 [2024-07-21 11:50:15.075150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.315 [2024-07-21 11:50:15.075254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:08:16.316 [2024-07-21 11:50:15.075277] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:08:16.316 passed 00:08:16.316 Test: blob_persist_test ...passed 00:08:16.572 Test: blob_decouple_snapshot ...passed 00:08:16.572 Test: blob_seek_io_unit ...passed 00:08:16.572 Test: blob_nested_freezes ...passed 00:08:16.572 Test: blob_clone_resize ...passed 00:08:16.572 Test: blob_shallow_copy ...[2024-07-21 11:50:15.372333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:08:16.572 [2024-07-21 11:50:15.372713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:08:16.572 [2024-07-21 11:50:15.372948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7350:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:08:16.572 passed 00:08:16.572 Suite: blob_blob_copy_extent 00:08:16.572 Test: blob_write ...passed 00:08:16.828 Test: blob_read ...passed 00:08:16.828 Test: blob_rw_verify ...passed 00:08:16.828 Test: blob_rw_verify_iov_nomem ...passed 00:08:16.828 Test: blob_rw_iov_read_only ...passed 00:08:16.828 Test: blob_xattr ...passed 00:08:17.086 Test: blob_dirty_shutdown ...passed 00:08:17.086 Test: blob_is_degraded ...passed 00:08:17.086 Suite: blob_esnap_bs_copy_extent 00:08:17.086 Test: blob_esnap_create ...passed 00:08:17.086 Test: blob_esnap_thread_add_remove ...passed 00:08:17.086 Test: blob_esnap_clone_snapshot ...passed 00:08:17.086 Test: blob_esnap_clone_inflate ...passed 00:08:17.343 Test: blob_esnap_clone_decouple ...passed 00:08:17.343 Test: blob_esnap_clone_reload ...passed 00:08:17.343 Test: blob_esnap_hotplug ...passed 00:08:17.343 Test: blob_set_parent ...[2024-07-21 11:50:16.070820] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:08:17.343 [2024-07-21 11:50:16.070947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:08:17.343 [2024-07-21 11:50:16.071089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:08:17.343 [2024-07-21 11:50:16.071139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:08:17.343 [2024-07-21 11:50:16.071678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:17.343 passed 00:08:17.343 Test: blob_set_external_parent ...[2024-07-21 11:50:16.114829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:08:17.343 [2024-07-21 11:50:16.115031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7795:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:08:17.343 [2024-07-21 11:50:16.115069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:08:17.343 [2024-07-21 11:50:16.115585] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:08:17.343 passed 00:08:17.343 00:08:17.343 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.343 suites 16 16 n/a 0 0 00:08:17.343 tests 376 376 376 0 0 00:08:17.343 asserts 143965 143965 143965 0 n/a 00:08:17.343 00:08:17.343 Elapsed time = 16.123 seconds 00:08:17.600 11:50:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:08:17.600 00:08:17.600 00:08:17.600 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.600 http://cunit.sourceforge.net/ 00:08:17.600 00:08:17.600 00:08:17.600 Suite: blob_bdev 00:08:17.600 Test: create_bs_dev ...passed 00:08:17.600 Test: create_bs_dev_ro ...[2024-07-21 11:50:16.247117] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:08:17.600 passed 00:08:17.600 Test: create_bs_dev_rw ...passed 00:08:17.600 Test: claim_bs_dev ...[2024-07-21 11:50:16.247729] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:08:17.600 passed 00:08:17.600 Test: claim_bs_dev_ro ...passed 00:08:17.600 Test: deferred_destroy_refs ...passed 00:08:17.600 Test: deferred_destroy_channels ...passed 00:08:17.600 Test: deferred_destroy_threads ...passed 00:08:17.600 00:08:17.600 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.600 suites 1 1 n/a 0 0 00:08:17.600 tests 8 8 8 0 0 00:08:17.600 asserts 119 119 119 0 n/a 00:08:17.600 00:08:17.600 Elapsed time = 0.001 seconds 00:08:17.600 11:50:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:08:17.600 00:08:17.600 00:08:17.600 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.600 http://cunit.sourceforge.net/ 00:08:17.600 00:08:17.600 00:08:17.600 Suite: tree 00:08:17.600 Test: blobfs_tree_op_test ...passed 00:08:17.600 00:08:17.600 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.600 suites 1 1 n/a 0 0 00:08:17.600 tests 1 1 1 0 0 00:08:17.600 asserts 27 27 27 0 n/a 00:08:17.600 00:08:17.600 Elapsed time = 0.000 seconds 00:08:17.600 11:50:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:08:17.600 00:08:17.600 00:08:17.600 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.600 http://cunit.sourceforge.net/ 00:08:17.600 00:08:17.600 00:08:17.600 Suite: blobfs_async_ut 00:08:17.600 Test: fs_init ...passed 00:08:17.600 Test: fs_open ...passed 00:08:17.600 Test: fs_create ...passed 00:08:17.600 Test: fs_truncate ...passed 00:08:17.600 Test: fs_rename ...[2024-07-21 11:50:16.448001] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:08:17.600 passed 00:08:17.600 Test: fs_rw_async ...passed 00:08:17.856 Test: fs_writev_readv_async ...passed 00:08:17.856 Test: tree_find_buffer_ut ...passed 00:08:17.856 Test: channel_ops ...passed 00:08:17.856 Test: channel_ops_sync ...passed 00:08:17.856 00:08:17.856 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.856 suites 1 1 n/a 0 0 00:08:17.856 tests 10 10 10 0 0 00:08:17.856 asserts 292 292 292 0 n/a 00:08:17.856 00:08:17.856 Elapsed time = 0.187 seconds 00:08:17.856 11:50:16 unittest.unittest_blob_blobfs -- 
unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:08:17.856 00:08:17.856 00:08:17.856 CUnit - A unit testing framework for C - Version 2.1-3 00:08:17.856 http://cunit.sourceforge.net/ 00:08:17.856 00:08:17.856 00:08:17.856 Suite: blobfs_sync_ut 00:08:17.856 Test: cache_read_after_write ...[2024-07-21 11:50:16.636705] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:08:17.856 passed 00:08:17.856 Test: file_length ...passed 00:08:17.856 Test: append_write_to_extend_blob ...passed 00:08:17.856 Test: partial_buffer ...passed 00:08:17.856 Test: cache_write_null_buffer ...passed 00:08:18.114 Test: fs_create_sync ...passed 00:08:18.114 Test: fs_rename_sync ...passed 00:08:18.114 Test: cache_append_no_cache ...passed 00:08:18.114 Test: fs_delete_file_without_close ...passed 00:08:18.114 00:08:18.114 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.114 suites 1 1 n/a 0 0 00:08:18.114 tests 9 9 9 0 0 00:08:18.114 asserts 345 345 345 0 n/a 00:08:18.114 00:08:18.114 Elapsed time = 0.378 seconds 00:08:18.114 11:50:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:08:18.114 00:08:18.114 00:08:18.114 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.114 http://cunit.sourceforge.net/ 00:08:18.114 00:08:18.114 00:08:18.114 Suite: blobfs_bdev_ut 00:08:18.114 Test: spdk_blobfs_bdev_detect_test ...[2024-07-21 11:50:16.855762] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:18.114 passed 00:08:18.114 Test: spdk_blobfs_bdev_create_test ...[2024-07-21 11:50:16.856377] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:08:18.114 passed 00:08:18.114 Test: spdk_blobfs_bdev_mount_test ...passed 00:08:18.114 00:08:18.114 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.114 suites 1 1 n/a 0 0 00:08:18.114 tests 3 3 3 0 0 00:08:18.114 asserts 9 9 9 0 n/a 00:08:18.114 00:08:18.114 Elapsed time = 0.001 seconds 00:08:18.114 00:08:18.114 real 0m16.900s 00:08:18.114 user 0m16.251s 00:08:18.114 sys 0m0.856s 00:08:18.114 11:50:16 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.114 11:50:16 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:08:18.114 ************************************ 00:08:18.114 END TEST unittest_blob_blobfs 00:08:18.114 ************************************ 00:08:18.114 11:50:16 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:08:18.114 11:50:16 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:18.114 11:50:16 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.114 11:50:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:18.114 ************************************ 00:08:18.114 START TEST unittest_event 00:08:18.114 ************************************ 00:08:18.114 11:50:16 unittest.unittest_event -- common/autotest_common.sh@1121 -- # unittest_event 00:08:18.114 11:50:16 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:08:18.114 00:08:18.114 00:08:18.114 CUnit - A unit testing framework for C - Version 2.1-3 
00:08:18.114 http://cunit.sourceforge.net/ 00:08:18.114 00:08:18.114 00:08:18.114 Suite: app_suite 00:08:18.114 Test: test_spdk_app_parse_args ...app_ut [options] 00:08:18.114 00:08:18.114 CPU options: 00:08:18.114 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:18.114 (like [0,1,10]) 00:08:18.114 --lcores lcore to CPU mapping list. The list is in the format: 00:08:18.114 [<,lcores[@CPUs]>...] 00:08:18.114 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:18.114 Within the group, '-' is used for range separator, 00:08:18.114 ',' is used for single number separator. 00:08:18.114 '( )' can be omitted for single element group, 00:08:18.114 '@' can be omitted if cpus and lcores have the same value 00:08:18.114 --disable-cpumask-locks Disable CPU core lock files. 00:08:18.114 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:18.114 pollers in the app support interrupt mode) 00:08:18.114 -p, --main-core main (primary) core for DPDK 00:08:18.114 00:08:18.114 Configuration options: 00:08:18.114 -c, --config, --json JSON config file 00:08:18.114 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:18.114 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:18.114 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:18.114 --rpcs-allowed comma-separated list of permitted RPCS 00:08:18.114 --json-ignore-init-errors don't exit on invalid config entry 00:08:18.114 00:08:18.114 Memory options: 00:08:18.114 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:18.114 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:18.114 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:18.114 -R, --huge-unlink unlink huge files after initialization 00:08:18.114 -n, --mem-channels number of memory channels used for DPDK 00:08:18.114 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:18.114 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:18.114 --no-huge run without using hugepages 00:08:18.115 -i, --shm-id shared memory ID (optional) 00:08:18.115 -g, --single-file-segments force creating just one hugetlbfs file 00:08:18.115 00:08:18.115 PCI options: 00:08:18.115 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:18.115 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:18.115 -u, --no-pci disable PCI access 00:08:18.115 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:18.115 00:08:18.115 Log options: 00:08:18.115 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:18.115 --silence-noticelog disable notice level logging to stderr 00:08:18.115 00:08:18.115 Trace options: 00:08:18.115 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:18.115 setting 0 to disable trace (default 32768) 00:08:18.115 Tracepoints vary in size and can use more than one trace entry. 00:08:18.115 -e, --tpoint-group [:] 00:08:18.115 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:18.115 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:18.115 a tracepoint group. First tpoint inside a group can be enabled by 00:08:18.115 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:18.115 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:18.115 in /include/spdk_internal/trace_defs.h 00:08:18.115 00:08:18.115 Other options: 00:08:18.115 -h, --help show this usage 00:08:18.115 -v, --version print SPDK version 00:08:18.115 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:18.115 --env-context Opaque context for use of the env implementation 00:08:18.115 app_ut: invalid option -- 'z' 00:08:18.115 app_ut [options] 00:08:18.115 00:08:18.115 CPU options: 00:08:18.115 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:18.115 (like [0,1,10]) 00:08:18.115 --lcores lcore to CPU mapping list. The list is in the format: 00:08:18.115 [<,lcores[@CPUs]>...] 00:08:18.115 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:18.115 Within the group, '-' is used for range separator, 00:08:18.115 ',' is used for single number separator.app_ut: unrecognized option '--test-long-opt' 00:08:18.115 00:08:18.115 '( )' can be omitted for single element group, 00:08:18.115 '@' can be omitted if cpus and lcores have the same value 00:08:18.115 --disable-cpumask-locks Disable CPU core lock files. 00:08:18.115 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:18.115 pollers in the app support interrupt mode) 00:08:18.115 -p, --main-core main (primary) core for DPDK 00:08:18.115 00:08:18.115 Configuration options: 00:08:18.115 -c, --config, --json JSON config file 00:08:18.115 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:18.115 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:08:18.115 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:18.115 --rpcs-allowed comma-separated list of permitted RPCS 00:08:18.115 --json-ignore-init-errors don't exit on invalid config entry 00:08:18.115 00:08:18.115 Memory options: 00:08:18.115 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:18.115 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:18.115 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:18.115 -R, --huge-unlink unlink huge files after initialization 00:08:18.115 -n, --mem-channels number of memory channels used for DPDK 00:08:18.115 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:18.115 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:18.115 --no-huge run without using hugepages 00:08:18.115 -i, --shm-id shared memory ID (optional) 00:08:18.115 -g, --single-file-segments force creating just one hugetlbfs file 00:08:18.115 00:08:18.115 PCI options: 00:08:18.115 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:18.115 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:18.115 -u, --no-pci disable PCI access 00:08:18.115 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:18.115 00:08:18.115 Log options: 00:08:18.115 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:18.115 --silence-noticelog disable notice level logging to stderr 00:08:18.115 00:08:18.115 Trace options: 00:08:18.115 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:18.115 setting 0 to disable trace (default 32768) 00:08:18.115 Tracepoints vary in size and can use more than one trace entry. 
00:08:18.115 -e, --tpoint-group [:] 00:08:18.115 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:18.115 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:18.115 a tracepoint group. First tpoint inside a group can be enabled by 00:08:18.115 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:18.115 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:08:18.115 in /include/spdk_internal/trace_defs.h 00:08:18.115 00:08:18.115 Other options: 00:08:18.115 -h, --help show this usage 00:08:18.115 -v, --version print SPDK version 00:08:18.115 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:18.115 --env-context Opaque context for use of the env implementation 00:08:18.115 [2024-07-21 11:50:16.941671] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:08:18.115 [2024-07-21 11:50:16.942047] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:08:18.115 app_ut [options] 00:08:18.115 00:08:18.115 CPU options: 00:08:18.115 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:18.115 (like [0,1,10]) 00:08:18.115 --lcores lcore to CPU mapping list. The list is in the format: 00:08:18.115 [<,lcores[@CPUs]>...] 00:08:18.115 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:18.115 Within the group, '-' is used for range separator, 00:08:18.115 ',' is used for single number separator. 00:08:18.115 '( )' can be omitted for single element group, 00:08:18.115 '@' can be omitted if cpus and lcores have the same value 00:08:18.115 --disable-cpumask-locks Disable CPU core lock files. 00:08:18.115 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:18.115 pollers in the app support interrupt mode) 00:08:18.115 -p, --main-core main (primary) core for DPDK 00:08:18.115 00:08:18.115 Configuration options: 00:08:18.115 -c, --config, --json JSON config file 00:08:18.115 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:18.115 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:18.115 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:18.115 --rpcs-allowed comma-separated list of permitted RPCS 00:08:18.115 --json-ignore-init-errors don't exit on invalid config entry 00:08:18.115 00:08:18.115 Memory options: 00:08:18.115 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:18.115 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:18.115 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:18.115 -R, --huge-unlink unlink huge files after initialization 00:08:18.115 -n, --mem-channels number of memory channels used for DPDK 00:08:18.115 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:18.115 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:18.115 --no-huge run without using hugepages 00:08:18.115 -i, --shm-id shared memory ID (optional) 00:08:18.115 -g, --single-file-segments force creating just one hugetlbfs file 00:08:18.115 00:08:18.115 PCI options: 00:08:18.115 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:18.115 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:18.115 -u, --no-pci disable PCI access 00:08:18.115 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:18.115 00:08:18.115 Log options: 00:08:18.115 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:08:18.115 --silence-noticelog disable notice level logging to stderr 00:08:18.115 00:08:18.115 Trace options: 00:08:18.116 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:18.116 setting 0 to disable trace (default 32768) 00:08:18.116 Tracepoints vary in size and can use more than one trace entry. 00:08:18.116 -e, --tpoint-group [:] 00:08:18.116 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:08:18.116 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:18.116 a tracepoint group. First tpoint inside a group can be enabled by 00:08:18.116 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:18.116 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:18.116 in /include/spdk_internal/trace_defs.h 00:08:18.116 00:08:18.116 Other options: 00:08:18.116 -h, --help show this usage 00:08:18.116 -v, --version print SPDK version 00:08:18.116 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:18.116 --env-context Opaque context for use of the env implementation 00:08:18.116 passed 00:08:18.116 00:08:18.116 [2024-07-21 11:50:16.942346] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:08:18.116 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.116 suites 1 1 n/a 0 0 00:08:18.116 tests 1 1 1 0 0 00:08:18.116 asserts 8 8 8 0 n/a 00:08:18.116 00:08:18.116 Elapsed time = 0.001 seconds 00:08:18.116 11:50:16 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:08:18.373 00:08:18.373 00:08:18.373 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.373 http://cunit.sourceforge.net/ 00:08:18.373 00:08:18.373 00:08:18.373 Suite: app_suite 00:08:18.373 Test: test_create_reactor ...passed 00:08:18.373 Test: test_init_reactors ...passed 00:08:18.373 Test: test_event_call ...passed 00:08:18.373 Test: test_schedule_thread ...passed 00:08:18.373 Test: test_reschedule_thread ...passed 00:08:18.373 Test: test_bind_thread ...passed 00:08:18.373 Test: test_for_each_reactor ...passed 00:08:18.373 Test: test_reactor_stats ...passed 00:08:18.373 Test: test_scheduler ...passed 00:08:18.373 Test: test_governor ...passed 00:08:18.373 00:08:18.373 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.373 suites 1 1 n/a 0 0 00:08:18.373 tests 10 10 10 0 0 00:08:18.373 asserts 344 344 344 0 n/a 00:08:18.373 00:08:18.373 Elapsed time = 0.016 seconds 00:08:18.373 00:08:18.373 real 0m0.095s 00:08:18.373 user 0m0.053s 00:08:18.373 sys 0m0.043s 00:08:18.373 11:50:17 unittest.unittest_event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:18.373 ************************************ 00:08:18.373 END TEST unittest_event 00:08:18.373 ************************************ 00:08:18.373 11:50:17 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:08:18.373 11:50:17 unittest -- unit/unittest.sh@235 -- # uname -s 00:08:18.373 11:50:17 unittest -- unit/unittest.sh@235 -- # '[' Linux = Linux ']' 00:08:18.373 11:50:17 unittest -- unit/unittest.sh@236 -- # run_test unittest_ftl unittest_ftl 00:08:18.373 11:50:17 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:18.373 11:50:17 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:18.373 11:50:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:18.373 ************************************ 00:08:18.373 START TEST unittest_ftl 00:08:18.373 ************************************ 00:08:18.373 11:50:17 unittest.unittest_ftl -- common/autotest_common.sh@1121 -- # unittest_ftl 00:08:18.373 11:50:17 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:08:18.373 00:08:18.373 00:08:18.373 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.373 http://cunit.sourceforge.net/ 00:08:18.373 00:08:18.373 00:08:18.373 Suite: ftl_band_suite 00:08:18.373 Test: test_band_block_offset_from_addr_base ...passed 00:08:18.373 Test: test_band_block_offset_from_addr_offset ...passed 00:08:18.373 Test: test_band_addr_from_block_offset ...passed 00:08:18.373 Test: test_band_set_addr 
...passed 00:08:18.630 Test: test_invalidate_addr ...passed 00:08:18.630 Test: test_next_xfer_addr ...passed 00:08:18.630 00:08:18.630 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.630 suites 1 1 n/a 0 0 00:08:18.630 tests 6 6 6 0 0 00:08:18.630 asserts 30356 30356 30356 0 n/a 00:08:18.630 00:08:18.630 Elapsed time = 0.185 seconds 00:08:18.630 11:50:17 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:08:18.630 00:08:18.630 00:08:18.630 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.630 http://cunit.sourceforge.net/ 00:08:18.630 00:08:18.630 00:08:18.630 Suite: ftl_bitmap 00:08:18.631 Test: test_ftl_bitmap_create ...[2024-07-21 11:50:17.365066] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:08:18.631 [2024-07-21 11:50:17.365413] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:08:18.631 passed 00:08:18.631 Test: test_ftl_bitmap_get ...passed 00:08:18.631 Test: test_ftl_bitmap_set ...passed 00:08:18.631 Test: test_ftl_bitmap_clear ...passed 00:08:18.631 Test: test_ftl_bitmap_find_first_set ...passed 00:08:18.631 Test: test_ftl_bitmap_find_first_clear ...passed 00:08:18.631 Test: test_ftl_bitmap_count_set ...passed 00:08:18.631 00:08:18.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.631 suites 1 1 n/a 0 0 00:08:18.631 tests 7 7 7 0 0 00:08:18.631 asserts 137 137 137 0 n/a 00:08:18.631 00:08:18.631 Elapsed time = 0.001 seconds 00:08:18.631 11:50:17 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:08:18.631 00:08:18.631 00:08:18.631 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.631 http://cunit.sourceforge.net/ 00:08:18.631 00:08:18.631 00:08:18.631 Suite: ftl_io_suite 00:08:18.631 Test: test_completion ...passed 00:08:18.631 Test: test_multiple_ios ...passed 00:08:18.631 00:08:18.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.631 suites 1 1 n/a 0 0 00:08:18.631 tests 2 2 2 0 0 00:08:18.631 asserts 47 47 47 0 n/a 00:08:18.631 00:08:18.631 Elapsed time = 0.002 seconds 00:08:18.631 11:50:17 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:08:18.631 00:08:18.631 00:08:18.631 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.631 http://cunit.sourceforge.net/ 00:08:18.631 00:08:18.631 00:08:18.631 Suite: ftl_mngt 00:08:18.631 Test: test_next_step ...passed 00:08:18.631 Test: test_continue_step ...passed 00:08:18.631 Test: test_get_func_and_step_cntx_alloc ...passed 00:08:18.631 Test: test_fail_step ...passed 00:08:18.631 Test: test_mngt_call_and_call_rollback ...passed 00:08:18.631 Test: test_nested_process_failure ...passed 00:08:18.631 Test: test_call_init_success ...passed 00:08:18.631 Test: test_call_init_failure ...passed 00:08:18.631 00:08:18.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.631 suites 1 1 n/a 0 0 00:08:18.631 tests 8 8 8 0 0 00:08:18.631 asserts 196 196 196 0 n/a 00:08:18.631 00:08:18.631 Elapsed time = 0.002 seconds 00:08:18.631 11:50:17 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:08:18.631 00:08:18.631 00:08:18.631 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.631 
http://cunit.sourceforge.net/ 00:08:18.631 00:08:18.631 00:08:18.631 Suite: ftl_mempool 00:08:18.631 Test: test_ftl_mempool_create ...passed 00:08:18.631 Test: test_ftl_mempool_get_put ...passed 00:08:18.631 00:08:18.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.631 suites 1 1 n/a 0 0 00:08:18.631 tests 2 2 2 0 0 00:08:18.631 asserts 36 36 36 0 n/a 00:08:18.631 00:08:18.631 Elapsed time = 0.000 seconds 00:08:18.631 11:50:17 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:08:18.631 00:08:18.631 00:08:18.631 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.631 http://cunit.sourceforge.net/ 00:08:18.631 00:08:18.631 00:08:18.631 Suite: ftl_addr64_suite 00:08:18.631 Test: test_addr_cached ...passed 00:08:18.631 00:08:18.631 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.631 suites 1 1 n/a 0 0 00:08:18.631 tests 1 1 1 0 0 00:08:18.631 asserts 1536 1536 1536 0 n/a 00:08:18.631 00:08:18.631 Elapsed time = 0.000 seconds 00:08:18.888 11:50:17 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:08:18.888 00:08:18.888 00:08:18.888 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.888 http://cunit.sourceforge.net/ 00:08:18.888 00:08:18.888 00:08:18.888 Suite: ftl_sb 00:08:18.888 Test: test_sb_crc_v2 ...passed 00:08:18.888 Test: test_sb_crc_v3 ...passed 00:08:18.888 Test: test_sb_v3_md_layout ...[2024-07-21 11:50:17.522956] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:08:18.888 [2024-07-21 11:50:17.523358] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:18.888 [2024-07-21 11:50:17.523432] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:18.888 [2024-07-21 11:50:17.523495] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:08:18.888 [2024-07-21 11:50:17.523547] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:18.889 [2024-07-21 11:50:17.523670] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:08:18.889 [2024-07-21 11:50:17.523717] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:18.889 [2024-07-21 11:50:17.523784] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:08:18.889 [2024-07-21 11:50:17.523901] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:08:18.889 [2024-07-21 11:50:17.523969] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:18.889 passed 00:08:18.889 Test: test_sb_v5_md_layout ...[2024-07-21 11:50:17.524026] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 
105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:08:18.889 passed 00:08:18.889 00:08:18.889 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.889 suites 1 1 n/a 0 0 00:08:18.889 tests 4 4 4 0 0 00:08:18.889 asserts 160 160 160 0 n/a 00:08:18.889 00:08:18.889 Elapsed time = 0.003 seconds 00:08:18.889 11:50:17 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:08:18.889 00:08:18.889 00:08:18.889 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.889 http://cunit.sourceforge.net/ 00:08:18.889 00:08:18.889 00:08:18.889 Suite: ftl_layout_upgrade 00:08:18.889 Test: test_l2p_upgrade ...passed 00:08:18.889 00:08:18.889 Run Summary: Type Total Ran Passed Failed Inactive 00:08:18.889 suites 1 1 n/a 0 0 00:08:18.889 tests 1 1 1 0 0 00:08:18.889 asserts 152 152 152 0 n/a 00:08:18.889 00:08:18.889 Elapsed time = 0.001 seconds 00:08:18.889 11:50:17 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut 00:08:18.889 00:08:18.889 00:08:18.889 CUnit - A unit testing framework for C - Version 2.1-3 00:08:18.889 http://cunit.sourceforge.net/ 00:08:18.889 00:08:18.889 00:08:18.889 Suite: ftl_p2l_suite 00:08:18.889 Test: test_p2l_num_pages ...passed 00:08:19.453 Test: test_ckpt_issue ...passed 00:08:19.710 Test: test_persist_band_p2l ...passed 00:08:20.275 Test: test_clean_restore_p2l ...passed 00:08:21.647 Test: test_dirty_restore_p2l ...passed 00:08:21.647 00:08:21.647 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.647 suites 1 1 n/a 0 0 00:08:21.647 tests 5 5 5 0 0 00:08:21.647 asserts 10020 10020 10020 0 n/a 00:08:21.647 00:08:21.647 Elapsed time = 2.501 seconds 00:08:21.647 00:08:21.647 real 0m3.041s 00:08:21.647 user 0m1.030s 00:08:21.647 sys 0m2.007s 00:08:21.647 11:50:20 unittest.unittest_ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.647 11:50:20 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x 00:08:21.647 ************************************ 00:08:21.647 END TEST unittest_ftl 00:08:21.647 ************************************ 00:08:21.647 11:50:20 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:21.647 11:50:20 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:21.647 11:50:20 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.647 11:50:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:21.647 ************************************ 00:08:21.647 START TEST unittest_accel 00:08:21.647 ************************************ 00:08:21.647 11:50:20 unittest.unittest_accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:08:21.647 00:08:21.647 00:08:21.647 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.647 http://cunit.sourceforge.net/ 00:08:21.647 00:08:21.647 00:08:21.647 Suite: accel_sequence 00:08:21.647 Test: test_sequence_fill_copy ...passed 00:08:21.647 Test: test_sequence_abort ...passed 00:08:21.647 Test: test_sequence_append_error ...passed 00:08:21.647 Test: test_sequence_completion_error ...[2024-07-21 11:50:20.193124] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1931:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fe1e35287c0 00:08:21.647 [2024-07-21 11:50:20.193482] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1931:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fe1e35287c0 00:08:21.647 [2024-07-21 11:50:20.193584] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1841:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fe1e35287c0 00:08:21.647 [2024-07-21 11:50:20.193645] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1841:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fe1e35287c0 00:08:21.647 passed 00:08:21.647 Test: test_sequence_decompress ...passed 00:08:21.647 Test: test_sequence_reverse ...passed 00:08:21.647 Test: test_sequence_copy_elision ...passed 00:08:21.647 Test: test_sequence_accel_buffers ...passed 00:08:21.647 Test: test_sequence_memory_domain ...[2024-07-21 11:50:20.203428] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1733:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:08:21.647 passed 00:08:21.647 Test: test_sequence_module_memory_domain ...[2024-07-21 11:50:20.203620] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1772:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:08:21.647 passed 00:08:21.647 Test: test_sequence_crypto ...passed 00:08:21.647 Test: test_sequence_driver ...[2024-07-21 11:50:20.209407] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1880:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fe1e26b87c0 using driver: ut 00:08:21.647 [2024-07-21 11:50:20.209523] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1944:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fe1e26b87c0 through driver: ut 00:08:21.647 passed 00:08:21.647 Test: test_sequence_same_iovs ...passed 00:08:21.647 Test: test_sequence_crc32 ...passed 00:08:21.647 Suite: accel 00:08:21.647 Test: test_spdk_accel_task_complete ...passed 00:08:21.647 Test: test_get_task ...passed 00:08:21.647 Test: test_spdk_accel_submit_copy ...passed 00:08:21.647 Test: test_spdk_accel_submit_dualcast ...passed 00:08:21.647 Test: test_spdk_accel_submit_compare ...passed 00:08:21.647 Test: test_spdk_accel_submit_fill ...[2024-07-21 11:50:20.213807] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:21.647 [2024-07-21 11:50:20.213876] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 416:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:08:21.647 passed 00:08:21.647 Test: test_spdk_accel_submit_crc32c ...passed 00:08:21.647 Test: test_spdk_accel_submit_crc32cv ...passed 00:08:21.647 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:08:21.647 Test: test_spdk_accel_submit_xor ...passed 00:08:21.647 Test: test_spdk_accel_module_find_by_name ...passed 00:08:21.647 Test: test_spdk_accel_module_register ...passed 00:08:21.647 00:08:21.647 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.647 suites 2 2 n/a 0 0 00:08:21.647 tests 26 26 26 0 0 00:08:21.647 asserts 830 830 830 0 n/a 00:08:21.647 00:08:21.647 Elapsed time = 0.030 seconds 00:08:21.647 00:08:21.647 real 0m0.070s 00:08:21.647 user 0m0.047s 00:08:21.647 sys 0m0.023s 00:08:21.647 11:50:20 unittest.unittest_accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.648 11:50:20 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:08:21.648 ************************************ 00:08:21.648 END TEST unittest_accel 00:08:21.648 
************************************ 00:08:21.648 11:50:20 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:21.648 11:50:20 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:21.648 11:50:20 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.648 11:50:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:21.648 ************************************ 00:08:21.648 START TEST unittest_ioat 00:08:21.648 ************************************ 00:08:21.648 11:50:20 unittest.unittest_ioat -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:08:21.648 00:08:21.648 00:08:21.648 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.648 http://cunit.sourceforge.net/ 00:08:21.648 00:08:21.648 00:08:21.648 Suite: ioat 00:08:21.648 Test: ioat_state_check ...passed 00:08:21.648 00:08:21.648 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.648 suites 1 1 n/a 0 0 00:08:21.648 tests 1 1 1 0 0 00:08:21.648 asserts 32 32 32 0 n/a 00:08:21.648 00:08:21.648 Elapsed time = 0.000 seconds 00:08:21.648 00:08:21.648 real 0m0.030s 00:08:21.648 user 0m0.020s 00:08:21.648 sys 0m0.011s 00:08:21.648 11:50:20 unittest.unittest_ioat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.648 11:50:20 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:08:21.648 ************************************ 00:08:21.648 END TEST unittest_ioat 00:08:21.648 ************************************ 00:08:21.648 11:50:20 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:21.648 11:50:20 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:21.648 11:50:20 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:21.648 11:50:20 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.648 11:50:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:21.648 ************************************ 00:08:21.648 START TEST unittest_idxd_user 00:08:21.648 ************************************ 00:08:21.648 11:50:20 unittest.unittest_idxd_user -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:08:21.648 00:08:21.648 00:08:21.648 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.648 http://cunit.sourceforge.net/ 00:08:21.648 00:08:21.648 00:08:21.648 Suite: idxd_user 00:08:21.648 Test: test_idxd_wait_cmd ...[2024-07-21 11:50:20.391411] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:21.648 [2024-07-21 11:50:20.392223] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:08:21.648 passed 00:08:21.648 Test: test_idxd_reset_dev ...[2024-07-21 11:50:20.392537] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:08:21.648 [2024-07-21 11:50:20.392720] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:08:21.648 passed 00:08:21.648 Test: test_idxd_group_config ...passed 00:08:21.648 Test: test_idxd_wq_config ...passed 00:08:21.648 00:08:21.648 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.648 
suites 1 1 n/a 0 0 00:08:21.648 tests 4 4 4 0 0 00:08:21.648 asserts 20 20 20 0 n/a 00:08:21.648 00:08:21.648 Elapsed time = 0.001 seconds 00:08:21.648 00:08:21.648 real 0m0.035s 00:08:21.648 user 0m0.016s 00:08:21.648 sys 0m0.019s 00:08:21.648 11:50:20 unittest.unittest_idxd_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.648 11:50:20 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:08:21.648 ************************************ 00:08:21.648 END TEST unittest_idxd_user 00:08:21.648 ************************************ 00:08:21.648 11:50:20 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:08:21.648 11:50:20 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:21.648 11:50:20 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.648 11:50:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:21.648 ************************************ 00:08:21.648 START TEST unittest_iscsi 00:08:21.648 ************************************ 00:08:21.648 11:50:20 unittest.unittest_iscsi -- common/autotest_common.sh@1121 -- # unittest_iscsi 00:08:21.648 11:50:20 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:08:21.648 00:08:21.648 00:08:21.648 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.648 http://cunit.sourceforge.net/ 00:08:21.648 00:08:21.648 00:08:21.648 Suite: conn_suite 00:08:21.648 Test: read_task_split_in_order_case ...passed 00:08:21.648 Test: read_task_split_reverse_order_case ...passed 00:08:21.648 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:08:21.648 Test: process_non_read_task_completion_test ...passed 00:08:21.648 Test: free_tasks_on_connection ...passed 00:08:21.648 Test: free_tasks_with_queued_datain ...passed 00:08:21.648 Test: abort_queued_datain_task_test ...passed 00:08:21.648 Test: abort_queued_datain_tasks_test ...passed 00:08:21.648 00:08:21.648 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.648 suites 1 1 n/a 0 0 00:08:21.648 tests 8 8 8 0 0 00:08:21.648 asserts 230 230 230 0 n/a 00:08:21.648 00:08:21.648 Elapsed time = 0.000 seconds 00:08:21.648 11:50:20 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:08:21.906 00:08:21.906 00:08:21.906 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.906 http://cunit.sourceforge.net/ 00:08:21.906 00:08:21.906 00:08:21.906 Suite: iscsi_suite 00:08:21.906 Test: param_negotiation_test ...passed 00:08:21.906 Test: list_negotiation_test ...passed 00:08:21.906 Test: parse_valid_test ...passed 00:08:21.906 Test: parse_invalid_test ...[2024-07-21 11:50:20.520092] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:21.906 [2024-07-21 11:50:20.520520] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:08:21.906 [2024-07-21 11:50:20.520585] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:08:21.906 [2024-07-21 11:50:20.520670] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:08:21.906 [2024-07-21 11:50:20.520849] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:08:21.906 [2024-07-21 11:50:20.520926] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is 
bigger than 63 00:08:21.906 passed 00:08:21.906 00:08:21.906 [2024-07-21 11:50:20.521054] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:08:21.906 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.906 suites 1 1 n/a 0 0 00:08:21.906 tests 4 4 4 0 0 00:08:21.906 asserts 161 161 161 0 n/a 00:08:21.906 00:08:21.906 Elapsed time = 0.005 seconds 00:08:21.906 11:50:20 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:08:21.906 00:08:21.906 00:08:21.906 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.906 http://cunit.sourceforge.net/ 00:08:21.906 00:08:21.906 00:08:21.906 Suite: iscsi_target_node_suite 00:08:21.906 Test: add_lun_test_cases ...[2024-07-21 11:50:20.553214] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:08:21.906 [2024-07-21 11:50:20.553522] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:08:21.906 passed 00:08:21.906 Test: allow_any_allowed ...passed 00:08:21.906 Test: allow_ipv6_allowed ...[2024-07-21 11:50:20.553607] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:21.906 [2024-07-21 11:50:20.553646] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:08:21.906 [2024-07-21 11:50:20.553677] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:08:21.906 passed 00:08:21.906 Test: allow_ipv6_denied ...passed 00:08:21.906 Test: allow_ipv6_invalid ...passed 00:08:21.906 Test: allow_ipv4_allowed ...passed 00:08:21.906 Test: allow_ipv4_denied ...passed 00:08:21.906 Test: allow_ipv4_invalid ...passed 00:08:21.906 Test: node_access_allowed ...passed 00:08:21.906 Test: node_access_denied_by_empty_netmask ...passed 00:08:21.906 Test: node_access_multi_initiator_groups_cases ...passed 00:08:21.906 Test: allow_iscsi_name_multi_maps_case ...passed 00:08:21.906 Test: chap_param_test_cases ...[2024-07-21 11:50:20.554011] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:08:21.906 [2024-07-21 11:50:20.554050] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:08:21.906 [2024-07-21 11:50:20.554112] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:08:21.906 [2024-07-21 11:50:20.554152] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:08:21.906 passed 00:08:21.906 00:08:21.906 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.906 suites 1 1 n/a 0 0 00:08:21.906 tests 13 13 13 0 0 00:08:21.906 asserts 50 50 50 0 n/a 00:08:21.906 00:08:21.906 Elapsed time = 0.001 seconds 00:08:21.906 [2024-07-21 11:50:20.554188] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:08:21.906 11:50:20 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:08:21.906 00:08:21.906 00:08:21.906 CUnit - A unit testing 
framework for C - Version 2.1-3 00:08:21.906 http://cunit.sourceforge.net/ 00:08:21.906 00:08:21.906 00:08:21.906 Suite: iscsi_suite 00:08:21.906 Test: op_login_check_target_test ...[2024-07-21 11:50:20.593421] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:08:21.906 passed 00:08:21.906 Test: op_login_session_normal_test ...[2024-07-21 11:50:20.593779] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:21.906 [2024-07-21 11:50:20.593842] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:21.906 [2024-07-21 11:50:20.593895] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:08:21.906 [2024-07-21 11:50:20.593966] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:08:21.906 [2024-07-21 11:50:20.594082] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:21.906 [2024-07-21 11:50:20.594211] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:08:21.906 passed 00:08:21.906 Test: maxburstlength_test ...[2024-07-21 11:50:20.594280] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:08:21.906 [2024-07-21 11:50:20.594586] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:21.906 [2024-07-21 11:50:20.594660] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4554:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:08:21.906 passed 00:08:21.906 Test: underflow_for_read_transfer_test ...passed 00:08:21.907 Test: underflow_for_zero_read_transfer_test ...passed 00:08:21.907 Test: underflow_for_request_sense_test ...passed 00:08:21.907 Test: underflow_for_check_condition_test ...passed 00:08:21.907 Test: add_transfer_task_test ...passed 00:08:21.907 Test: get_transfer_task_test ...passed 00:08:21.907 Test: del_transfer_task_test ...passed 00:08:21.907 Test: clear_all_transfer_tasks_test ...passed 00:08:21.907 Test: build_iovs_test ...passed 00:08:21.907 Test: build_iovs_with_md_test ...passed 00:08:21.907 Test: pdu_hdr_op_login_test ...[2024-07-21 11:50:20.596081] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:08:21.907 [2024-07-21 11:50:20.596218] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:08:21.907 [2024-07-21 11:50:20.596301] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:08:21.907 passed 00:08:21.907 Test: pdu_hdr_op_text_test ...[2024-07-21 11:50:20.596417] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2246:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:21.907 [2024-07-21 11:50:20.596521] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2278:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:08:21.907 [2024-07-21 11:50:20.596578] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2291:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:08:21.907 passed 00:08:21.907 Test: pdu_hdr_op_logout_test ...passed 00:08:21.907 Test: pdu_hdr_op_scsi_test ...[2024-07-21 11:50:20.596660] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2521:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:08:21.907 [2024-07-21 11:50:20.596831] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:21.907 [2024-07-21 11:50:20.596885] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3342:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:08:21.907 [2024-07-21 11:50:20.596941] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3370:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:08:21.907 [2024-07-21 11:50:20.597034] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3403:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:08:21.907 [2024-07-21 11:50:20.597130] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3410:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:08:21.907 [2024-07-21 11:50:20.597316] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3434:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:08:21.907 passed 00:08:21.907 Test: pdu_hdr_op_task_mgmt_test ...[2024-07-21 11:50:20.597430] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3611:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:08:21.907 [2024-07-21 11:50:20.597563] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3700:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:08:21.907 passed 00:08:21.907 Test: pdu_hdr_op_nopout_test ...[2024-07-21 11:50:20.597787] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3719:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:08:21.907 [2024-07-21 11:50:20.597920] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:21.907 passed 00:08:21.907 Test: pdu_hdr_op_data_test ...[2024-07-21 11:50:20.597965] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3741:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:08:21.907 [2024-07-21 11:50:20.598011] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3749:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:08:21.907 [2024-07-21 11:50:20.598052] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4192:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:08:21.907 [2024-07-21 11:50:20.598124] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4209:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:08:21.907 [2024-07-21 11:50:20.598198] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4217:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:08:21.907 [2024-07-21 11:50:20.598258] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:08:21.907 [2024-07-21 11:50:20.598337] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4228:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:08:21.907 
[2024-07-21 11:50:20.598429] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4239:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:08:21.907 [2024-07-21 11:50:20.598498] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4249:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:08:21.907 passed 00:08:21.907 Test: empty_text_with_cbit_test ...passed 00:08:21.907 Test: pdu_payload_read_test ...[2024-07-21 11:50:20.600326] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4637:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:08:21.907 passed 00:08:21.907 Test: data_out_pdu_sequence_test ...passed 00:08:21.907 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:08:21.907 00:08:21.907 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.907 suites 1 1 n/a 0 0 00:08:21.907 tests 24 24 24 0 0 00:08:21.907 asserts 150253 150253 150253 0 n/a 00:08:21.907 00:08:21.907 Elapsed time = 0.015 seconds 00:08:21.907 11:50:20 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:08:21.907 00:08:21.907 00:08:21.907 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.907 http://cunit.sourceforge.net/ 00:08:21.907 00:08:21.907 00:08:21.907 Suite: init_grp_suite 00:08:21.907 Test: create_initiator_group_success_case ...passed 00:08:21.907 Test: find_initiator_group_success_case ...passed 00:08:21.907 Test: register_initiator_group_twice_case ...passed 00:08:21.907 Test: add_initiator_name_success_case ...passed 00:08:21.907 Test: add_initiator_name_fail_case ...[2024-07-21 11:50:20.641446] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:08:21.907 passed 00:08:21.907 Test: delete_all_initiator_names_success_case ...passed 00:08:21.907 Test: add_netmask_success_case ...passed 00:08:21.907 Test: add_netmask_fail_case ...[2024-07-21 11:50:20.641921] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:08:21.907 passed 00:08:21.907 Test: delete_all_netmasks_success_case ...passed 00:08:21.907 Test: initiator_name_overwrite_all_to_any_case ...passed 00:08:21.907 Test: netmask_overwrite_all_to_any_case ...passed 00:08:21.907 Test: add_delete_initiator_names_case ...passed 00:08:21.907 Test: add_duplicated_initiator_names_case ...passed 00:08:21.907 Test: delete_nonexisting_initiator_names_case ...passed 00:08:21.907 Test: add_delete_netmasks_case ...passed 00:08:21.907 Test: add_duplicated_netmasks_case ...passed 00:08:21.907 Test: delete_nonexisting_netmasks_case ...passed 00:08:21.907 00:08:21.907 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.907 suites 1 1 n/a 0 0 00:08:21.907 tests 17 17 17 0 0 00:08:21.907 asserts 108 108 108 0 n/a 00:08:21.907 00:08:21.907 Elapsed time = 0.001 seconds 00:08:21.907 11:50:20 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:08:21.907 00:08:21.907 00:08:21.907 CUnit - A unit testing framework for C - Version 2.1-3 00:08:21.907 http://cunit.sourceforge.net/ 00:08:21.907 00:08:21.907 00:08:21.907 Suite: portal_grp_suite 00:08:21.907 Test: portal_create_ipv4_normal_case ...passed 00:08:21.907 Test: portal_create_ipv6_normal_case ...passed 00:08:21.907 Test: portal_create_ipv4_wildcard_case ...passed 00:08:21.907 Test: portal_create_ipv6_wildcard_case ...passed 00:08:21.907 Test: 
portal_create_twice_case ...[2024-07-21 11:50:20.675426] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:08:21.907 passed 00:08:21.907 Test: portal_grp_register_unregister_case ...passed 00:08:21.907 Test: portal_grp_register_twice_case ...passed 00:08:21.907 Test: portal_grp_add_delete_case ...passed 00:08:21.907 Test: portal_grp_add_delete_twice_case ...passed 00:08:21.907 00:08:21.907 Run Summary: Type Total Ran Passed Failed Inactive 00:08:21.907 suites 1 1 n/a 0 0 00:08:21.907 tests 9 9 9 0 0 00:08:21.907 asserts 44 44 44 0 n/a 00:08:21.907 00:08:21.907 Elapsed time = 0.004 seconds 00:08:21.907 00:08:21.907 real 0m0.236s 00:08:21.907 user 0m0.138s 00:08:21.907 sys 0m0.102s 00:08:21.908 11:50:20 unittest.unittest_iscsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:21.908 11:50:20 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:08:21.908 ************************************ 00:08:21.908 END TEST unittest_iscsi 00:08:21.908 ************************************ 00:08:21.908 11:50:20 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:08:21.908 11:50:20 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:21.908 11:50:20 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:21.908 11:50:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:21.908 ************************************ 00:08:21.908 START TEST unittest_json 00:08:21.908 ************************************ 00:08:21.908 11:50:20 unittest.unittest_json -- common/autotest_common.sh@1121 -- # unittest_json 00:08:21.908 11:50:20 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:08:22.164 00:08:22.164 00:08:22.164 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.164 http://cunit.sourceforge.net/ 00:08:22.164 00:08:22.164 00:08:22.164 Suite: json 00:08:22.164 Test: test_parse_literal ...passed 00:08:22.164 Test: test_parse_string_simple ...passed 00:08:22.164 Test: test_parse_string_control_chars ...passed 00:08:22.164 Test: test_parse_string_utf8 ...passed 00:08:22.164 Test: test_parse_string_escapes_twochar ...passed 00:08:22.164 Test: test_parse_string_escapes_unicode ...passed 00:08:22.164 Test: test_parse_number ...passed 00:08:22.164 Test: test_parse_array ...passed 00:08:22.164 Test: test_parse_object ...passed 00:08:22.164 Test: test_parse_nesting ...passed 00:08:22.164 Test: test_parse_comment ...passed 00:08:22.164 00:08:22.164 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.164 suites 1 1 n/a 0 0 00:08:22.164 tests 11 11 11 0 0 00:08:22.164 asserts 1516 1516 1516 0 n/a 00:08:22.164 00:08:22.164 Elapsed time = 0.001 seconds 00:08:22.164 11:50:20 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:08:22.164 00:08:22.164 00:08:22.164 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.164 http://cunit.sourceforge.net/ 00:08:22.164 00:08:22.164 00:08:22.164 Suite: json 00:08:22.164 Test: test_strequal ...passed 00:08:22.164 Test: test_num_to_uint16 ...passed 00:08:22.164 Test: test_num_to_int32 ...passed 00:08:22.164 Test: test_num_to_uint64 ...passed 00:08:22.164 Test: test_decode_object ...passed 00:08:22.164 Test: test_decode_array ...passed 00:08:22.164 Test: test_decode_bool ...passed 00:08:22.164 Test: test_decode_uint16 ...passed 00:08:22.164 
Test: test_decode_int32 ...passed 00:08:22.164 Test: test_decode_uint32 ...passed 00:08:22.164 Test: test_decode_uint64 ...passed 00:08:22.164 Test: test_decode_string ...passed 00:08:22.164 Test: test_decode_uuid ...passed 00:08:22.164 Test: test_find ...passed 00:08:22.164 Test: test_find_array ...passed 00:08:22.164 Test: test_iterating ...passed 00:08:22.164 Test: test_free_object ...passed 00:08:22.164 00:08:22.164 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.164 suites 1 1 n/a 0 0 00:08:22.164 tests 17 17 17 0 0 00:08:22.164 asserts 236 236 236 0 n/a 00:08:22.164 00:08:22.164 Elapsed time = 0.001 seconds 00:08:22.164 11:50:20 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:08:22.164 00:08:22.164 00:08:22.164 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.164 http://cunit.sourceforge.net/ 00:08:22.164 00:08:22.164 00:08:22.164 Suite: json 00:08:22.164 Test: test_write_literal ...passed 00:08:22.164 Test: test_write_string_simple ...passed 00:08:22.164 Test: test_write_string_escapes ...passed 00:08:22.165 Test: test_write_string_utf16le ...passed 00:08:22.165 Test: test_write_number_int32 ...passed 00:08:22.165 Test: test_write_number_uint32 ...passed 00:08:22.165 Test: test_write_number_uint128 ...passed 00:08:22.165 Test: test_write_string_number_uint128 ...passed 00:08:22.165 Test: test_write_number_int64 ...passed 00:08:22.165 Test: test_write_number_uint64 ...passed 00:08:22.165 Test: test_write_number_double ...passed 00:08:22.165 Test: test_write_uuid ...passed 00:08:22.165 Test: test_write_array ...passed 00:08:22.165 Test: test_write_object ...passed 00:08:22.165 Test: test_write_nesting ...passed 00:08:22.165 Test: test_write_val ...passed 00:08:22.165 00:08:22.165 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.165 suites 1 1 n/a 0 0 00:08:22.165 tests 16 16 16 0 0 00:08:22.165 asserts 918 918 918 0 n/a 00:08:22.165 00:08:22.165 Elapsed time = 0.005 seconds 00:08:22.165 11:50:20 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:08:22.165 00:08:22.165 00:08:22.165 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.165 http://cunit.sourceforge.net/ 00:08:22.165 00:08:22.165 00:08:22.165 Suite: jsonrpc 00:08:22.165 Test: test_parse_request ...passed 00:08:22.165 Test: test_parse_request_streaming ...passed 00:08:22.165 00:08:22.165 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.165 suites 1 1 n/a 0 0 00:08:22.165 tests 2 2 2 0 0 00:08:22.165 asserts 289 289 289 0 n/a 00:08:22.165 00:08:22.165 Elapsed time = 0.004 seconds 00:08:22.165 00:08:22.165 real 0m0.129s 00:08:22.165 user 0m0.059s 00:08:22.165 sys 0m0.072s 00:08:22.165 11:50:20 unittest.unittest_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.165 11:50:20 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:08:22.165 ************************************ 00:08:22.165 END TEST unittest_json 00:08:22.165 ************************************ 00:08:22.165 11:50:20 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:08:22.165 11:50:20 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:22.165 11:50:20 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.165 11:50:20 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:22.165 ************************************ 00:08:22.165 START TEST 
unittest_rpc 00:08:22.165 ************************************ 00:08:22.165 11:50:20 unittest.unittest_rpc -- common/autotest_common.sh@1121 -- # unittest_rpc 00:08:22.165 11:50:20 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:08:22.165 00:08:22.165 00:08:22.165 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.165 http://cunit.sourceforge.net/ 00:08:22.165 00:08:22.165 00:08:22.165 Suite: rpc 00:08:22.165 Test: test_jsonrpc_handler ...passed 00:08:22.165 Test: test_spdk_rpc_is_method_allowed ...passed 00:08:22.165 Test: test_rpc_get_methods ...[2024-07-21 11:50:20.955091] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:08:22.165 passed 00:08:22.165 Test: test_rpc_spdk_get_version ...passed 00:08:22.165 Test: test_spdk_rpc_listen_close ...passed 00:08:22.165 Test: test_rpc_run_multiple_servers ...passed 00:08:22.165 00:08:22.165 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.165 suites 1 1 n/a 0 0 00:08:22.165 tests 6 6 6 0 0 00:08:22.165 asserts 23 23 23 0 n/a 00:08:22.165 00:08:22.165 Elapsed time = 0.001 seconds 00:08:22.165 00:08:22.165 real 0m0.033s 00:08:22.165 user 0m0.016s 00:08:22.165 sys 0m0.017s 00:08:22.165 11:50:20 unittest.unittest_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.165 11:50:20 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.165 ************************************ 00:08:22.165 END TEST unittest_rpc 00:08:22.165 ************************************ 00:08:22.165 11:50:21 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:22.165 11:50:21 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:22.165 11:50:21 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.165 11:50:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:22.165 ************************************ 00:08:22.165 START TEST unittest_notify 00:08:22.165 ************************************ 00:08:22.165 11:50:21 unittest.unittest_notify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:08:22.422 00:08:22.422 00:08:22.422 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.422 http://cunit.sourceforge.net/ 00:08:22.422 00:08:22.422 00:08:22.422 Suite: app_suite 00:08:22.422 Test: notify ...passed 00:08:22.422 00:08:22.422 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.422 suites 1 1 n/a 0 0 00:08:22.422 tests 1 1 1 0 0 00:08:22.422 asserts 13 13 13 0 n/a 00:08:22.422 00:08:22.422 Elapsed time = 0.000 seconds 00:08:22.422 00:08:22.422 real 0m0.034s 00:08:22.422 user 0m0.012s 00:08:22.422 sys 0m0.022s 00:08:22.422 11:50:21 unittest.unittest_notify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.422 11:50:21 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:08:22.422 ************************************ 00:08:22.422 END TEST unittest_notify 00:08:22.422 ************************************ 00:08:22.422 11:50:21 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:08:22.422 11:50:21 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:22.422 11:50:21 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.422 11:50:21 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:22.422 
************************************ 00:08:22.422 START TEST unittest_nvme 00:08:22.422 ************************************ 00:08:22.422 11:50:21 unittest.unittest_nvme -- common/autotest_common.sh@1121 -- # unittest_nvme 00:08:22.422 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:08:22.422 00:08:22.422 00:08:22.422 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.422 http://cunit.sourceforge.net/ 00:08:22.422 00:08:22.422 00:08:22.422 Suite: nvme 00:08:22.422 Test: test_opc_data_transfer ...passed 00:08:22.422 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:08:22.422 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:08:22.422 Test: test_trid_parse_and_compare ...[2024-07-21 11:50:21.128250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1176:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:08:22.422 [2024-07-21 11:50:21.128668] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:22.422 [2024-07-21 11:50:21.128820] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1188:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:08:22.422 [2024-07-21 11:50:21.128883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:22.422 [2024-07-21 11:50:21.128937] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without value 00:08:22.422 [2024-07-21 11:50:21.129061] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1233:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:08:22.422 passed 00:08:22.422 Test: test_trid_trtype_str ...passed 00:08:22.422 Test: test_trid_adrfam_str ...passed 00:08:22.422 Test: test_nvme_ctrlr_probe ...[2024-07-21 11:50:21.129462] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:22.422 passed 00:08:22.422 Test: test_spdk_nvme_probe ...[2024-07-21 11:50:21.129591] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:22.422 [2024-07-21 11:50:21.129648] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:22.422 [2024-07-21 11:50:21.129800] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:08:22.422 [2024-07-21 11:50:21.129873] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:08:22.422 passed 00:08:22.423 Test: test_spdk_nvme_connect ...[2024-07-21 11:50:21.130003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 994:spdk_nvme_connect: *ERROR*: No transport ID specified 00:08:22.423 [2024-07-21 11:50:21.130486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:22.423 passed 00:08:22.423 Test: test_nvme_ctrlr_probe_internal ...[2024-07-21 11:50:21.130608] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1005:spdk_nvme_connect: *ERROR*: Create probe context failed 00:08:22.423 [2024-07-21 11:50:21.130786] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:08:22.423 [2024-07-21 11:50:21.130848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 
00:08:22.423 passed 00:08:22.423 Test: test_nvme_init_controllers ...[2024-07-21 11:50:21.130974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:08:22.423 passed 00:08:22.423 Test: test_nvme_driver_init ...[2024-07-21 11:50:21.131132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:08:22.423 [2024-07-21 11:50:21.131206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:08:22.423 [2024-07-21 11:50:21.239833] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:08:22.423 passed 00:08:22.423 Test: test_spdk_nvme_detach ...passed 00:08:22.423 Test: test_nvme_completion_poll_cb ...passed 00:08:22.423 Test: test_nvme_user_copy_cmd_complete ...passed 00:08:22.423 Test: test_nvme_allocate_request_null ...passed 00:08:22.423 Test: test_nvme_allocate_request ...passed 00:08:22.423 Test: test_nvme_free_request ...passed[2024-07-21 11:50:21.240054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:08:22.423 00:08:22.423 Test: test_nvme_allocate_request_user_copy ...passed 00:08:22.423 Test: test_nvme_robust_mutex_init_shared ...passed 00:08:22.423 Test: test_nvme_request_check_timeout ...passed 00:08:22.423 Test: test_nvme_wait_for_completion ...passed 00:08:22.423 Test: test_spdk_nvme_parse_func ...passed 00:08:22.423 Test: test_spdk_nvme_detach_async ...passed 00:08:22.423 Test: test_nvme_parse_addr ...[2024-07-21 11:50:21.240753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1586:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:08:22.423 passed 00:08:22.423 00:08:22.423 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.423 suites 1 1 n/a 0 0 00:08:22.423 tests 25 25 25 0 0 00:08:22.423 asserts 326 326 326 0 n/a 00:08:22.423 00:08:22.423 Elapsed time = 0.006 seconds 00:08:22.423 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:08:22.423 00:08:22.423 00:08:22.423 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.423 http://cunit.sourceforge.net/ 00:08:22.423 00:08:22.423 00:08:22.423 Suite: nvme_ctrlr 00:08:22.423 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-21 11:50:21.278606] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.423 passed 00:08:22.423 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-21 11:50:21.280751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.423 passed 00:08:22.423 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-21 11:50:21.282084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.423 passed 00:08:22.423 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-21 11:50:21.283387] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.423 passed 00:08:22.423 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-21 11:50:21.284717] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.681 [2024-07-21 11:50:21.285935] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-21 11:50:21.287197] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-21 11:50:21.288413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:22.681 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-21 11:50:21.290857] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.681 [2024-07-21 11:50:21.293167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-21 11:50:21.294383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:22.681 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-21 11:50:21.297213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.681 [2024-07-21 11:50:21.298561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-21 11:50:21.301084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3948:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:08:22.681 Test: test_nvme_ctrlr_init_delay ...[2024-07-21 11:50:21.303948] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.681 passed 00:08:22.681 Test: test_alloc_io_qpair_rr_1 ...[2024-07-21 11:50:21.305682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.681 [2024-07-21 11:50:21.306072] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:22.681 [2024-07-21 11:50:21.306428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:22.681 [2024-07-21 11:50:21.306696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:22.681 [2024-07-21 11:50:21.306890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 399:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:08:22.681 passed 00:08:22.681 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:08:22.681 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:08:22.681 Test: test_alloc_io_qpair_wrr_1 ...[2024-07-21 11:50:21.307780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.681 passed 00:08:22.681 Test: 
test_alloc_io_qpair_wrr_2 ...[2024-07-21 11:50:21.308364] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.681 [2024-07-21 11:50:21.308632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5342:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:08:22.681 passed 00:08:22.681 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-21 11:50:21.309273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4870:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:08:22.681 [2024-07-21 11:50:21.309585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4907:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:22.681 [2024-07-21 11:50:21.309852] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4947:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:08:22.681 [2024-07-21 11:50:21.310084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4907:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:08:22.681 passed 00:08:22.681 Test: test_nvme_ctrlr_fail ...[2024-07-21 11:50:21.310564] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:08:22.681 passed 00:08:22.681 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:08:22.681 Test: test_nvme_ctrlr_set_supported_features ...passed 00:08:22.681 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:08:22.681 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-21 11:50:21.311852] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:08:22.940 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:08:22.940 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:08:22.940 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-21 11:50:21.588259] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-21 11:50:21.596262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-21 11:50:21.597866] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 [2024-07-21 11:50:21.597983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2884:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:08:22.940 passed 00:08:22.940 Test: test_alloc_io_qpair_fail ...[2024-07-21 11:50:21.599528] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 [2024-07-21 11:50:21.599732] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 511:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 
00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_add_remove_process ...passed 00:08:22.940 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:08:22.940 Test: test_nvme_ctrlr_set_state ...[2024-07-21 11:50:21.600229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1479:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-21 11:50:21.600557] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-21 11:50:21.619635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-21 11:50:21.656225] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_reset ...[2024-07-21 11:50:21.658195] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_aer_callback ...[2024-07-21 11:50:21.658858] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-21 11:50:21.660567] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:08:22.940 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:08:22.940 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-21 11:50:21.662868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:08:22.940 Test: test_nvme_ctrlr_ana_resize ...[2024-07-21 11:50:21.664600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:08:22.940 Test: test_nvme_transport_ctrlr_ready ...[2024-07-21 11:50:21.666481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4030:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:08:22.940 [2024-07-21 11:50:21.666636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4081:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:08:22.940 passed 00:08:22.940 Test: test_nvme_ctrlr_disable ...[2024-07-21 11:50:21.666812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4149:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:08:22.940 passed 00:08:22.940 
00:08:22.940 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.940 suites 1 1 n/a 0 0 00:08:22.940 tests 43 43 43 0 0 00:08:22.940 asserts 10418 10418 10418 0 n/a 00:08:22.940 00:08:22.940 Elapsed time = 0.337 seconds 00:08:22.940 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:08:22.940 00:08:22.940 00:08:22.940 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.940 http://cunit.sourceforge.net/ 00:08:22.940 00:08:22.940 00:08:22.940 Suite: nvme_ctrlr_cmd 00:08:22.940 Test: test_get_log_pages ...passed 00:08:22.940 Test: test_set_feature_cmd ...passed 00:08:22.940 Test: test_set_feature_ns_cmd ...passed 00:08:22.940 Test: test_get_feature_cmd ...passed 00:08:22.940 Test: test_get_feature_ns_cmd ...passed 00:08:22.940 Test: test_abort_cmd ...passed 00:08:22.940 Test: test_set_host_id_cmds ...[2024-07-21 11:50:21.717807] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:08:22.940 passed 00:08:22.940 Test: test_io_cmd_raw_no_payload_build ...passed 00:08:22.940 Test: test_io_raw_cmd ...passed 00:08:22.940 Test: test_io_raw_cmd_with_md ...passed 00:08:22.940 Test: test_namespace_attach ...passed 00:08:22.940 Test: test_namespace_detach ...passed 00:08:22.940 Test: test_namespace_create ...passed 00:08:22.940 Test: test_namespace_delete ...passed 00:08:22.940 Test: test_doorbell_buffer_config ...passed 00:08:22.940 Test: test_format_nvme ...passed 00:08:22.940 Test: test_fw_commit ...passed 00:08:22.940 Test: test_fw_image_download ...passed 00:08:22.940 Test: test_sanitize ...passed 00:08:22.940 Test: test_directive ...passed 00:08:22.940 Test: test_nvme_request_add_abort ...passed 00:08:22.940 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:08:22.940 Test: test_nvme_ctrlr_cmd_identify ...passed 00:08:22.940 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:08:22.940 00:08:22.940 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.940 suites 1 1 n/a 0 0 00:08:22.940 tests 24 24 24 0 0 00:08:22.940 asserts 198 198 198 0 n/a 00:08:22.940 00:08:22.940 Elapsed time = 0.001 seconds 00:08:22.940 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:08:22.940 00:08:22.940 00:08:22.940 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.940 http://cunit.sourceforge.net/ 00:08:22.940 00:08:22.940 00:08:22.940 Suite: nvme_ctrlr_cmd 00:08:22.940 Test: test_geometry_cmd ...passed 00:08:22.940 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:08:22.940 00:08:22.940 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.940 suites 1 1 n/a 0 0 00:08:22.940 tests 2 2 2 0 0 00:08:22.940 asserts 7 7 7 0 n/a 00:08:22.940 00:08:22.940 Elapsed time = 0.000 seconds 00:08:22.940 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:08:22.940 00:08:22.940 00:08:22.940 CUnit - A unit testing framework for C - Version 2.1-3 00:08:22.940 http://cunit.sourceforge.net/ 00:08:22.940 00:08:22.940 00:08:22.940 Suite: nvme 00:08:22.940 Test: test_nvme_ns_construct ...passed 00:08:22.940 Test: test_nvme_ns_uuid ...passed 00:08:22.940 Test: test_nvme_ns_csi ...passed 00:08:22.940 Test: test_nvme_ns_data ...passed 00:08:22.940 Test: test_nvme_ns_set_identify_data ...passed 00:08:22.940 Test: 
test_spdk_nvme_ns_get_values ...passed 00:08:22.940 Test: test_spdk_nvme_ns_is_active ...passed 00:08:22.940 Test: spdk_nvme_ns_supports ...passed 00:08:22.940 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:08:22.940 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:08:22.940 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:08:22.940 Test: test_nvme_ns_find_id_desc ...passed 00:08:22.940 00:08:22.940 Run Summary: Type Total Ran Passed Failed Inactive 00:08:22.940 suites 1 1 n/a 0 0 00:08:22.940 tests 12 12 12 0 0 00:08:22.940 asserts 83 83 83 0 n/a 00:08:22.940 00:08:22.940 Elapsed time = 0.001 seconds 00:08:22.940 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:08:23.199 00:08:23.199 00:08:23.199 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.199 http://cunit.sourceforge.net/ 00:08:23.199 00:08:23.199 00:08:23.199 Suite: nvme_ns_cmd 00:08:23.199 Test: split_test ...passed 00:08:23.199 Test: split_test2 ...passed 00:08:23.199 Test: split_test3 ...passed 00:08:23.199 Test: split_test4 ...passed 00:08:23.199 Test: test_nvme_ns_cmd_flush ...passed 00:08:23.199 Test: test_nvme_ns_cmd_dataset_management ...passed 00:08:23.199 Test: test_nvme_ns_cmd_copy ...passed 00:08:23.199 Test: test_io_flags ...[2024-07-21 11:50:21.822415] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:08:23.199 passed 00:08:23.199 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:08:23.199 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:08:23.199 Test: test_nvme_ns_cmd_reservation_register ...passed 00:08:23.199 Test: test_nvme_ns_cmd_reservation_release ...passed 00:08:23.199 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:08:23.199 Test: test_nvme_ns_cmd_reservation_report ...passed 00:08:23.199 Test: test_cmd_child_request ...passed 00:08:23.199 Test: test_nvme_ns_cmd_readv ...passed 00:08:23.199 Test: test_nvme_ns_cmd_read_with_md ...passed 00:08:23.199 Test: test_nvme_ns_cmd_writev ...[2024-07-21 11:50:21.827308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:08:23.199 passed 00:08:23.199 Test: test_nvme_ns_cmd_write_with_md ...passed 00:08:23.199 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:08:23.199 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:08:23.199 Test: test_nvme_ns_cmd_comparev ...passed 00:08:23.199 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:08:23.199 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:08:23.199 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:08:23.199 Test: test_nvme_ns_cmd_setup_request ...passed 00:08:23.199 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:08:23.199 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-07-21 11:50:21.832753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:23.199 passed 00:08:23.199 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-07-21 11:50:21.833157] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:08:23.199 passed 00:08:23.199 Test: test_nvme_ns_cmd_verify ...passed 00:08:23.199 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:08:23.199 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:08:23.199 00:08:23.199 Run Summary: Type Total Ran Passed Failed Inactive 
00:08:23.199 suites 1 1 n/a 0 0 00:08:23.199 tests 32 32 32 0 0 00:08:23.199 asserts 550 550 550 0 n/a 00:08:23.199 00:08:23.199 Elapsed time = 0.008 seconds 00:08:23.199 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:08:23.199 00:08:23.199 00:08:23.199 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.199 http://cunit.sourceforge.net/ 00:08:23.199 00:08:23.199 00:08:23.199 Suite: nvme_ns_cmd 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:08:23.199 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:08:23.199 00:08:23.199 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.199 suites 1 1 n/a 0 0 00:08:23.199 tests 12 12 12 0 0 00:08:23.199 asserts 123 123 123 0 n/a 00:08:23.199 00:08:23.199 Elapsed time = 0.001 seconds 00:08:23.199 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:08:23.199 00:08:23.199 00:08:23.199 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.199 http://cunit.sourceforge.net/ 00:08:23.199 00:08:23.199 00:08:23.199 Suite: nvme_qpair 00:08:23.199 Test: test3 ...passed 00:08:23.199 Test: test_ctrlr_failed ...passed 00:08:23.199 Test: struct_packing ...passed 00:08:23.199 Test: test_nvme_qpair_process_completions ...[2024-07-21 11:50:21.904409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:23.199 [2024-07-21 11:50:21.904917] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:23.199 [2024-07-21 11:50:21.905132] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:23.200 [2024-07-21 11:50:21.905356] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:08:23.200 passed 00:08:23.200 Test: test_nvme_completion_is_retry ...passed 00:08:23.200 Test: test_get_status_string ...passed 00:08:23.200 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:08:23.200 Test: test_nvme_qpair_submit_request ...passed 00:08:23.200 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:08:23.200 Test: test_nvme_qpair_manual_complete_request ...passed 00:08:23.200 Test: test_nvme_qpair_init_deinit ...[2024-07-21 11:50:21.906987] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:08:23.200 passed 00:08:23.200 Test: test_nvme_get_sgl_print_info 
...passed 00:08:23.200 00:08:23.200 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.200 suites 1 1 n/a 0 0 00:08:23.200 tests 12 12 12 0 0 00:08:23.200 asserts 154 154 154 0 n/a 00:08:23.200 00:08:23.200 Elapsed time = 0.002 seconds 00:08:23.200 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:08:23.200 00:08:23.200 00:08:23.200 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.200 http://cunit.sourceforge.net/ 00:08:23.200 00:08:23.200 00:08:23.200 Suite: nvme_pcie 00:08:23.200 Test: test_prp_list_append ...[2024-07-21 11:50:21.937161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:23.200 [2024-07-21 11:50:21.937628] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:08:23.200 [2024-07-21 11:50:21.937829] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:08:23.200 [2024-07-21 11:50:21.938238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:23.200 [2024-07-21 11:50:21.938492] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:08:23.200 passed 00:08:23.200 Test: test_nvme_pcie_hotplug_monitor ...passed 00:08:23.200 Test: test_shadow_doorbell_update ...passed 00:08:23.200 Test: test_build_contig_hw_sgl_request ...passed 00:08:23.200 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:08:23.200 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:08:23.200 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:08:23.200 Test: test_nvme_pcie_qpair_build_contig_request ...[2024-07-21 11:50:21.939787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:08:23.200 passed 00:08:23.200 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:08:23.200 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:08:23.200 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-07-21 11:50:21.940441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:08:23.200 passed 00:08:23.200 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-07-21 11:50:21.940798] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:08:23.200 passed 00:08:23.200 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-07-21 11:50:21.941143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:08:23.200 passed 00:08:23.200 Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-07-21 11:50:21.941488] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:08:23.200 passed 00:08:23.200 00:08:23.200 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.200 suites 1 1 n/a 0 0 00:08:23.200 tests 14 14 14 0 0 00:08:23.200 asserts 235 235 235 0 n/a 00:08:23.200 00:08:23.200 Elapsed time = 0.002 seconds 00:08:23.200 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:08:23.200 00:08:23.200 00:08:23.200 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.200 http://cunit.sourceforge.net/ 00:08:23.200 00:08:23.200 00:08:23.200 Suite: nvme_ns_cmd 00:08:23.200 Test: nvme_poll_group_create_test ...passed 00:08:23.200 Test: nvme_poll_group_add_remove_test ...passed 00:08:23.200 Test: nvme_poll_group_process_completions ...passed 00:08:23.200 Test: nvme_poll_group_destroy_test ...passed 00:08:23.200 Test: nvme_poll_group_get_free_stats ...passed 00:08:23.200 00:08:23.200 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.200 suites 1 1 n/a 0 0 00:08:23.200 tests 5 5 5 0 0 00:08:23.200 asserts 75 75 75 0 n/a 00:08:23.200 00:08:23.200 Elapsed time = 0.001 seconds 00:08:23.200 11:50:21 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:08:23.200 00:08:23.200 00:08:23.200 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.200 http://cunit.sourceforge.net/ 00:08:23.200 00:08:23.200 00:08:23.200 Suite: nvme_quirks 00:08:23.200 Test: test_nvme_quirks_striping ...passed 00:08:23.200 00:08:23.200 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.200 suites 1 1 n/a 0 0 00:08:23.200 tests 1 1 1 0 0 00:08:23.200 asserts 5 5 5 0 n/a 00:08:23.200 00:08:23.200 Elapsed time = 0.000 seconds 00:08:23.200 11:50:22 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:08:23.200 00:08:23.200 00:08:23.200 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.200 http://cunit.sourceforge.net/ 00:08:23.200 00:08:23.200 00:08:23.200 Suite: nvme_tcp 00:08:23.200 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:08:23.200 Test: test_nvme_tcp_build_iovs ...passed 00:08:23.200 Test: test_nvme_tcp_build_sgl_request ...[2024-07-21 11:50:22.050118] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7fff46f920a0, and the iovcnt=16, remaining_size=28672 00:08:23.200 passed 00:08:23.200 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:08:23.200 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:08:23.200 Test: test_nvme_tcp_req_complete_safe ...passed 00:08:23.200 Test: test_nvme_tcp_req_get ...passed 00:08:23.200 Test: test_nvme_tcp_req_init ...passed 00:08:23.200 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:08:23.200 
Test: test_nvme_tcp_qpair_write_pdu ...passed 00:08:23.200 Test: test_nvme_tcp_qpair_set_recv_state ...[2024-07-21 11:50:22.052373] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f93dc0 is same with the state(6) to be set 00:08:23.200 passed 00:08:23.200 Test: test_nvme_tcp_alloc_reqs ...passed 00:08:23.200 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-07-21 11:50:22.053239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f92f70 is same with the state(5) to be set 00:08:23.200 passed 00:08:23.200 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-21 11:50:22.053532] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7fff46f93b00 00:08:23.200 [2024-07-21 11:50:22.053711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1226:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:08:23.200 [2024-07-21 11:50:22.053960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f93430 is same with the state(5) to be set 00:08:23.200 [2024-07-21 11:50:22.054145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1177:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:08:23.200 [2024-07-21 11:50:22.054359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f93430 is same with the state(5) to be set 00:08:23.200 [2024-07-21 11:50:22.054537] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:08:23.200 [2024-07-21 11:50:22.054782] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f93430 is same with the state(5) to be set 00:08:23.200 [2024-07-21 11:50:22.054966] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f93430 is same with the state(5) to be set 00:08:23.200 [2024-07-21 11:50:22.055128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f93430 is same with the state(5) to be set 00:08:23.200 [2024-07-21 11:50:22.055313] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f93430 is same with the state(5) to be set 00:08:23.200 [2024-07-21 11:50:22.055489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f93430 is same with the state(5) to be set 00:08:23.201 [2024-07-21 11:50:22.055671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f93430 is same with the state(5) to be set 00:08:23.201 passed 00:08:23.201 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-21 11:50:22.056181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:08:23.201 [2024-07-21 11:50:22.056376] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:23.201 [2024-07-21 11:50:22.056777] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2336:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:08:23.201 passed 00:08:23.201 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:08:23.201 Test: test_nvme_tcp_c2h_payload_handle ...[2024-07-21 11:50:22.057374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff46f93640): PDU Sequence Error 00:08:23.201 passed 00:08:23.201 Test: test_nvme_tcp_icresp_handle ...[2024-07-21 11:50:22.057718] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1567:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:08:23.201 [2024-07-21 11:50:22.057907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1574:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:08:23.201 [2024-07-21 11:50:22.058074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f92f80 is same with the state(5) to be set 00:08:23.201 [2024-07-21 11:50:22.058233] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:08:23.201 [2024-07-21 11:50:22.058390] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f92f80 is same with the state(5) to be set 00:08:23.201 [2024-07-21 11:50:22.058590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f92f80 is same with the state(0) to be set 00:08:23.201 passed 00:08:23.201 Test: test_nvme_tcp_pdu_payload_handle ...[2024-07-21 11:50:22.058947] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1341:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7fff46f93b00): PDU Sequence Error 00:08:23.201 passed 00:08:23.201 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-07-21 11:50:22.059306] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1644:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7fff46f92240 00:08:23.201 passed 00:08:23.201 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:08:23.201 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-21 11:50:22.059912] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 354:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7fff46f918c0, errno=0, rc=0 00:08:23.201 [2024-07-21 11:50:22.060088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f918c0 is same with the state(5) to be set 00:08:23.201 [2024-07-21 11:50:22.060286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fff46f918c0 is same with the state(5) to be set 00:08:23.201 [2024-07-21 11:50:22.060466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff46f918c0 (0): Success 00:08:23.201 [2024-07-21 11:50:22.060651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fff46f918c0 (0): Success 00:08:23.201 passed 00:08:23.459 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-21 11:50:22.178377] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:08:23.459 [2024-07-21 11:50:22.178739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:23.459 passed 00:08:23.459 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:08:23.459 Test: test_nvme_tcp_poll_group_get_stats ...[2024-07-21 11:50:22.179651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:23.459 [2024-07-21 11:50:22.179891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2955:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:23.459 passed 00:08:23.459 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-21 11:50:22.180582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2507:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:08:23.459 [2024-07-21 11:50:22.180819] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:23.459 [2024-07-21 11:50:22.181127] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2324:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:08:23.459 [2024-07-21 11:50:22.181382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:23.459 [2024-07-21 11:50:22.181667] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23 00:08:23.459 [2024-07-21 11:50:22.181919] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:08:23.459 passed 00:08:23.459 Test: test_nvme_tcp_qpair_submit_request ...[2024-07-21 11:50:22.182466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 825:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000000c80, and the iovcnt=1, remaining_size=1024 00:08:23.459 [2024-07-21 11:50:22.182711] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1018:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:08:23.459 passed 00:08:23.459 00:08:23.459 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.459 suites 1 1 n/a 0 0 00:08:23.459 tests 27 27 27 0 0 00:08:23.459 asserts 624 624 624 0 n/a 00:08:23.459 00:08:23.459 Elapsed time = 0.125 seconds 00:08:23.459 11:50:22 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:08:23.459 00:08:23.459 00:08:23.459 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.459 http://cunit.sourceforge.net/ 00:08:23.459 00:08:23.459 00:08:23.459 Suite: nvme_transport 00:08:23.459 Test: test_nvme_get_transport ...passed 00:08:23.459 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:08:23.459 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:08:23.459 Test: test_nvme_transport_poll_group_add_remove ...passed 00:08:23.459 Test: test_ctrlr_get_memory_domains ...passed 00:08:23.459 00:08:23.459 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.459 suites 1 1 n/a 0 0 00:08:23.459 tests 5 5 5 0 0 00:08:23.459 asserts 28 28 28 0 n/a 00:08:23.459 00:08:23.459 Elapsed time = 0.000 seconds 00:08:23.459 11:50:22 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:08:23.459 00:08:23.459 
00:08:23.459 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.459 http://cunit.sourceforge.net/ 00:08:23.459 00:08:23.459 00:08:23.459 Suite: nvme_io_msg 00:08:23.459 Test: test_nvme_io_msg_send ...passed 00:08:23.459 Test: test_nvme_io_msg_process ...passed 00:08:23.459 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:08:23.459 00:08:23.459 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.459 suites 1 1 n/a 0 0 00:08:23.459 tests 3 3 3 0 0 00:08:23.459 asserts 56 56 56 0 n/a 00:08:23.459 00:08:23.459 Elapsed time = 0.000 seconds 00:08:23.459 11:50:22 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:08:23.459 00:08:23.459 00:08:23.459 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.459 http://cunit.sourceforge.net/ 00:08:23.459 00:08:23.459 00:08:23.459 Suite: nvme_pcie_common 00:08:23.459 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-21 11:50:22.298648] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:08:23.459 passed 00:08:23.459 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:08:23.459 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:08:23.459 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-07-21 11:50:22.299982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:08:23.459 [2024-07-21 11:50:22.300241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:08:23.459 [2024-07-21 11:50:22.300440] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:08:23.459 passed 00:08:23.459 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:08:23.459 Test: test_nvme_pcie_poll_group_get_stats ...[2024-07-21 11:50:22.301269] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:23.459 [2024-07-21 11:50:22.301414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:23.459 passed 00:08:23.459 00:08:23.459 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.459 suites 1 1 n/a 0 0 00:08:23.459 tests 6 6 6 0 0 00:08:23.459 asserts 148 148 148 0 n/a 00:08:23.459 00:08:23.459 Elapsed time = 0.002 seconds 00:08:23.459 11:50:22 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:08:23.717 00:08:23.717 00:08:23.717 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.717 http://cunit.sourceforge.net/ 00:08:23.717 00:08:23.717 00:08:23.717 Suite: nvme_fabric 00:08:23.717 Test: test_nvme_fabric_prop_set_cmd ...passed 00:08:23.717 Test: test_nvme_fabric_prop_get_cmd ...passed 00:08:23.717 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:08:23.717 Test: test_nvme_fabric_discover_probe ...passed 00:08:23.717 Test: test_nvme_fabric_qpair_connect ...[2024-07-21 11:50:22.337776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:08:23.717 passed 
00:08:23.717 00:08:23.717 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.717 suites 1 1 n/a 0 0 00:08:23.717 tests 5 5 5 0 0 00:08:23.717 asserts 60 60 60 0 n/a 00:08:23.717 00:08:23.717 Elapsed time = 0.001 seconds 00:08:23.717 11:50:22 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:08:23.717 00:08:23.717 00:08:23.717 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.717 http://cunit.sourceforge.net/ 00:08:23.717 00:08:23.717 00:08:23.717 Suite: nvme_opal 00:08:23.717 Test: test_opal_nvme_security_recv_send_done ...passed 00:08:23.717 Test: test_opal_add_short_atom_header ...[2024-07-21 11:50:22.372273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:08:23.717 passed 00:08:23.717 00:08:23.717 Run Summary: Type Total Ran Passed Failed Inactive 00:08:23.717 suites 1 1 n/a 0 0 00:08:23.717 tests 2 2 2 0 0 00:08:23.717 asserts 22 22 22 0 n/a 00:08:23.717 00:08:23.717 Elapsed time = 0.000 seconds 00:08:23.717 00:08:23.717 real 0m1.277s 00:08:23.717 user 0m0.632s 00:08:23.717 sys 0m0.449s 00:08:23.717 11:50:22 unittest.unittest_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:23.717 11:50:22 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:23.717 ************************************ 00:08:23.717 END TEST unittest_nvme 00:08:23.717 ************************************ 00:08:23.717 11:50:22 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:23.717 11:50:22 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:23.717 11:50:22 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.717 11:50:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:23.717 ************************************ 00:08:23.717 START TEST unittest_log 00:08:23.717 ************************************ 00:08:23.717 11:50:22 unittest.unittest_log -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:08:23.717 00:08:23.717 00:08:23.717 CUnit - A unit testing framework for C - Version 2.1-3 00:08:23.717 http://cunit.sourceforge.net/ 00:08:23.717 00:08:23.717 00:08:23.717 Suite: log 00:08:23.717 Test: log_test ...[2024-07-21 11:50:22.461112] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:08:23.717 [2024-07-21 11:50:22.461541] log_ut.c: 57:log_test: *DEBUG*: log test 00:08:23.717 log dump test: 00:08:23.717 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:08:23.717 spdk dump test: 00:08:23.717 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:08:23.717 spdk dump test: 00:08:23.717 passed 00:08:23.717 Test: deprecation ...00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:08:23.717 00000010 65 20 63 68 61 72 73 e chars 00:08:24.650 passed 00:08:24.650 00:08:24.650 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.650 suites 1 1 n/a 0 0 00:08:24.650 tests 2 2 2 0 0 00:08:24.650 asserts 73 73 73 0 n/a 00:08:24.650 00:08:24.650 Elapsed time = 0.001 seconds 00:08:24.650 00:08:24.650 real 0m1.036s 00:08:24.650 user 0m0.030s 00:08:24.650 sys 0m0.004s 00:08:24.650 11:50:23 unittest.unittest_log -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.650 11:50:23 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:08:24.650 ************************************ 00:08:24.650 END TEST 
unittest_log 00:08:24.650 ************************************ 00:08:24.909 11:50:23 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:24.909 11:50:23 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:24.909 11:50:23 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.909 11:50:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:24.909 ************************************ 00:08:24.909 START TEST unittest_lvol 00:08:24.909 ************************************ 00:08:24.909 11:50:23 unittest.unittest_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:08:24.909 00:08:24.909 00:08:24.909 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.909 http://cunit.sourceforge.net/ 00:08:24.909 00:08:24.909 00:08:24.909 Suite: lvol 00:08:24.909 Test: lvs_init_unload_success ...[2024-07-21 11:50:23.564958] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:08:24.909 passed 00:08:24.909 Test: lvs_init_destroy_success ...[2024-07-21 11:50:23.565969] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:08:24.909 passed 00:08:24.909 Test: lvs_init_opts_success ...passed 00:08:24.909 Test: lvs_unload_lvs_is_null_fail ...[2024-07-21 11:50:23.566813] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:08:24.909 passed 00:08:24.909 Test: lvs_names ...[2024-07-21 11:50:23.567136] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:08:24.909 [2024-07-21 11:50:23.567370] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
00:08:24.909 [2024-07-21 11:50:23.567713] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:08:24.909 passed 00:08:24.909 Test: lvol_create_destroy_success ...passed 00:08:24.909 Test: lvol_create_fail ...[2024-07-21 11:50:23.568926] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:08:24.909 [2024-07-21 11:50:23.569244] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:08:24.909 passed 00:08:24.909 Test: lvol_destroy_fail ...[2024-07-21 11:50:23.569967] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:08:24.909 passed 00:08:24.909 Test: lvol_close ...[2024-07-21 11:50:23.570562] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:08:24.909 [2024-07-21 11:50:23.570808] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:08:24.909 passed 00:08:24.909 Test: lvol_resize ...passed 00:08:24.909 Test: lvol_set_read_only ...passed 00:08:24.909 Test: test_lvs_load ...[2024-07-21 11:50:23.572434] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:08:24.909 [2024-07-21 11:50:23.572628] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:08:24.909 passed 00:08:24.909 Test: lvols_load ...[2024-07-21 11:50:23.573210] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:24.909 [2024-07-21 11:50:23.573493] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:08:24.909 passed 00:08:24.909 Test: lvol_open ...passed 00:08:24.909 Test: lvol_snapshot ...passed 00:08:24.909 Test: lvol_snapshot_fail ...[2024-07-21 11:50:23.575011] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:08:24.909 passed 00:08:24.909 Test: lvol_clone ...passed 00:08:24.909 Test: lvol_clone_fail ...[2024-07-21 11:50:23.576094] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:08:24.909 passed 00:08:24.909 Test: lvol_iter_clones ...passed 00:08:24.909 Test: lvol_refcnt ...[2024-07-21 11:50:23.577193] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol fc938899-2ec9-4a19-81a9-5293cf4f295d because it is still open 00:08:24.909 passed 00:08:24.909 Test: lvol_names ...[2024-07-21 11:50:23.577787] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:08:24.909 [2024-07-21 11:50:23.578064] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:24.909 [2024-07-21 11:50:23.578431] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:08:24.909 passed 00:08:24.909 Test: lvol_create_thin_provisioned ...passed 00:08:24.909 Test: lvol_rename ...[2024-07-21 11:50:23.579592] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:24.909 [2024-07-21 11:50:23.579832] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:08:24.909 passed 00:08:24.909 Test: lvs_rename ...[2024-07-21 11:50:23.580444] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:08:24.909 passed 00:08:24.909 Test: lvol_inflate ...[2024-07-21 11:50:23.581031] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:24.909 passed 00:08:24.909 Test: lvol_decouple_parent ...[2024-07-21 11:50:23.581675] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:08:24.909 passed 00:08:24.909 Test: lvol_get_xattr ...passed 00:08:24.909 Test: lvol_esnap_reload ...passed 00:08:24.909 Test: lvol_esnap_create_bad_args ...[2024-07-21 11:50:23.582945] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:08:24.909 [2024-07-21 11:50:23.583114] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:08:24.909 [2024-07-21 11:50:23.583290] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:08:24.909 [2024-07-21 11:50:23.583559] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:08:24.909 [2024-07-21 11:50:23.583875] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:08:24.909 passed 00:08:24.909 Test: lvol_esnap_create_delete ...passed 00:08:24.909 Test: lvol_esnap_load_esnaps ...[2024-07-21 11:50:23.584800] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:08:24.909 passed 00:08:24.909 Test: lvol_esnap_missing ...[2024-07-21 11:50:23.585283] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:24.909 [2024-07-21 11:50:23.585494] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:08:24.909 passed 00:08:24.909 Test: lvol_esnap_hotplug ... 
00:08:24.909 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:08:24.909 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:08:24.909 [2024-07-21 11:50:23.586981] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 0c98ebf9-a65b-4874-bade-549d27873605: failed to create esnap bs_dev: error -12 00:08:24.909 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:08:24.909 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:08:24.909 [2024-07-21 11:50:23.587586] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 631ba31f-1d6f-41fe-bb6b-ab035a4f8171: failed to create esnap bs_dev: error -12 00:08:24.909 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:08:24.909 [2024-07-21 11:50:23.587981] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 89bdf607-550d-4e24-b3c1-9fe2ca245a29: failed to create esnap bs_dev: error -12 00:08:24.909 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:08:24.909 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:08:24.909 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:08:24.909 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:08:24.909 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:08:24.909 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:08:24.909 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:08:24.909 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:08:24.909 passed 00:08:24.909 Test: lvol_get_by ...passed 00:08:24.909 Test: lvol_shallow_copy ...[2024-07-21 11:50:23.590737] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:08:24.909 [2024-07-21 11:50:23.590938] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 3fc9a956-e432-402a-90b5-9741894e0073 shallow copy, ext_dev must not be NULL 00:08:24.909 passed 00:08:24.909 Test: lvol_set_parent ...[2024-07-21 11:50:23.591478] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:08:24.909 [2024-07-21 11:50:23.591669] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:08:24.909 passed 00:08:24.909 Test: lvol_set_external_parent ...[2024-07-21 11:50:23.592279] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:08:24.909 [2024-07-21 11:50:23.592472] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:08:24.909 [2024-07-21 11:50:23.592672] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:08:24.909 passed 00:08:24.909 00:08:24.909 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.909 suites 1 1 n/a 0 0 00:08:24.909 tests 37 37 37 0 0 00:08:24.910 asserts 1505 1505 1505 0 n/a 00:08:24.910 00:08:24.910 Elapsed time = 0.016 seconds 00:08:24.910 00:08:24.910 real 0m0.069s 00:08:24.910 user 0m0.030s 00:08:24.910 sys 0m0.026s 
00:08:24.910 11:50:23 unittest.unittest_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.910 11:50:23 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:24.910 ************************************ 00:08:24.910 END TEST unittest_lvol 00:08:24.910 ************************************ 00:08:24.910 11:50:23 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:24.910 11:50:23 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:24.910 11:50:23 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:24.910 11:50:23 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.910 11:50:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:24.910 ************************************ 00:08:24.910 START TEST unittest_nvme_rdma 00:08:24.910 ************************************ 00:08:24.910 11:50:23 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:08:24.910 00:08:24.910 00:08:24.910 CUnit - A unit testing framework for C - Version 2.1-3 00:08:24.910 http://cunit.sourceforge.net/ 00:08:24.910 00:08:24.910 00:08:24.910 Suite: nvme_rdma 00:08:24.910 Test: test_nvme_rdma_build_sgl_request ...[2024-07-21 11:50:23.692823] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:08:24.910 [2024-07-21 11:50:23.693480] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1632:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:24.910 [2024-07-21 11:50:23.693818] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1688:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:08:24.910 passed 00:08:24.910 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:08:24.910 Test: test_nvme_rdma_build_contig_request ...[2024-07-21 11:50:23.694555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1569:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:08:24.910 passed 00:08:24.910 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:08:24.910 Test: test_nvme_rdma_create_reqs ...[2024-07-21 11:50:23.695328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1011:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:08:24.910 passed 00:08:24.910 Test: test_nvme_rdma_create_rsps ...[2024-07-21 11:50:23.696158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 929:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:08:24.910 passed 00:08:24.910 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-07-21 11:50:23.696735] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:08:24.910 [2024-07-21 11:50:23.696973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1826:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:08:24.910 passed 00:08:24.910 Test: test_nvme_rdma_poller_create ...passed 00:08:24.910 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-07-21 11:50:23.697845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 530:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:08:24.910 passed 00:08:24.910 Test: test_nvme_rdma_ctrlr_construct ...passed 00:08:24.910 Test: test_nvme_rdma_req_put_and_get ...passed 00:08:24.910 Test: test_nvme_rdma_req_init ...passed 00:08:24.910 Test: test_nvme_rdma_validate_cm_event ...[2024-07-21 11:50:23.699095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:08:24.910 [2024-07-21 11:50:23.699372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 621:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:08:24.910 passed 00:08:24.910 Test: test_nvme_rdma_qpair_init ...passed 00:08:24.910 Test: test_nvme_rdma_qpair_submit_request ...passed 00:08:24.910 Test: test_nvme_rdma_memory_domain ...[2024-07-21 11:50:23.700411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 353:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:08:24.910 passed 00:08:24.910 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:08:24.910 Test: test_rdma_get_memory_translation ...[2024-07-21 11:50:23.701045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1448:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:08:24.910 [2024-07-21 11:50:23.701259] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1459:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:08:24.910 passed 00:08:24.910 Test: test_get_rdma_qpair_from_wc ...passed 00:08:24.910 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:08:24.910 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-21 11:50:23.702027] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:24.910 [2024-07-21 11:50:23.702231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3273:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:08:24.910 passed 00:08:24.910 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-21 11:50:23.702912] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:08:24.910 [2024-07-21 11:50:23.703156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:08:24.910 [2024-07-21 11:50:23.703380] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffcb6251ed0 on poll group 0x60c000000040 00:08:24.910 [2024-07-21 11:50:23.703632] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2985:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:08:24.910 [2024-07-21 11:50:23.703871] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3031:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:08:24.910 [2024-07-21 11:50:23.704095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 727:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffcb6251ed0 on poll group 0x60c000000040 00:08:24.910 [2024-07-21 11:50:23.704352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 705:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:24.910 passed 00:08:24.910 00:08:24.910 Run Summary: Type Total Ran Passed Failed Inactive 00:08:24.910 suites 1 1 n/a 0 0 00:08:24.910 tests 22 22 22 0 0 00:08:24.910 asserts 412 412 412 0 n/a 00:08:24.910 00:08:24.910 Elapsed time = 0.005 seconds 00:08:24.910 00:08:24.910 real 0m0.046s 00:08:24.910 user 0m0.019s 00:08:24.910 sys 0m0.020s 00:08:24.910 11:50:23 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:24.910 11:50:23 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:24.910 ************************************ 00:08:24.910 END TEST unittest_nvme_rdma 00:08:24.910 ************************************ 00:08:24.910 11:50:23 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:24.910 11:50:23 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:24.910 11:50:23 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:24.910 11:50:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:24.910 ************************************ 00:08:24.910 START TEST unittest_nvmf_transport 00:08:24.910 ************************************ 00:08:24.910 11:50:23 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:08:25.168 00:08:25.168 00:08:25.168 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.168 http://cunit.sourceforge.net/ 00:08:25.168 00:08:25.168 00:08:25.168 Suite: nvmf 00:08:25.168 Test: test_spdk_nvmf_transport_create ...[2024-07-21 11:50:23.792930] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:08:25.168 [2024-07-21 11:50:23.793309] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:08:25.168 [2024-07-21 11:50:23.793379] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:08:25.168 [2024-07-21 11:50:23.793528] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:08:25.168 passed 00:08:25.168 Test: test_nvmf_transport_poll_group_create ...passed 00:08:25.168 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-21 11:50:23.793800] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:08:25.168 [2024-07-21 11:50:23.793901] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:08:25.168 [2024-07-21 11:50:23.793938] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:08:25.168 passed 00:08:25.168 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:08:25.168 00:08:25.168 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.168 suites 1 1 n/a 0 0 00:08:25.168 tests 4 4 4 0 0 00:08:25.168 asserts 49 49 49 0 n/a 00:08:25.168 00:08:25.168 Elapsed time = 0.001 seconds 00:08:25.168 00:08:25.168 real 0m0.041s 00:08:25.168 user 0m0.018s 00:08:25.168 sys 0m0.023s 00:08:25.168 11:50:23 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.168 11:50:23 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:08:25.168 ************************************ 00:08:25.168 END TEST unittest_nvmf_transport 00:08:25.168 ************************************ 00:08:25.168 11:50:23 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:25.168 11:50:23 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:25.168 11:50:23 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.168 11:50:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:25.168 ************************************ 00:08:25.168 START TEST unittest_rdma 00:08:25.168 ************************************ 00:08:25.168 11:50:23 unittest.unittest_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:08:25.168 00:08:25.168 00:08:25.168 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.168 http://cunit.sourceforge.net/ 00:08:25.168 00:08:25.168 00:08:25.168 Suite: rdma_common 00:08:25.168 Test: test_spdk_rdma_pd ...[2024-07-21 11:50:23.880269] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:25.168 passed 00:08:25.168 00:08:25.168 [2024-07-21 11:50:23.880607] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:08:25.168 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.168 suites 1 1 n/a 0 0 00:08:25.168 tests 1 1 1 0 0 00:08:25.168 asserts 31 31 31 0 n/a 00:08:25.168 00:08:25.168 Elapsed time = 0.001 seconds 00:08:25.168 00:08:25.168 real 0m0.029s 00:08:25.168 user 0m0.017s 00:08:25.168 sys 0m0.012s 00:08:25.168 11:50:23 unittest.unittest_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.168 11:50:23 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:25.168 ************************************ 00:08:25.168 END TEST unittest_rdma 00:08:25.168 ************************************ 00:08:25.168 11:50:23 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:25.168 11:50:23 unittest -- unit/unittest.sh@258 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:25.168 11:50:23 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:25.168 11:50:23 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.168 11:50:23 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:25.168 
************************************ 00:08:25.168 START TEST unittest_nvme_cuse 00:08:25.168 ************************************ 00:08:25.168 11:50:23 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:08:25.168 00:08:25.168 00:08:25.168 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.168 http://cunit.sourceforge.net/ 00:08:25.168 00:08:25.168 00:08:25.168 Suite: nvme_cuse 00:08:25.168 Test: test_cuse_nvme_submit_io_read_write ...passed 00:08:25.168 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:08:25.168 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:08:25.168 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:08:25.168 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:08:25.168 Test: test_cuse_nvme_submit_io ...[2024-07-21 11:50:23.968365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:08:25.168 passed 00:08:25.168 Test: test_cuse_nvme_reset ...[2024-07-21 11:50:23.968702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:08:25.168 passed 00:08:25.733 Test: test_nvme_cuse_stop ...passed 00:08:25.733 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:08:25.733 00:08:25.733 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.733 suites 1 1 n/a 0 0 00:08:25.733 tests 9 9 9 0 0 00:08:25.733 asserts 118 118 118 0 n/a 00:08:25.733 00:08:25.733 Elapsed time = 0.505 seconds 00:08:25.733 00:08:25.733 real 0m0.539s 00:08:25.733 user 0m0.294s 00:08:25.733 sys 0m0.246s 00:08:25.733 11:50:24 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:25.733 11:50:24 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x 00:08:25.733 ************************************ 00:08:25.733 END TEST unittest_nvme_cuse 00:08:25.733 ************************************ 00:08:25.733 11:50:24 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:08:25.733 11:50:24 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:25.733 11:50:24 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:25.733 11:50:24 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:25.733 ************************************ 00:08:25.733 START TEST unittest_nvmf 00:08:25.733 ************************************ 00:08:25.733 11:50:24 unittest.unittest_nvmf -- common/autotest_common.sh@1121 -- # unittest_nvmf 00:08:25.733 11:50:24 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:08:25.733 00:08:25.733 00:08:25.733 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.733 http://cunit.sourceforge.net/ 00:08:25.733 00:08:25.733 00:08:25.733 Suite: nvmf 00:08:25.733 Test: test_get_log_page ...passed 00:08:25.733 Test: test_process_fabrics_cmd ...[2024-07-21 11:50:24.563928] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2612:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:08:25.733 passed 00:08:25.733 Test: test_connect ...[2024-07-21 11:50:24.564333] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:08:25.733 [2024-07-21 11:50:24.565004] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1006:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:08:25.733 
[2024-07-21 11:50:24.565149] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 869:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:08:25.733 [2024-07-21 11:50:24.565197] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1045:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:08:25.733 [2024-07-21 11:50:24.565251] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:08:25.733 [2024-07-21 11:50:24.565376] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 880:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:08:25.733 [2024-07-21 11:50:24.565469] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 887:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:08:25.733 [2024-07-21 11:50:24.565528] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 893:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:08:25.733 [2024-07-21 11:50:24.565601] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 920:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:08:25.733 [2024-07-21 11:50:24.565735] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 755:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:08:25.733 [2024-07-21 11:50:24.565876] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 670:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:08:25.733 [2024-07-21 11:50:24.566253] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:08:25.733 [2024-07-21 11:50:24.566376] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:08:25.733 [2024-07-21 11:50:24.566451] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:08:25.733 [2024-07-21 11:50:24.566541] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 713:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:08:25.734 [2024-07-21 11:50:24.566671] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 293:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 00:08:25.734 [2024-07-21 11:50:24.566898] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil)) 00:08:25.734 [2024-07-21 11:50:24.567018] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 800:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:08:25.734 passed 00:08:25.734 Test: test_get_ns_id_desc_list ...passed 00:08:25.734 Test: test_identify_ns ...[2024-07-21 11:50:24.567366] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.734 [2024-07-21 11:50:24.567688] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:08:25.734 [2024-07-21 11:50:24.567842] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:08:25.734 passed 00:08:25.734 Test: test_identify_ns_iocs_specific ...[2024-07-21 11:50:24.568031] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.734 [2024-07-21 
11:50:24.568360] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:08:25.734 passed 00:08:25.734 Test: test_reservation_write_exclusive ...passed 00:08:25.734 Test: test_reservation_exclusive_access ...passed 00:08:25.734 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:08:25.734 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:08:25.734 Test: test_reservation_notification_log_page ...passed 00:08:25.734 Test: test_get_dif_ctx ...passed 00:08:25.734 Test: test_set_get_features ...[2024-07-21 11:50:24.568993] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:25.734 [2024-07-21 11:50:24.569095] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1642:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:08:25.734 [2024-07-21 11:50:24.569163] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1653:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:08:25.734 [2024-07-21 11:50:24.569228] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1729:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:08:25.734 passed 00:08:25.734 Test: test_identify_ctrlr ...passed 00:08:25.734 Test: test_identify_ctrlr_iocs_specific ...passed 00:08:25.734 Test: test_custom_admin_cmd ...passed 00:08:25.734 Test: test_fused_compare_and_write ...[2024-07-21 11:50:24.569760] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4212:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:08:25.734 [2024-07-21 11:50:24.569835] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4201:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:25.734 [2024-07-21 11:50:24.569897] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4219:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:08:25.734 passed 00:08:25.734 Test: test_multi_async_event_reqs ...passed 00:08:25.734 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:08:25.734 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:08:25.734 Test: test_multi_async_events ...passed 00:08:25.734 Test: test_rae ...passed 00:08:25.734 Test: test_nvmf_ctrlr_create_destruct ...passed 00:08:25.734 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:08:25.734 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:08:25.734 Test: test_zcopy_read ...passed 00:08:25.734 Test: test_zcopy_write ...passed 00:08:25.734 Test: test_nvmf_property_set ...[2024-07-21 11:50:24.570594] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:08:25.734 [2024-07-21 11:50:24.570694] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:08:25.734 passed 00:08:25.734 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-07-21 11:50:24.570893] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:25.734 passed 00:08:25.734 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-07-21 11:50:24.570961] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1940:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:08:25.734 [2024-07-21 11:50:24.571049] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1963:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:08:25.734 passed 00:08:25.734 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:08:25.734 Test: test_nvmf_check_qpair_active ...[2024-07-21 11:50:24.571094] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:08:25.734 [2024-07-21 11:50:24.571165] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1981:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:08:25.734 [2024-07-21 11:50:24.571290] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4677:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:08:25.734 [2024-07-21 11:50:24.571340] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4691:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:08:25.734 [2024-07-21 11:50:24.571379] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:08:25.734 [2024-07-21 11:50:24.571427] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:08:25.734 [2024-07-21 11:50:24.571458] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4703:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:08:25.734 passed 00:08:25.734 00:08:25.734 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.734 suites 1 1 n/a 0 0 00:08:25.734 tests 32 32 32 0 0 00:08:25.734 asserts 977 977 977 0 n/a 00:08:25.734 00:08:25.734 Elapsed time = 0.008 seconds 00:08:25.734 11:50:24 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:08:25.993 00:08:25.993 00:08:25.993 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.993 http://cunit.sourceforge.net/ 00:08:25.993 00:08:25.993 00:08:25.993 Suite: nvmf 00:08:25.993 Test: test_get_rw_params ...passed 00:08:25.993 Test: test_get_rw_ext_params ...passed 00:08:25.993 Test: test_lba_in_range ...passed 00:08:25.993 Test: test_get_dif_ctx ...passed 00:08:25.993 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:08:25.993 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-07-21 11:50:24.602372] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:08:25.993 [2024-07-21 11:50:24.602747] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:08:25.993 [2024-07-21 11:50:24.602866] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 462:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:08:25.993 passed 00:08:25.993 Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-07-21 11:50:24.602936] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:08:25.993 [2024-07-21 11:50:24.603022] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 972:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:08:25.993 passed 00:08:25.993 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-21 11:50:24.603132] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 
00:08:25.993 [2024-07-21 11:50:24.603174] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 408:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:08:25.993 [2024-07-21 11:50:24.603251] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:08:25.993 passed 00:08:25.993 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:08:25.993 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:08:25.993 00:08:25.993 [2024-07-21 11:50:24.603291] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:08:25.993 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.993 suites 1 1 n/a 0 0 00:08:25.993 tests 10 10 10 0 0 00:08:25.993 asserts 159 159 159 0 n/a 00:08:25.993 00:08:25.993 Elapsed time = 0.001 seconds 00:08:25.993 11:50:24 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:08:25.993 00:08:25.993 00:08:25.993 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.993 http://cunit.sourceforge.net/ 00:08:25.993 00:08:25.993 00:08:25.993 Suite: nvmf 00:08:25.993 Test: test_discovery_log ...passed 00:08:25.993 Test: test_discovery_log_with_filters ...passed 00:08:25.993 00:08:25.993 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.993 suites 1 1 n/a 0 0 00:08:25.993 tests 2 2 2 0 0 00:08:25.993 asserts 238 238 238 0 n/a 00:08:25.993 00:08:25.993 Elapsed time = 0.003 seconds 00:08:25.993 11:50:24 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:08:25.993 00:08:25.993 00:08:25.993 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.993 http://cunit.sourceforge.net/ 00:08:25.993 00:08:25.993 00:08:25.993 Suite: nvmf 00:08:25.993 Test: nvmf_test_create_subsystem ...[2024-07-21 11:50:24.675458] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:08:25.993 [2024-07-21 11:50:24.675777] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:08:25.993 [2024-07-21 11:50:24.675955] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:08:25.993 [2024-07-21 11:50:24.676058] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:08:25.993 [2024-07-21 11:50:24.676099] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:08:25.993 [2024-07-21 11:50:24.676351] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:08:25.993 [2024-07-21 11:50:24.676451] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 
00:08:25.993 [2024-07-21 11:50:24.676525] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:08:25.993 [2024-07-21 11:50:24.676882] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:08:25.993 [2024-07-21 11:50:24.676926] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:08:25.993 [2024-07-21 11:50:24.676964] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:08:25.993 [2024-07-21 11:50:24.677010] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:08:25.993 [2024-07-21 11:50:24.677128] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:08:25.993 [2024-07-21 11:50:24.677238] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:08:25.993 [2024-07-21 11:50:24.677352] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:08:25.993 [2024-07-21 11:50:24.677400] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:08:25.993 [2024-07-21 11:50:24.677511] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:08:25.994 [2024-07-21 11:50:24.677557] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:08:25.994 [2024-07-21 11:50:24.677600] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:25.994 [2024-07-21 11:50:24.677655] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:25.994 [2024-07-21 11:50:24.677697] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:08:25.994 passed 00:08:25.994 Test: test_spdk_nvmf_subsystem_add_ns ...[2024-07-21 11:50:24.677741] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:08:25.994 [2024-07-21 11:50:24.677954] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:08:25.994 [2024-07-21 11:50:24.678018] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2010:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:08:25.994 passed 00:08:25.994 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-07-21 11:50:24.678285] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2138:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 
00:08:25.994 passed 00:08:25.994 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:08:25.994 Test: test_spdk_nvmf_ns_visible ...[2024-07-21 11:50:24.678521] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:08:25.994 passed 00:08:25.994 Test: test_reservation_register ...[2024-07-21 11:50:24.678999] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:25.994 [2024-07-21 11:50:24.679141] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3135:nvmf_ns_reservation_register: *ERROR*: No registrant 00:08:25.994 passed 00:08:25.994 Test: test_reservation_register_with_ptpl ...passed 00:08:25.994 Test: test_reservation_acquire_preempt_1 ...[2024-07-21 11:50:24.680223] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:25.994 passed 00:08:25.994 Test: test_reservation_acquire_release_with_ptpl ...passed 00:08:25.994 Test: test_reservation_release ...[2024-07-21 11:50:24.681836] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:25.994 passed 00:08:25.994 Test: test_reservation_unregister_notification ...[2024-07-21 11:50:24.682084] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:25.994 passed 00:08:25.994 Test: test_reservation_release_notification ...[2024-07-21 11:50:24.682302] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:25.994 passed 00:08:25.994 Test: test_reservation_release_notification_write_exclusive ...[2024-07-21 11:50:24.682551] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:25.994 passed 00:08:25.994 Test: test_reservation_clear_notification ...[2024-07-21 11:50:24.682806] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:25.994 passed 00:08:25.994 Test: test_reservation_preempt_notification ...[2024-07-21 11:50:24.683055] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3077:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:08:25.994 passed 00:08:25.994 Test: test_spdk_nvmf_ns_event ...passed 00:08:25.994 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:08:25.994 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:08:25.994 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-21 11:50:24.683847] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:08:25.994 passed 00:08:25.994 Test: test_nvmf_ns_reservation_report ...[2024-07-21 11:50:24.683973] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:08:25.994 passed 00:08:25.994 Test: test_nvmf_nqn_is_valid ...[2024-07-21 11:50:24.684139] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3440:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:08:25.994 [2024-07-21 
11:50:24.684231] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:08:25.994 passed 00:08:25.994 Test: test_nvmf_ns_reservation_restore ...[2024-07-21 11:50:24.684304] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff206ddd-4ce1-4fe2-ae79-0271bb168ae": uuid is not the correct length 00:08:25.994 [2024-07-21 11:50:24.684343] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:08:25.994 passed 00:08:25.994 Test: test_nvmf_subsystem_state_change ...[2024-07-21 11:50:24.684452] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2634:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:08:25.994 passed 00:08:25.994 Test: test_nvmf_reservation_custom_ops ...passed 00:08:25.994 00:08:25.994 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.994 suites 1 1 n/a 0 0 00:08:25.994 tests 24 24 24 0 0 00:08:25.994 asserts 499 499 499 0 n/a 00:08:25.994 00:08:25.994 Elapsed time = 0.010 seconds 00:08:25.994 11:50:24 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:08:25.994 00:08:25.994 00:08:25.994 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.994 http://cunit.sourceforge.net/ 00:08:25.994 00:08:25.994 00:08:25.994 Suite: nvmf 00:08:25.994 Test: test_nvmf_tcp_create ...[2024-07-21 11:50:24.750547] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:08:25.994 passed 00:08:25.994 Test: test_nvmf_tcp_destroy ...passed 00:08:25.994 Test: test_nvmf_tcp_poll_group_create ...passed 00:08:25.994 Test: test_nvmf_tcp_send_c2h_data ...passed 00:08:25.994 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:08:25.994 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:08:25.994 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:08:25.994 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-21 11:50:24.853914] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 passed 00:08:25.994 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:08:25.994 Test: test_nvmf_tcp_icreq_handle ...[2024-07-21 11:50:24.854029] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb400 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.854135] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb400 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.854188] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.854229] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb400 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.854332] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:25.994 [2024-07-21 11:50:24.854429] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, 
errno=2 00:08:25.994 [2024-07-21 11:50:24.854491] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb400 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.854534] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2113:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:08:25.994 [2024-07-21 11:50:24.854608] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb400 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.854655] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.854692] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb400 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.854728] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.854789] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb400 is same with the state(5) to be set 00:08:25.994 passed 00:08:25.994 Test: test_nvmf_tcp_check_xfer_type ...passed 00:08:25.994 Test: test_nvmf_tcp_invalid_sgl ...[2024-07-21 11:50:24.854873] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2508:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:08:25.994 [2024-07-21 11:50:24.854929] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.854967] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb400 is same with the state(5) to be set 00:08:25.994 passed 00:08:25.994 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-21 11:50:24.855022] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2240:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffe71ecc160 00:08:25.994 [2024-07-21 11:50:24.855113] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.855172] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.855216] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2297:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffe71ecb8c0 00:08:25.994 [2024-07-21 11:50:24.855258] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.855295] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.855334] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2250:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:08:25.994 [2024-07-21 11:50:24.855377] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.855432] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.855475] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2289:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:08:25.994 [2024-07-21 11:50:24.855519] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.855565] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.855605] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.855649] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.855713] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.994 [2024-07-21 11:50:24.855751] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:25.994 [2024-07-21 11:50:24.855806] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.995 [2024-07-21 11:50:24.855843] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:25.995 [2024-07-21 11:50:24.855900] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.995 [2024-07-21 11:50:24.855939] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:25.995 [2024-07-21 11:50:24.855998] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.995 [2024-07-21 11:50:24.856037] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:25.995 [2024-07-21 11:50:24.856091] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1086:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:08:25.995 passed 00:08:25.995 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-21 11:50:24.856147] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffe71ecb8c0 is same with the state(5) to be set 00:08:26.253 passed 00:08:26.253 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-07-21 11:50:24.881266] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:08:26.253 passed 00:08:26.253 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-21 11:50:24.881401] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:08:26.253 [2024-07-21 11:50:24.881887] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:08:26.253 [2024-07-21 11:50:24.881974] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:08:26.253 passed 00:08:26.253 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-07-21 11:50:24.882297] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:08:26.253 [2024-07-21 11:50:24.882362] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:08:26.253 passed 00:08:26.253 00:08:26.253 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.253 suites 1 1 n/a 0 0 00:08:26.253 tests 17 17 17 0 0 00:08:26.253 asserts 222 222 222 0 n/a 00:08:26.253 00:08:26.253 Elapsed time = 0.156 seconds 00:08:26.253 11:50:24 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:08:26.253 00:08:26.253 00:08:26.253 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.253 http://cunit.sourceforge.net/ 00:08:26.253 00:08:26.253 00:08:26.253 Suite: nvmf 00:08:26.253 Test: test_nvmf_tgt_create_poll_group ...passed 00:08:26.253 00:08:26.253 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.253 suites 1 1 n/a 0 0 00:08:26.253 tests 1 1 1 0 0 00:08:26.253 asserts 17 17 17 0 n/a 00:08:26.253 00:08:26.253 Elapsed time = 0.024 seconds 00:08:26.253 00:08:26.253 real 0m0.489s 00:08:26.253 user 0m0.197s 00:08:26.253 sys 0m0.291s 00:08:26.253 11:50:25 unittest.unittest_nvmf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.253 ************************************ 00:08:26.253 END TEST unittest_nvmf 00:08:26.253 11:50:25 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:08:26.253 ************************************ 00:08:26.253 11:50:25 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:26.253 11:50:25 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:26.253 11:50:25 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:26.253 11:50:25 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:26.253 11:50:25 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.253 11:50:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:26.253 ************************************ 00:08:26.253 START TEST unittest_nvmf_rdma 00:08:26.253 ************************************ 00:08:26.253 11:50:25 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:08:26.253 00:08:26.253 00:08:26.253 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.253 http://cunit.sourceforge.net/ 00:08:26.253 00:08:26.253 00:08:26.253 Suite: nvmf 00:08:26.253 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-21 11:50:25.113593] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1858:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:08:26.253 [2024-07-21 11:50:25.114315] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1908:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:08:26.253 [2024-07-21 11:50:25.114385] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1908:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:08:26.253 passed 00:08:26.253 Test: test_spdk_nvmf_rdma_request_process ...passed 00:08:26.253 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:08:26.253 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:08:26.253 Test: test_nvmf_rdma_opts_init ...passed 00:08:26.253 Test: test_nvmf_rdma_request_free_data ...passed 00:08:26.253 Test: test_nvmf_rdma_resources_create ...passed 00:08:26.253 Test: test_nvmf_rdma_qpair_compare ...passed 00:08:26.253 Test: test_nvmf_rdma_resize_cq ...[2024-07-21 11:50:25.118235] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 949:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:08:26.253 Using CQ of insufficient size may lead to CQ overrun 00:08:26.253 [2024-07-21 11:50:25.118382] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 954:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:08:26.253 [2024-07-21 11:50:25.118457] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 962:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:08:26.512 passed 00:08:26.512 00:08:26.512 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.512 suites 1 1 n/a 0 0 00:08:26.512 tests 9 9 9 0 0 00:08:26.512 asserts 579 579 579 0 n/a 00:08:26.512 00:08:26.512 Elapsed time = 0.006 seconds 00:08:26.512 00:08:26.512 real 0m0.046s 00:08:26.512 user 0m0.024s 00:08:26.512 sys 0m0.022s 00:08:26.512 11:50:25 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.512 11:50:25 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:26.512 ************************************ 00:08:26.512 END TEST unittest_nvmf_rdma 00:08:26.512 ************************************ 00:08:26.512 11:50:25 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:26.512 11:50:25 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:08:26.512 11:50:25 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:26.512 11:50:25 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.512 11:50:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:26.512 ************************************ 00:08:26.512 START TEST unittest_scsi 00:08:26.512 ************************************ 00:08:26.512 11:50:25 unittest.unittest_scsi -- common/autotest_common.sh@1121 -- # unittest_scsi 00:08:26.512 11:50:25 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:08:26.512 00:08:26.512 00:08:26.512 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.512 http://cunit.sourceforge.net/ 00:08:26.512 00:08:26.512 00:08:26.512 Suite: dev_suite 00:08:26.512 Test: dev_destruct_null_dev ...passed 00:08:26.512 Test: dev_destruct_zero_luns ...passed 00:08:26.512 Test: dev_destruct_null_lun ...passed 00:08:26.512 Test: dev_destruct_success ...passed 00:08:26.512 Test: dev_construct_num_luns_zero ...passed 00:08:26.512 Test: dev_construct_no_lun_zero ...[2024-07-21 11:50:25.208578] 
/home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:08:26.512 [2024-07-21 11:50:25.208856] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:08:26.512 passed 00:08:26.512 Test: dev_construct_null_lun ...passed 00:08:26.512 Test: dev_construct_name_too_long ...passed 00:08:26.512 Test: dev_construct_success ...[2024-07-21 11:50:25.208904] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:08:26.512 [2024-07-21 11:50:25.208946] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:08:26.512 passed 00:08:26.512 Test: dev_construct_success_lun_zero_not_first ...passed 00:08:26.512 Test: dev_queue_mgmt_task_success ...passed 00:08:26.513 Test: dev_queue_task_success ...passed 00:08:26.513 Test: dev_stop_success ...passed 00:08:26.513 Test: dev_add_port_max_ports ...passed 00:08:26.513 Test: dev_add_port_construct_failure1 ...[2024-07-21 11:50:25.209214] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:08:26.513 [2024-07-21 11:50:25.209306] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:08:26.513 passed 00:08:26.513 Test: dev_add_port_construct_failure2 ...passed 00:08:26.513 Test: dev_add_port_success1 ...passed 00:08:26.513 Test: dev_add_port_success2 ...passed 00:08:26.513 Test: dev_add_port_success3 ...passed 00:08:26.513 Test: dev_find_port_by_id_num_ports_zero ...passed 00:08:26.513 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:08:26.513 Test: dev_find_port_by_id_success ...passed 00:08:26.513 Test: dev_add_lun_bdev_not_found ...passed 00:08:26.513 Test: dev_add_lun_no_free_lun_id ...[2024-07-21 11:50:25.209386] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:08:26.513 passed 00:08:26.513 Test: dev_add_lun_success1 ...[2024-07-21 11:50:25.209703] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:08:26.513 passed 00:08:26.513 Test: dev_add_lun_success2 ...passed 00:08:26.513 Test: dev_check_pending_tasks ...passed 00:08:26.513 Test: dev_iterate_luns ...passed 00:08:26.513 Test: dev_find_free_lun ...passed 00:08:26.513 00:08:26.513 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.513 suites 1 1 n/a 0 0 00:08:26.513 tests 29 29 29 0 0 00:08:26.513 asserts 97 97 97 0 n/a 00:08:26.513 00:08:26.513 Elapsed time = 0.002 seconds 00:08:26.513 11:50:25 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:08:26.513 00:08:26.513 00:08:26.513 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.513 http://cunit.sourceforge.net/ 00:08:26.513 00:08:26.513 00:08:26.513 Suite: lun_suite 00:08:26.513 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:08:26.513 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-21 11:50:25.246002] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 
169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:08:26.513 [2024-07-21 11:50:25.246372] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:08:26.513 passed 00:08:26.513 Test: lun_task_mgmt_execute_lun_reset ...passed 00:08:26.513 Test: lun_task_mgmt_execute_target_reset ...passed 00:08:26.513 Test: lun_task_mgmt_execute_invalid_case ...passed 00:08:26.513 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...[2024-07-21 11:50:25.246553] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:08:26.513 passed 00:08:26.513 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:08:26.513 Test: lun_append_task_null_lun_not_supported ...passed 00:08:26.513 Test: lun_execute_scsi_task_pending ...passed 00:08:26.513 Test: lun_execute_scsi_task_complete ...passed 00:08:26.513 Test: lun_execute_scsi_task_resize ...passed 00:08:26.513 Test: lun_destruct_success ...passed 00:08:26.513 Test: lun_construct_null_ctx ...passed 00:08:26.513 Test: lun_construct_success ...passed 00:08:26.513 Test: lun_reset_task_wait_scsi_task_complete ...[2024-07-21 11:50:25.246772] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:08:26.513 passed 00:08:26.513 Test: lun_reset_task_suspend_scsi_task ...passed 00:08:26.513 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:08:26.513 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:08:26.513 00:08:26.513 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.513 suites 1 1 n/a 0 0 00:08:26.513 tests 18 18 18 0 0 00:08:26.513 asserts 153 153 153 0 n/a 00:08:26.513 00:08:26.513 Elapsed time = 0.001 seconds 00:08:26.513 11:50:25 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:08:26.513 00:08:26.513 00:08:26.513 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.513 http://cunit.sourceforge.net/ 00:08:26.513 00:08:26.513 00:08:26.513 Suite: scsi_suite 00:08:26.513 Test: scsi_init ...passed 00:08:26.513 00:08:26.513 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.513 suites 1 1 n/a 0 0 00:08:26.513 tests 1 1 1 0 0 00:08:26.513 asserts 1 1 1 0 n/a 00:08:26.513 00:08:26.513 Elapsed time = 0.000 seconds 00:08:26.513 11:50:25 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:08:26.513 00:08:26.513 00:08:26.513 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.513 http://cunit.sourceforge.net/ 00:08:26.513 00:08:26.513 00:08:26.513 Suite: translation_suite 00:08:26.513 Test: mode_select_6_test ...passed 00:08:26.513 Test: mode_select_6_test2 ...passed 00:08:26.513 Test: mode_sense_6_test ...passed 00:08:26.513 Test: mode_sense_10_test ...passed 00:08:26.513 Test: inquiry_evpd_test ...passed 00:08:26.513 Test: inquiry_standard_test ...passed 00:08:26.513 Test: inquiry_overflow_test ...passed 00:08:26.513 Test: task_complete_test ...passed 00:08:26.513 Test: lba_range_test ...passed 00:08:26.513 Test: xfer_len_test ...[2024-07-21 11:50:25.303676] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:08:26.513 passed 00:08:26.513 Test: xfer_test ...passed 00:08:26.513 Test: scsi_name_padding_test ...passed 00:08:26.513 Test: get_dif_ctx_test ...passed 00:08:26.513 Test: 
unmap_split_test ...passed 00:08:26.513 00:08:26.513 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.513 suites 1 1 n/a 0 0 00:08:26.513 tests 14 14 14 0 0 00:08:26.513 asserts 1205 1205 1205 0 n/a 00:08:26.513 00:08:26.513 Elapsed time = 0.004 seconds 00:08:26.513 11:50:25 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:08:26.513 00:08:26.513 00:08:26.513 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.513 http://cunit.sourceforge.net/ 00:08:26.513 00:08:26.513 00:08:26.513 Suite: reservation_suite 00:08:26.513 Test: test_reservation_register ...[2024-07-21 11:50:25.338926] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:26.513 passed 00:08:26.513 Test: test_reservation_reserve ...[2024-07-21 11:50:25.339328] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:26.513 [2024-07-21 11:50:25.339411] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:08:26.513 passed 00:08:26.513 Test: test_reservation_preempt_non_all_regs ...[2024-07-21 11:50:25.339517] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:08:26.513 [2024-07-21 11:50:25.339605] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:26.513 [2024-07-21 11:50:25.339689] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:08:26.513 passed 00:08:26.513 Test: test_reservation_preempt_all_regs ...[2024-07-21 11:50:25.339836] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:26.513 passed 00:08:26.513 Test: test_reservation_cmds_conflict ...[2024-07-21 11:50:25.339973] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:26.513 [2024-07-21 11:50:25.340050] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:08:26.513 [2024-07-21 11:50:25.340098] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:26.513 [2024-07-21 11:50:25.340178] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:26.513 [2024-07-21 11:50:25.340240] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:08:26.513 [2024-07-21 11:50:25.340278] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:08:26.513 passed 00:08:26.513 Test: test_scsi2_reserve_release ...passed 00:08:26.513 Test: test_pr_with_scsi2_reserve_release ...passed 00:08:26.513 00:08:26.513 [2024-07-21 11:50:25.340383] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:08:26.513 Run Summary: Type Total Ran Passed Failed Inactive 
00:08:26.513 suites 1 1 n/a 0 0 00:08:26.513 tests 7 7 7 0 0 00:08:26.513 asserts 257 257 257 0 n/a 00:08:26.513 00:08:26.513 Elapsed time = 0.002 seconds 00:08:26.513 00:08:26.513 real 0m0.165s 00:08:26.513 user 0m0.085s 00:08:26.513 sys 0m0.083s 00:08:26.513 11:50:25 unittest.unittest_scsi -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.513 11:50:25 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:08:26.513 ************************************ 00:08:26.513 END TEST unittest_scsi 00:08:26.513 ************************************ 00:08:26.773 11:50:25 unittest -- unit/unittest.sh@278 -- # uname -s 00:08:26.773 11:50:25 unittest -- unit/unittest.sh@278 -- # '[' Linux = Linux ']' 00:08:26.773 11:50:25 unittest -- unit/unittest.sh@279 -- # run_test unittest_sock unittest_sock 00:08:26.773 11:50:25 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:26.773 11:50:25 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.773 11:50:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:26.773 ************************************ 00:08:26.773 START TEST unittest_sock 00:08:26.773 ************************************ 00:08:26.773 11:50:25 unittest.unittest_sock -- common/autotest_common.sh@1121 -- # unittest_sock 00:08:26.773 11:50:25 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:08:26.773 00:08:26.773 00:08:26.773 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.773 http://cunit.sourceforge.net/ 00:08:26.773 00:08:26.773 00:08:26.773 Suite: sock 00:08:26.773 Test: posix_sock ...passed 00:08:26.773 Test: ut_sock ...passed 00:08:26.773 Test: posix_sock_group ...passed 00:08:26.773 Test: ut_sock_group ...passed 00:08:26.773 Test: posix_sock_group_fairness ...passed 00:08:26.773 Test: _posix_sock_close ...passed 00:08:26.773 Test: sock_get_default_opts ...passed 00:08:26.773 Test: ut_sock_impl_get_set_opts ...passed 00:08:26.773 Test: posix_sock_impl_get_set_opts ...passed 00:08:26.773 Test: ut_sock_map ...passed 00:08:26.773 Test: override_impl_opts ...passed 00:08:26.773 Test: ut_sock_group_get_ctx ...passed 00:08:26.773 00:08:26.773 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.773 suites 1 1 n/a 0 0 00:08:26.773 tests 12 12 12 0 0 00:08:26.773 asserts 349 349 349 0 n/a 00:08:26.773 00:08:26.773 Elapsed time = 0.008 seconds 00:08:26.773 11:50:25 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:08:26.773 00:08:26.773 00:08:26.773 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.773 http://cunit.sourceforge.net/ 00:08:26.773 00:08:26.773 00:08:26.773 Suite: posix 00:08:26.773 Test: flush ...passed 00:08:26.773 00:08:26.773 Run Summary: Type Total Ran Passed Failed Inactive 00:08:26.773 suites 1 1 n/a 0 0 00:08:26.773 tests 1 1 1 0 0 00:08:26.773 asserts 28 28 28 0 n/a 00:08:26.773 00:08:26.773 Elapsed time = 0.000 seconds 00:08:26.773 11:50:25 unittest.unittest_sock -- unit/unittest.sh@128 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:26.773 00:08:26.773 real 0m0.104s 00:08:26.773 user 0m0.036s 00:08:26.773 sys 0m0.044s 00:08:26.773 11:50:25 unittest.unittest_sock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.773 11:50:25 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x 00:08:26.773 ************************************ 00:08:26.773 END TEST 
unittest_sock 00:08:26.773 ************************************ 00:08:26.773 11:50:25 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:26.773 11:50:25 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:26.773 11:50:25 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.773 11:50:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:26.773 ************************************ 00:08:26.773 START TEST unittest_thread 00:08:26.773 ************************************ 00:08:26.773 11:50:25 unittest.unittest_thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:08:26.773 00:08:26.773 00:08:26.773 CUnit - A unit testing framework for C - Version 2.1-3 00:08:26.773 http://cunit.sourceforge.net/ 00:08:26.773 00:08:26.773 00:08:26.773 Suite: io_channel 00:08:26.773 Test: thread_alloc ...passed 00:08:26.773 Test: thread_send_msg ...passed 00:08:26.773 Test: thread_poller ...passed 00:08:26.773 Test: poller_pause ...passed 00:08:26.773 Test: thread_for_each ...passed 00:08:26.773 Test: for_each_channel_remove ...passed 00:08:26.773 Test: for_each_channel_unreg ...[2024-07-21 11:50:25.606423] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2173:spdk_io_device_register: *ERROR*: io_device 0x7ffec0ae2790 already registered (old:0x613000000200 new:0x6130000003c0) 00:08:26.773 passed 00:08:26.773 Test: thread_name ...passed 00:08:26.773 Test: channel ...[2024-07-21 11:50:25.611511] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2307:spdk_get_io_channel: *ERROR*: could not find io_device 0x557b73bc7c80 00:08:26.773 passed 00:08:26.773 Test: channel_destroy_races ...passed 00:08:26.773 Test: thread_exit_test ...[2024-07-21 11:50:25.617575] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 635:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:08:26.773 passed 00:08:26.773 Test: thread_update_stats_test ...passed 00:08:26.773 Test: nested_channel ...passed 00:08:26.773 Test: device_unregister_and_thread_exit_race ...passed 00:08:26.773 Test: cache_closest_timed_poller ...passed 00:08:26.773 Test: multi_timed_pollers_have_same_expiration ...passed 00:08:26.773 Test: io_device_lookup ...passed 00:08:26.773 Test: spdk_spin ...[2024-07-21 11:50:25.629351] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3071:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:26.773 [2024-07-21 11:50:25.629701] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffec0ae2780 00:08:26.773 [2024-07-21 11:50:25.630050] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3109:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:08:26.773 [2024-07-21 11:50:25.631974] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:08:26.773 [2024-07-21 11:50:25.632187] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffec0ae2780 00:08:26.773 [2024-07-21 11:50:25.632352] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:26.773 [2024-07-21 11:50:25.632546] 
/home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffec0ae2780 00:08:26.773 [2024-07-21 11:50:25.632726] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3092:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:08:26.773 [2024-07-21 11:50:25.632915] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffec0ae2780 00:08:26.773 [2024-07-21 11:50:25.633104] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3053:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:08:26.773 [2024-07-21 11:50:25.633308] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x7ffec0ae2780 00:08:26.773 passed 00:08:26.773 Test: for_each_channel_and_thread_exit_race ...passed 00:08:27.033 Test: for_each_thread_and_thread_exit_race ...passed 00:08:27.033 00:08:27.033 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.033 suites 1 1 n/a 0 0 00:08:27.033 tests 20 20 20 0 0 00:08:27.033 asserts 409 409 409 0 n/a 00:08:27.033 00:08:27.033 Elapsed time = 0.052 seconds 00:08:27.033 00:08:27.033 real 0m0.097s 00:08:27.033 user 0m0.057s 00:08:27.033 sys 0m0.038s 00:08:27.033 11:50:25 unittest.unittest_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.033 11:50:25 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:08:27.033 ************************************ 00:08:27.033 END TEST unittest_thread 00:08:27.033 ************************************ 00:08:27.033 11:50:25 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:27.033 11:50:25 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.033 11:50:25 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.033 11:50:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:27.033 ************************************ 00:08:27.033 START TEST unittest_iobuf 00:08:27.033 ************************************ 00:08:27.033 11:50:25 unittest.unittest_iobuf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:08:27.033 00:08:27.033 00:08:27.033 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.033 http://cunit.sourceforge.net/ 00:08:27.033 00:08:27.033 00:08:27.033 Suite: io_channel 00:08:27.033 Test: iobuf ...passed 00:08:27.033 Test: iobuf_cache ...[2024-07-21 11:50:25.738857] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:27.033 [2024-07-21 11:50:25.739194] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:27.033 [2024-07-21 11:50:25.739353] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 372:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:08:27.033 [2024-07-21 11:50:25.739407] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 375:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
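
The iobuf_cache case deliberately under-sizes the shared buffer pools, which is why the log points at spdk_iobuf_opts.small_pool_count and scripts/calc-iobuf.py. Outside the unit test, an application would raise the pool counts before the iobuf pools are created; the sketch below only shows the general shape, and the exact spdk_iobuf_get_opts()/spdk_iobuf_set_opts() prototypes differ between SPDK releases, so treat them as assumptions to verify against the headers of the build in use.

```c
/* Sketch only: the field name comes from the log message above; check the
 * prototypes in include/spdk/thread.h for the SPDK release actually used. */
#include "spdk/thread.h"

static int bump_iobuf_pools(void)
{
    struct spdk_iobuf_opts opts;

    spdk_iobuf_get_opts(&opts);          /* assumption: single-argument variant */
    opts.small_pool_count = 8192;        /* sized per scripts/calc-iobuf.py guidance */
    opts.large_pool_count = 1024;
    /* must be applied before the iobuf pools are created (spdk_iobuf_initialize) */
    return spdk_iobuf_set_opts(&opts);
}
```
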
00:08:27.033 [2024-07-21 11:50:25.739490] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 360:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:08:27.033 [2024-07-21 11:50:25.739539] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 363:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:08:27.033 passed 00:08:27.033 00:08:27.033 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.033 suites 1 1 n/a 0 0 00:08:27.033 tests 2 2 2 0 0 00:08:27.033 asserts 107 107 107 0 n/a 00:08:27.033 00:08:27.033 Elapsed time = 0.006 seconds 00:08:27.033 00:08:27.033 real 0m0.039s 00:08:27.033 user 0m0.023s 00:08:27.033 sys 0m0.017s 00:08:27.033 11:50:25 unittest.unittest_iobuf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.033 11:50:25 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:08:27.033 ************************************ 00:08:27.033 END TEST unittest_iobuf 00:08:27.033 ************************************ 00:08:27.033 11:50:25 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:08:27.033 11:50:25 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.033 11:50:25 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.033 11:50:25 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:27.033 ************************************ 00:08:27.033 START TEST unittest_util 00:08:27.033 ************************************ 00:08:27.033 11:50:25 unittest.unittest_util -- common/autotest_common.sh@1121 -- # unittest_util 00:08:27.033 11:50:25 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:08:27.033 00:08:27.033 00:08:27.033 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.033 http://cunit.sourceforge.net/ 00:08:27.033 00:08:27.033 00:08:27.033 Suite: base64 00:08:27.033 Test: test_base64_get_encoded_strlen ...passed 00:08:27.033 Test: test_base64_get_decoded_len ...passed 00:08:27.033 Test: test_base64_encode ...passed 00:08:27.033 Test: test_base64_decode ...passed 00:08:27.033 Test: test_base64_urlsafe_encode ...passed 00:08:27.033 Test: test_base64_urlsafe_decode ...passed 00:08:27.033 00:08:27.033 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.033 suites 1 1 n/a 0 0 00:08:27.033 tests 6 6 6 0 0 00:08:27.033 asserts 112 112 112 0 n/a 00:08:27.033 00:08:27.033 Elapsed time = 0.000 seconds 00:08:27.033 11:50:25 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:08:27.033 00:08:27.033 00:08:27.033 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.033 http://cunit.sourceforge.net/ 00:08:27.033 00:08:27.033 00:08:27.033 Suite: bit_array 00:08:27.033 Test: test_1bit ...passed 00:08:27.033 Test: test_64bit ...passed 00:08:27.033 Test: test_find ...passed 00:08:27.033 Test: test_resize ...passed 00:08:27.033 Test: test_errors ...passed 00:08:27.033 Test: test_count ...passed 00:08:27.033 Test: test_mask_store_load ...passed 00:08:27.033 Test: test_mask_clear ...passed 00:08:27.033 00:08:27.033 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.033 suites 1 1 n/a 0 0 00:08:27.033 tests 8 8 8 0 0 00:08:27.033 asserts 5075 5075 5075 0 n/a 00:08:27.033 00:08:27.033 Elapsed time = 0.002 seconds 00:08:27.033 11:50:25 
unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:08:27.033 00:08:27.033 00:08:27.033 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.033 http://cunit.sourceforge.net/ 00:08:27.033 00:08:27.033 00:08:27.033 Suite: cpuset 00:08:27.033 Test: test_cpuset ...passed 00:08:27.033 Test: test_cpuset_parse ...[2024-07-21 11:50:25.878305] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:08:27.033 [2024-07-21 11:50:25.878730] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:08:27.033 [2024-07-21 11:50:25.878863] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:08:27.033 [2024-07-21 11:50:25.878976] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:08:27.033 [2024-07-21 11:50:25.879039] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:08:27.033 [2024-07-21 11:50:25.879105] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:08:27.033 [2024-07-21 11:50:25.879156] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:08:27.033 [2024-07-21 11:50:25.879228] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:08:27.033 passed 00:08:27.033 Test: test_cpuset_fmt ...passed 00:08:27.033 00:08:27.033 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.033 suites 1 1 n/a 0 0 00:08:27.033 tests 3 3 3 0 0 00:08:27.033 asserts 65 65 65 0 n/a 00:08:27.033 00:08:27.033 Elapsed time = 0.003 seconds 00:08:27.033 11:50:25 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:08:27.304 00:08:27.304 00:08:27.304 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.304 http://cunit.sourceforge.net/ 00:08:27.304 00:08:27.304 00:08:27.304 Suite: crc16 00:08:27.304 Test: test_crc16_t10dif ...passed 00:08:27.304 Test: test_crc16_t10dif_seed ...passed 00:08:27.304 Test: test_crc16_t10dif_copy ...passed 00:08:27.304 00:08:27.304 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.304 suites 1 1 n/a 0 0 00:08:27.304 tests 3 3 3 0 0 00:08:27.304 asserts 5 5 5 0 n/a 00:08:27.304 00:08:27.304 Elapsed time = 0.000 seconds 00:08:27.304 11:50:25 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:08:27.304 00:08:27.304 00:08:27.304 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.304 http://cunit.sourceforge.net/ 00:08:27.305 00:08:27.305 00:08:27.305 Suite: crc32_ieee 00:08:27.305 Test: test_crc32_ieee ...passed 00:08:27.305 00:08:27.305 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.305 suites 1 1 n/a 0 0 00:08:27.305 tests 1 1 1 0 0 00:08:27.305 asserts 1 1 1 0 n/a 00:08:27.305 00:08:27.305 Elapsed time = 0.000 seconds 00:08:27.305 11:50:25 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:08:27.305 00:08:27.305 00:08:27.305 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.305 
http://cunit.sourceforge.net/ 00:08:27.305 00:08:27.305 00:08:27.305 Suite: crc32c 00:08:27.305 Test: test_crc32c ...passed 00:08:27.305 Test: test_crc32c_nvme ...passed 00:08:27.305 00:08:27.305 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.305 suites 1 1 n/a 0 0 00:08:27.305 tests 2 2 2 0 0 00:08:27.305 asserts 16 16 16 0 n/a 00:08:27.305 00:08:27.305 Elapsed time = 0.000 seconds 00:08:27.305 11:50:25 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:08:27.305 00:08:27.305 00:08:27.305 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.305 http://cunit.sourceforge.net/ 00:08:27.305 00:08:27.305 00:08:27.305 Suite: crc64 00:08:27.305 Test: test_crc64_nvme ...passed 00:08:27.305 00:08:27.305 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.305 suites 1 1 n/a 0 0 00:08:27.305 tests 1 1 1 0 0 00:08:27.305 asserts 4 4 4 0 n/a 00:08:27.305 00:08:27.305 Elapsed time = 0.000 seconds 00:08:27.305 11:50:26 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:08:27.305 00:08:27.305 00:08:27.305 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.305 http://cunit.sourceforge.net/ 00:08:27.305 00:08:27.305 00:08:27.305 Suite: string 00:08:27.305 Test: test_parse_ip_addr ...passed 00:08:27.305 Test: test_str_chomp ...passed 00:08:27.305 Test: test_parse_capacity ...passed 00:08:27.305 Test: test_sprintf_append_realloc ...passed 00:08:27.305 Test: test_strtol ...passed 00:08:27.305 Test: test_strtoll ...passed 00:08:27.305 Test: test_strarray ...passed 00:08:27.305 Test: test_strcpy_replace ...passed 00:08:27.305 00:08:27.305 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.305 suites 1 1 n/a 0 0 00:08:27.305 tests 8 8 8 0 0 00:08:27.305 asserts 161 161 161 0 n/a 00:08:27.305 00:08:27.305 Elapsed time = 0.000 seconds 00:08:27.305 11:50:26 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:08:27.305 00:08:27.305 00:08:27.305 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.305 http://cunit.sourceforge.net/ 00:08:27.305 00:08:27.305 00:08:27.305 Suite: dif 00:08:27.305 Test: dif_generate_and_verify_test ...[2024-07-21 11:50:26.051925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:27.305 [2024-07-21 11:50:26.052538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:27.305 [2024-07-21 11:50:26.052850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:08:27.305 [2024-07-21 11:50:26.053145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:27.305 [2024-07-21 11:50:26.053482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:27.305 [2024-07-21 11:50:26.053784] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:08:27.305 passed 00:08:27.305 Test: dif_disable_check_test ...[2024-07-21 11:50:26.054832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, 
Actual=ffff 00:08:27.305 [2024-07-21 11:50:26.055162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:27.305 [2024-07-21 11:50:26.055455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:08:27.305 passed 00:08:27.305 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-21 11:50:26.056541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:08:27.305 [2024-07-21 11:50:26.056860] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:08:27.305 [2024-07-21 11:50:26.057181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:08:27.305 [2024-07-21 11:50:26.057548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:08:27.305 [2024-07-21 11:50:26.057885] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:27.305 [2024-07-21 11:50:26.058206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:27.305 [2024-07-21 11:50:26.058523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:27.305 [2024-07-21 11:50:26.058851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:08:27.305 [2024-07-21 11:50:26.059171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:27.305 [2024-07-21 11:50:26.059516] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:27.305 [2024-07-21 11:50:26.059851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:08:27.305 passed 00:08:27.305 Test: dif_apptag_mask_test ...passed 00:08:27.305 Test: dif_sec_512_md_0_error_test ...passed 00:08:27.305 Test: dif_sec_4096_md_0_error_test ...passed 00:08:27.305 Test: dif_sec_4100_md_128_error_test ...passed 00:08:27.305 Test: dif_guard_seed_test ...passed 00:08:27.305 Test: dif_guard_value_test ...passed 00:08:27.305 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:08:27.305 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...[2024-07-21 11:50:26.060199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:27.305 [2024-07-21 11:50:26.060557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:08:27.305 [2024-07-21 11:50:26.060880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
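
The dif_ut failures above are deliberate mismatches in the three fields of the 8-byte protection-information tuple that DIF appends to each block: a CRC-16 guard, a 16-bit application tag and a 32-bit reference tag (typically the low bits of the starting LBA). Below is a minimal self-contained sketch of that layout and of the CRC-16/T10-DIF guard; the struct and function names are invented for the example, and only the field layout and the 0x8BB7 polynomial are standard, not SPDK's lib/util/dif.c.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct t10_dif {
    uint16_t guard;    /* CRC-16/T10-DIF over the data block */
    uint16_t app_tag;  /* opaque to the device, skipped when 0xffff */
    uint32_t ref_tag;  /* usually the low 32 bits of the starting LBA */
};

/* bitwise CRC-16/T10-DIF: poly 0x8BB7, init 0, no reflection, no final xor */
static uint16_t crc16_t10dif(uint16_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    while (len--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    uint8_t block[512];
    struct t10_dif dif;

    memset(block, 0xA5, sizeof(block));
    dif.guard = crc16_t10dif(0, block, sizeof(block));
    dif.app_tag = 0x1234;
    dif.ref_tag = 23;   /* e.g. LBA=23, as in the messages above */

    /* verify side: recompute and compare, as the dif_ut mismatch errors illustrate */
    printf("guard ok: %d\n", dif.guard == crc16_t10dif(0, block, sizeof(block)));
    /* standard check vector: CRC-16/T10-DIF("123456789") == 0xd0db */
    printf("check: 0x%04x\n", crc16_t10dif(0, "123456789", 9));
    return 0;
}
```
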
00:08:27.305 [2024-07-21 11:50:26.060966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:27.305 [2024-07-21 11:50:26.061033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:08:27.305 [2024-07-21 11:50:26.061101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:27.305 [2024-07-21 11:50:26.061172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:08:27.305 passed 00:08:27.305 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:08:27.305 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:08:27.305 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:27.305 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:08:27.305 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:08:27.305 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:08:27.305 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:27.305 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-21 11:50:26.106988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd6c, Actual=fd4c 00:08:27.305 [2024-07-21 11:50:26.109499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe01, Actual=fe21 00:08:27.305 [2024-07-21 11:50:26.111971] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.305 [2024-07-21 11:50:26.114430] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.305 [2024-07-21 11:50:26.116930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.305 [2024-07-21 11:50:26.119388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.305 [2024-07-21 11:50:26.121845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=e561 00:08:27.305 [2024-07-21 11:50:26.123278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=652d 00:08:27.305 [2024-07-21 11:50:26.124712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, 
Expected=1ab753cd, Actual=1ab753ed 00:08:27.305 [2024-07-21 11:50:26.127192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574640, Actual=38574660 00:08:27.305 [2024-07-21 11:50:26.129684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.305 [2024-07-21 11:50:26.132143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.305 [2024-07-21 11:50:26.134594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.305 [2024-07-21 11:50:26.137042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.305 [2024-07-21 11:50:26.139503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=3670cfce 00:08:27.306 [2024-07-21 11:50:26.140941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=15371287 00:08:27.306 [2024-07-21 11:50:26.142408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.306 [2024-07-21 11:50:26.144868] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:08:27.306 [2024-07-21 11:50:26.147316] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.306 [2024-07-21 11:50:26.149779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.306 [2024-07-21 11:50:26.152241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:08:27.306 [2024-07-21 11:50:26.154745] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:08:27.306 [2024-07-21 11:50:26.157240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.306 [2024-07-21 11:50:26.158699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=9b234f1a3ebd14c9 00:08:27.306 passed 00:08:27.306 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-21 11:50:26.159243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:27.306 [2024-07-21 11:50:26.159543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:27.306 [2024-07-21 11:50:26.159851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.606 [2024-07-21 11:50:26.160167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 
00:08:27.606 [2024-07-21 11:50:26.160510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.606 [2024-07-21 11:50:26.160820] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.606 [2024-07-21 11:50:26.161130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e561 00:08:27.606 [2024-07-21 11:50:26.161363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=652d 00:08:27.606 [2024-07-21 11:50:26.161597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:08:27.606 [2024-07-21 11:50:26.161908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:08:27.606 [2024-07-21 11:50:26.162228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.606 [2024-07-21 11:50:26.162536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.606 [2024-07-21 11:50:26.162859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.606 [2024-07-21 11:50:26.163154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.606 [2024-07-21 11:50:26.163464] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3670cfce 00:08:27.606 [2024-07-21 11:50:26.163682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=15371287 00:08:27.606 [2024-07-21 11:50:26.163926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.606 [2024-07-21 11:50:26.164241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:08:27.606 [2024-07-21 11:50:26.164549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.606 [2024-07-21 11:50:26.164842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.606 [2024-07-21 11:50:26.165154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.606 [2024-07-21 11:50:26.165447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.606 [2024-07-21 11:50:26.165767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.606 [2024-07-21 11:50:26.166017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=88010a2d4837a266, Actual=9b234f1a3ebd14c9 00:08:27.606 passed 00:08:27.606 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-21 11:50:26.166285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:27.606 [2024-07-21 11:50:26.166611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:27.606 [2024-07-21 11:50:26.166920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.606 [2024-07-21 11:50:26.167233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.606 [2024-07-21 11:50:26.167560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.606 [2024-07-21 11:50:26.167874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.606 [2024-07-21 11:50:26.168192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e561 00:08:27.606 [2024-07-21 11:50:26.168431] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=652d 00:08:27.606 [2024-07-21 11:50:26.168665] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:08:27.606 [2024-07-21 11:50:26.168997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:08:27.606 [2024-07-21 11:50:26.169313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.606 [2024-07-21 11:50:26.169616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.606 [2024-07-21 11:50:26.169921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.606 [2024-07-21 11:50:26.170232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.606 [2024-07-21 11:50:26.170549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3670cfce 00:08:27.606 [2024-07-21 11:50:26.170792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=15371287 00:08:27.606 [2024-07-21 11:50:26.171036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.607 [2024-07-21 11:50:26.171343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:08:27.607 [2024-07-21 11:50:26.171646] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 
11:50:26.171965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.172288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.607 [2024-07-21 11:50:26.172596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.607 [2024-07-21 11:50:26.172930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.607 [2024-07-21 11:50:26.173159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9b234f1a3ebd14c9 00:08:27.607 passed 00:08:27.607 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-21 11:50:26.173437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:27.607 [2024-07-21 11:50:26.173767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:27.607 [2024-07-21 11:50:26.174081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.174387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.174742] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.175045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.175356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e561 00:08:27.607 [2024-07-21 11:50:26.175586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=652d 00:08:27.607 [2024-07-21 11:50:26.175808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:08:27.607 [2024-07-21 11:50:26.176116] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:08:27.607 [2024-07-21 11:50:26.176456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.176762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.177071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.177379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.177684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: 
Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3670cfce 00:08:27.607 [2024-07-21 11:50:26.177920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=15371287 00:08:27.607 [2024-07-21 11:50:26.178158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.607 [2024-07-21 11:50:26.178465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:08:27.607 [2024-07-21 11:50:26.178783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.179099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.179404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.607 [2024-07-21 11:50:26.179713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.607 [2024-07-21 11:50:26.180048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.607 [2024-07-21 11:50:26.180294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9b234f1a3ebd14c9 00:08:27.607 passed 00:08:27.607 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-21 11:50:26.180571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:27.607 [2024-07-21 11:50:26.180878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:27.607 [2024-07-21 11:50:26.181197] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.181509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.181840] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.182153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.182458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e561 00:08:27.607 [2024-07-21 11:50:26.182693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=652d 00:08:27.607 passed 00:08:27.607 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-21 11:50:26.182958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:08:27.607 [2024-07-21 11:50:26.183270] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:08:27.607 [2024-07-21 11:50:26.183600] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.183909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.184259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.184571] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.184882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3670cfce 00:08:27.607 [2024-07-21 11:50:26.185111] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=15371287 00:08:27.607 [2024-07-21 11:50:26.185394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.607 [2024-07-21 11:50:26.185697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:08:27.607 [2024-07-21 11:50:26.186004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.186314] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.186636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.607 [2024-07-21 11:50:26.186952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.607 [2024-07-21 11:50:26.187285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.607 [2024-07-21 11:50:26.187512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9b234f1a3ebd14c9 00:08:27.607 passed 00:08:27.607 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-21 11:50:26.187782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:27.607 [2024-07-21 11:50:26.188092] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe01, Actual=fe21 00:08:27.607 [2024-07-21 11:50:26.188411] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.188719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.189051] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.189359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.189671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e561 00:08:27.607 [2024-07-21 11:50:26.189895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=652d 00:08:27.607 passed 00:08:27.607 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-21 11:50:26.190151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:08:27.607 [2024-07-21 11:50:26.190459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574640, Actual=38574660 00:08:27.607 [2024-07-21 11:50:26.190805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.191118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.191433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.191738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.607 [2024-07-21 11:50:26.192041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3670cfce 00:08:27.607 [2024-07-21 11:50:26.192268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=15371287 00:08:27.607 [2024-07-21 11:50:26.192553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.607 [2024-07-21 11:50:26.192859] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a246, Actual=88010a2d4837a266 00:08:27.607 [2024-07-21 11:50:26.193173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.193472] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.607 [2024-07-21 11:50:26.193783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.607 [2024-07-21 11:50:26.194090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.607 [2024-07-21 11:50:26.194426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.607 [2024-07-21 11:50:26.194689] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=9b234f1a3ebd14c9 00:08:27.607 passed 00:08:27.607 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:08:27.607 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:27.607 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:27.607 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:27.607 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:27.607 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:27.607 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:27.607 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:27.608 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:27.608 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-21 11:50:26.239064] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd6c, Actual=fd4c 00:08:27.608 [2024-07-21 11:50:26.240245] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=81cb, Actual=81eb 00:08:27.608 [2024-07-21 11:50:26.241383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.242504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.243642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.608 [2024-07-21 11:50:26.244755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.608 [2024-07-21 11:50:26.245861] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=e561 00:08:27.608 [2024-07-21 11:50:26.246985] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=4201 00:08:27.608 [2024-07-21 11:50:26.248103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753cd, Actual=1ab753ed 00:08:27.608 [2024-07-21 11:50:26.249230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=b81c1891, Actual=b81c18b1 00:08:27.608 [2024-07-21 11:50:26.250354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.251518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.252645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.608 [2024-07-21 11:50:26.253764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.608 [2024-07-21 11:50:26.254900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=3670cfce 
00:08:27.608 [2024-07-21 11:50:26.256039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=25f035c9 00:08:27.608 [2024-07-21 11:50:26.257164] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.608 [2024-07-21 11:50:26.258308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=501b9cdac204a93a, Actual=501b9cdac204a91a 00:08:27.608 [2024-07-21 11:50:26.259435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.260575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.261706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:08:27.608 [2024-07-21 11:50:26.262846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:08:27.608 [2024-07-21 11:50:26.263960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.608 passed 00:08:27.608 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-21 11:50:26.265143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=cbd9fbcc1f6c8d97 00:08:27.608 [2024-07-21 11:50:26.265499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:27.608 [2024-07-21 11:50:26.265779] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=75d0, Actual=75f0 00:08:27.608 [2024-07-21 11:50:26.266049] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.266308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.266630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.608 [2024-07-21 11:50:26.266943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.608 [2024-07-21 11:50:26.267216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e561 00:08:27.608 [2024-07-21 11:50:26.267492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=b61a 00:08:27.608 [2024-07-21 11:50:26.267760] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:08:27.608 [2024-07-21 11:50:26.268034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5a2a3913, Actual=5a2a3933 00:08:27.608 [2024-07-21 11:50:26.268338] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.268636] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.268906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.608 [2024-07-21 11:50:26.269182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.608 [2024-07-21 11:50:26.269436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3670cfce 00:08:27.608 [2024-07-21 11:50:26.269713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=c7c6144b 00:08:27.608 [2024-07-21 11:50:26.270009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.608 [2024-07-21 11:50:26.270287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b086083afd38a2a0, Actual=b086083afd38a280 00:08:27.608 [2024-07-21 11:50:26.270555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.270825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.271095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.608 [2024-07-21 11:50:26.271365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.608 [2024-07-21 11:50:26.271666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.608 [2024-07-21 11:50:26.271944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2b446f2c2050860d 00:08:27.608 passed 00:08:27.608 Test: dix_sec_512_md_0_error ...passed 00:08:27.608 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-21 11:50:26.272011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
00:08:27.608 passed 00:08:27.608 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:08:27.608 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:08:27.608 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:08:27.608 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:08:27.608 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:08:27.608 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:08:27.608 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:08:27.608 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:08:27.608 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-21 11:50:26.316053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd6c, Actual=fd4c 00:08:27.608 [2024-07-21 11:50:26.317231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=81cb, Actual=81eb 00:08:27.608 [2024-07-21 11:50:26.318346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.319466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.320631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.608 [2024-07-21 11:50:26.321757] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.608 [2024-07-21 11:50:26.322876] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=e561 00:08:27.608 [2024-07-21 11:50:26.323984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=4201 00:08:27.608 [2024-07-21 11:50:26.325099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753cd, Actual=1ab753ed 00:08:27.608 [2024-07-21 11:50:26.326198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=b81c1891, Actual=b81c18b1 00:08:27.608 [2024-07-21 11:50:26.327345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.328468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.329578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.608 [2024-07-21 11:50:26.330706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=7a 00:08:27.608 [2024-07-21 11:50:26.331819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=3670cfce 00:08:27.608 [2024-07-21 11:50:26.332936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=25f035c9 00:08:27.608 [2024-07-21 11:50:26.334078] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.608 [2024-07-21 11:50:26.335196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=501b9cdac204a93a, Actual=501b9cdac204a91a 00:08:27.608 [2024-07-21 11:50:26.336329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.337443] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.338562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:08:27.608 [2024-07-21 11:50:26.339701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=20005a 00:08:27.608 [2024-07-21 11:50:26.340855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.608 passed 00:08:27.608 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-21 11:50:26.341982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=cbd9fbcc1f6c8d97 00:08:27.608 [2024-07-21 11:50:26.342366] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd6c, Actual=fd4c 00:08:27.608 [2024-07-21 11:50:26.342678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=75d0, Actual=75f0 00:08:27.608 [2024-07-21 11:50:26.342964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.343250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.608 [2024-07-21 11:50:26.343545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.608 [2024-07-21 11:50:26.343809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.608 [2024-07-21 11:50:26.344082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=e561 00:08:27.609 [2024-07-21 11:50:26.344364] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=b61a 00:08:27.609 [2024-07-21 11:50:26.344643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753cd, Actual=1ab753ed 00:08:27.609 [2024-07-21 11:50:26.344911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5a2a3913, Actual=5a2a3933 00:08:27.609 [2024-07-21 11:50:26.345198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.609 [2024-07-21 11:50:26.345480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.609 [2024-07-21 11:50:26.345735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.609 [2024-07-21 11:50:26.346006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=78 00:08:27.609 [2024-07-21 11:50:26.346274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=3670cfce 00:08:27.609 [2024-07-21 11:50:26.346540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=c7c6144b 00:08:27.609 [2024-07-21 11:50:26.346851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20f3, Actual=a576a7728ecc20d3 00:08:27.609 [2024-07-21 11:50:26.347122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b086083afd38a2a0, Actual=b086083afd38a280 00:08:27.609 [2024-07-21 11:50:26.347386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.609 [2024-07-21 11:50:26.347663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=a8 00:08:27.609 [2024-07-21 11:50:26.347932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.609 [2024-07-21 11:50:26.348230] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=200058 00:08:27.609 [2024-07-21 11:50:26.348572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=b8d8c0f32a8412ac 00:08:27.609 [2024-07-21 11:50:26.348897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=2b446f2c2050860d 00:08:27.609 passed 00:08:27.609 Test: set_md_interleave_iovs_test ...passed 00:08:27.609 Test: set_md_interleave_iovs_split_test ...passed 00:08:27.609 Test: dif_generate_stream_pi_16_test ...passed 00:08:27.609 Test: dif_generate_stream_test ...passed 00:08:27.609 Test: set_md_interleave_iovs_alignment_test ...[2024-07-21 11:50:26.356319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:08:27.609 passed 00:08:27.609 Test: dif_generate_split_test ...passed 00:08:27.609 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:08:27.609 Test: dif_verify_split_test ...passed 00:08:27.609 Test: dif_verify_stream_multi_segments_test ...passed 00:08:27.609 Test: update_crc32c_pi_16_test ...passed 00:08:27.609 Test: update_crc32c_test ...passed 00:08:27.609 Test: dif_update_crc32c_split_test ...passed 00:08:27.609 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:08:27.609 Test: get_range_with_md_test ...passed 00:08:27.609 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:08:27.609 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:08:27.609 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:27.609 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:08:27.609 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:08:27.609 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:08:27.609 Test: dif_generate_and_verify_unmap_test ...passed 00:08:27.609 00:08:27.609 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.609 suites 1 1 n/a 0 0 00:08:27.609 tests 79 79 79 0 0 00:08:27.609 asserts 3584 3584 3584 0 n/a 00:08:27.609 00:08:27.609 Elapsed time = 0.342 seconds 00:08:27.609 11:50:26 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:08:27.609 00:08:27.609 00:08:27.609 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.609 http://cunit.sourceforge.net/ 00:08:27.609 00:08:27.609 00:08:27.609 Suite: iov 00:08:27.609 Test: test_single_iov ...passed 00:08:27.609 Test: test_simple_iov ...passed 00:08:27.609 Test: test_complex_iov ...passed 00:08:27.609 Test: test_iovs_to_buf ...passed 00:08:27.609 Test: test_buf_to_iovs ...passed 00:08:27.609 Test: test_memset ...passed 00:08:27.609 Test: test_iov_one ...passed 00:08:27.609 Test: test_iov_xfer ...passed 00:08:27.609 00:08:27.609 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.609 suites 1 1 n/a 0 0 00:08:27.609 tests 8 8 8 0 0 00:08:27.609 asserts 156 156 156 0 n/a 00:08:27.609 00:08:27.609 Elapsed time = 0.000 seconds 00:08:27.609 11:50:26 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:08:27.609 00:08:27.609 00:08:27.609 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.609 http://cunit.sourceforge.net/ 00:08:27.609 00:08:27.609 00:08:27.609 Suite: math 00:08:27.609 Test: test_serial_number_arithmetic ...passed 00:08:27.609 Suite: erase 00:08:27.609 Test: test_memset_s ...passed 00:08:27.609 00:08:27.609 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.609 suites 2 2 n/a 0 0 00:08:27.609 tests 2 2 2 0 0 00:08:27.609 asserts 18 18 18 0 n/a 00:08:27.609 00:08:27.609 Elapsed time = 0.000 seconds 00:08:27.868 11:50:26 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:08:27.868 00:08:27.868 00:08:27.868 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.868 http://cunit.sourceforge.net/ 00:08:27.868 00:08:27.868 00:08:27.868 Suite: pipe 00:08:27.868 Test: test_create_destroy ...passed 00:08:27.868 Test: test_write_get_buffer ...passed 00:08:27.868 Test: test_write_advance ...passed 00:08:27.868 Test: test_read_get_buffer ...passed 00:08:27.868 Test: test_read_advance ...passed 00:08:27.868 Test: 
test_data ...passed 00:08:27.868 00:08:27.868 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.868 suites 1 1 n/a 0 0 00:08:27.868 tests 6 6 6 0 0 00:08:27.868 asserts 251 251 251 0 n/a 00:08:27.868 00:08:27.868 Elapsed time = 0.000 seconds 00:08:27.868 11:50:26 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:08:27.868 00:08:27.868 00:08:27.868 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.868 http://cunit.sourceforge.net/ 00:08:27.868 00:08:27.868 00:08:27.868 Suite: xor 00:08:27.868 Test: test_xor_gen ...passed 00:08:27.868 00:08:27.868 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.868 suites 1 1 n/a 0 0 00:08:27.868 tests 1 1 1 0 0 00:08:27.868 asserts 17 17 17 0 n/a 00:08:27.868 00:08:27.868 Elapsed time = 0.007 seconds 00:08:27.868 00:08:27.868 real 0m0.743s 00:08:27.868 user 0m0.565s 00:08:27.868 sys 0m0.181s 00:08:27.868 11:50:26 unittest.unittest_util -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.868 11:50:26 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:08:27.868 ************************************ 00:08:27.868 END TEST unittest_util 00:08:27.868 ************************************ 00:08:27.868 11:50:26 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:08:27.868 11:50:26 unittest -- unit/unittest.sh@285 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:27.868 11:50:26 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.868 11:50:26 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.868 11:50:26 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:27.868 ************************************ 00:08:27.868 START TEST unittest_vhost 00:08:27.868 ************************************ 00:08:27.868 11:50:26 unittest.unittest_vhost -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:08:27.868 00:08:27.868 00:08:27.868 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.868 http://cunit.sourceforge.net/ 00:08:27.868 00:08:27.868 00:08:27.868 Suite: vhost_suite 00:08:27.868 Test: desc_to_iov_test ...[2024-07-21 11:50:26.623200] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:08:27.868 passed 00:08:27.868 Test: create_controller_test ...[2024-07-21 11:50:26.628440] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:27.868 [2024-07-21 11:50:26.628725] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:08:27.868 [2024-07-21 11:50:26.629002] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:08:27.868 [2024-07-21 11:50:26.629231] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:08:27.868 [2024-07-21 11:50:26.629469] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:08:27.869 [2024-07-21 11:50:26.630029] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for 
controller is too long: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 00:08:27.869 [2024-07-21 11:50:26.631453] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 137:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:08:27.869 passed 00:08:27.869 Test: session_find_by_vid_test ...passed 00:08:27.869 Test: remove_controller_test ...[2024-07-21 11:50:26.633622] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1866:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:08:27.869 passed 00:08:27.869 Test: vq_avail_ring_get_test ...passed 00:08:27.869 Test: vq_packed_ring_test ...passed 00:08:27.869 Test: vhost_blk_construct_test ...passed 00:08:27.869 00:08:27.869 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.869 suites 1 1 n/a 0 0 00:08:27.869 tests 7 7 7 0 0 00:08:27.869 asserts 147 147 147 0 n/a 00:08:27.869 00:08:27.869 Elapsed time = 0.013 seconds 00:08:27.869 00:08:27.869 real 0m0.050s 00:08:27.869 user 0m0.025s 00:08:27.869 sys 0m0.024s 00:08:27.869 11:50:26 unittest.unittest_vhost -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:27.869 11:50:26 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x 00:08:27.869 ************************************ 00:08:27.869 END TEST unittest_vhost 00:08:27.869 ************************************ 00:08:27.869 11:50:26 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:27.869 11:50:26 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:27.869 11:50:26 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.869 11:50:26 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:27.869 ************************************ 00:08:27.869 START TEST unittest_dma 00:08:27.869 ************************************ 00:08:27.869 11:50:26 unittest.unittest_dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:08:27.869 00:08:27.869 00:08:27.869 CUnit - A unit testing framework for C - Version 2.1-3 00:08:27.869 http://cunit.sourceforge.net/ 00:08:27.869 00:08:27.869 00:08:27.869 Suite: dma_suite 00:08:27.869 Test: test_dma ...[2024-07-21 11:50:26.728623] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:08:27.869 passed 00:08:27.869 00:08:27.869 Run Summary: Type Total Ran Passed Failed Inactive 00:08:27.869 suites 1 1 n/a 0 0 00:08:27.869 tests 1 1 1 0 0 00:08:27.869 asserts 54 54 54 0 n/a 00:08:27.869 00:08:27.869 Elapsed time = 0.001 seconds 00:08:28.127 00:08:28.127 real 0m0.035s 00:08:28.127 user 0m0.020s 00:08:28.127 sys 0m0.015s 00:08:28.127 11:50:26 unittest.unittest_dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.127 11:50:26 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:08:28.127 ************************************ 00:08:28.127 END TEST unittest_dma 00:08:28.127 ************************************ 
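(An editor's aside on the vhost output above: the wall of 'x' characters is not corruption. It is the deliberately oversized controller name that create_controller_test feeds to vhost_user_dev_init, which then reports "Resulting socket path for controller is too long". The limit exists because a vhost-user controller is backed by a UNIX-domain socket, and sun_path in struct sockaddr_un is a fixed-size buffer, 108 bytes on Linux. The helper below is a hypothetical sketch of that length check, not SPDK's actual vhost code; check_socket_path, base_dir and ctrlr_name are made-up names used only for illustration.)

    /* Hypothetical sketch of the path-length check exercised above: a controller
     * name is only usable if "<base_dir><name>" still fits into sun_path. */
    #include <stdio.h>
    #include <sys/un.h>

    static int check_socket_path(const char *base_dir, const char *ctrlr_name)
    {
        struct sockaddr_un sa;
        int n = snprintf(sa.sun_path, sizeof(sa.sun_path), "%s%s", base_dir, ctrlr_name);

        if (n < 0 || (size_t)n >= sizeof(sa.sun_path)) {
            /* Mirrors the failure the unit test expects for the multi-kilobyte name. */
            fprintf(stderr, "Resulting socket path for controller is too long: %s\n", ctrlr_name);
            return -1;
        }
        return 0;
    }

(With sun_path at 108 bytes on Linux, a base directory such as "/var/tmp/" leaves roughly a hundred characters for the controller name, so the several-thousand-character name seen in the log can never map to a bindable socket and the test expects registration to fail.)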
00:08:28.127 11:50:26 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:08:28.127 11:50:26 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:28.127 11:50:26 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.127 11:50:26 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:28.127 ************************************ 00:08:28.127 START TEST unittest_init 00:08:28.127 ************************************ 00:08:28.127 11:50:26 unittest.unittest_init -- common/autotest_common.sh@1121 -- # unittest_init 00:08:28.127 11:50:26 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:08:28.127 00:08:28.127 00:08:28.127 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.127 http://cunit.sourceforge.net/ 00:08:28.127 00:08:28.127 00:08:28.127 Suite: subsystem_suite 00:08:28.127 Test: subsystem_sort_test_depends_on_single ...passed 00:08:28.127 Test: subsystem_sort_test_depends_on_multiple ...passed 00:08:28.127 Test: subsystem_sort_test_missing_dependency ...[2024-07-21 11:50:26.823936] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:08:28.127 [2024-07-21 11:50:26.824319] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:08:28.127 passed 00:08:28.127 00:08:28.127 Run Summary: Type Total Ran Passed Failed Inactive 00:08:28.127 suites 1 1 n/a 0 0 00:08:28.127 tests 3 3 3 0 0 00:08:28.127 asserts 20 20 20 0 n/a 00:08:28.127 00:08:28.127 Elapsed time = 0.001 seconds 00:08:28.127 00:08:28.127 real 0m0.038s 00:08:28.127 user 0m0.019s 00:08:28.127 sys 0m0.019s 00:08:28.127 11:50:26 unittest.unittest_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.127 11:50:26 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:08:28.127 ************************************ 00:08:28.127 END TEST unittest_init 00:08:28.127 ************************************ 00:08:28.127 11:50:26 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:28.127 11:50:26 unittest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:28.127 11:50:26 unittest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:28.127 11:50:26 unittest -- common/autotest_common.sh@10 -- # set +x 00:08:28.127 ************************************ 00:08:28.127 START TEST unittest_keyring 00:08:28.127 ************************************ 00:08:28.127 11:50:26 unittest.unittest_keyring -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:08:28.127 00:08:28.127 00:08:28.127 CUnit - A unit testing framework for C - Version 2.1-3 00:08:28.127 http://cunit.sourceforge.net/ 00:08:28.127 00:08:28.127 00:08:28.127 Suite: keyring 00:08:28.127 Test: test_keyring_add_remove ...[2024-07-21 11:50:26.907302] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:08:28.127 [2024-07-21 11:50:26.907587] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:08:28.127 passed 00:08:28.127 Test: test_keyring_get_put ...passed 00:08:28.127 00:08:28.127 Run Summary: Type Total Ran Passed Failed Inactive 00:08:28.127 suites 1 1 n/a 0 0 00:08:28.127 tests 2 2 2 0 0 00:08:28.127 
asserts 44 44 44 0 n/a 00:08:28.127 00:08:28.127 Elapsed time = 0.001 seconds 00:08:28.127 [2024-07-21 11:50:26.907648] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:08:28.127 00:08:28.127 real 0m0.029s 00:08:28.127 user 0m0.017s 00:08:28.127 sys 0m0.013s 00:08:28.127 11:50:26 unittest.unittest_keyring -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:28.127 ************************************ 00:08:28.127 END TEST unittest_keyring 00:08:28.127 ************************************ 00:08:28.127 11:50:26 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:08:28.127 11:50:26 unittest -- unit/unittest.sh@292 -- # '[' yes = yes ']' 00:08:28.127 11:50:26 unittest -- unit/unittest.sh@292 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:08:28.127 11:50:26 unittest -- unit/unittest.sh@293 -- # hostname 00:08:28.127 11:50:26 unittest -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:08:28.385 geninfo: WARNING: invalid characters removed from testname! 00:09:00.617 11:50:56 unittest -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:09:03.147 11:51:01 unittest -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:06.432 11:51:04 unittest -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:08.959 11:51:07 unittest -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:12.239 11:51:10 unittest -- unit/unittest.sh@298 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:15.516 11:51:13 unittest -- 
unit/unittest.sh@299 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:18.059 11:51:16 unittest -- unit/unittest.sh@300 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:20.585 11:51:19 unittest -- unit/unittest.sh@301 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:09:20.585 11:51:19 unittest -- unit/unittest.sh@302 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:21.149 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:09:21.149 Found 321 entries. 00:09:21.149 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:09:21.149 Writing .css and .png files. 00:09:21.149 Generating output. 00:09:21.149 Processing file include/linux/virtio_ring.h 00:09:21.406 Processing file include/spdk/nvme_spec.h 00:09:21.406 Processing file include/spdk/endian.h 00:09:21.406 Processing file include/spdk/trace.h 00:09:21.406 Processing file include/spdk/mmio.h 00:09:21.406 Processing file include/spdk/base64.h 00:09:21.406 Processing file include/spdk/histogram_data.h 00:09:21.406 Processing file include/spdk/thread.h 00:09:21.406 Processing file include/spdk/nvmf_transport.h 00:09:21.406 Processing file include/spdk/bdev_module.h 00:09:21.406 Processing file include/spdk/util.h 00:09:21.406 Processing file include/spdk/nvme.h 00:09:21.664 Processing file include/spdk_internal/rdma.h 00:09:21.664 Processing file include/spdk_internal/virtio.h 00:09:21.664 Processing file include/spdk_internal/utf.h 00:09:21.664 Processing file include/spdk_internal/nvme_tcp.h 00:09:21.664 Processing file include/spdk_internal/sock.h 00:09:21.664 Processing file include/spdk_internal/sgl.h 00:09:21.664 Processing file lib/accel/accel_sw.c 00:09:21.664 Processing file lib/accel/accel.c 00:09:21.664 Processing file lib/accel/accel_rpc.c 00:09:21.923 Processing file lib/bdev/bdev_rpc.c 00:09:21.923 Processing file lib/bdev/bdev_zone.c 00:09:21.923 Processing file lib/bdev/scsi_nvme.c 00:09:21.923 Processing file lib/bdev/bdev.c 00:09:21.923 Processing file lib/bdev/part.c 00:09:22.180 Processing file lib/blob/request.c 00:09:22.180 Processing file lib/blob/zeroes.c 00:09:22.180 Processing file lib/blob/blob_bs_dev.c 00:09:22.180 Processing file lib/blob/blobstore.c 00:09:22.180 Processing file lib/blob/blobstore.h 00:09:22.438 Processing file lib/blobfs/blobfs.c 00:09:22.438 Processing file lib/blobfs/tree.c 00:09:22.438 Processing file lib/conf/conf.c 00:09:22.438 Processing file lib/dma/dma.c 00:09:22.698 Processing file lib/env_dpdk/pci_idxd.c 00:09:22.698 Processing file lib/env_dpdk/memory.c 00:09:22.698 Processing file lib/env_dpdk/threads.c 00:09:22.698 Processing file lib/env_dpdk/pci_dpdk.c 
00:09:22.698 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:09:22.698 Processing file lib/env_dpdk/pci.c 00:09:22.698 Processing file lib/env_dpdk/pci_event.c 00:09:22.698 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:09:22.698 Processing file lib/env_dpdk/pci_ioat.c 00:09:22.698 Processing file lib/env_dpdk/sigbus_handler.c 00:09:22.698 Processing file lib/env_dpdk/pci_vmd.c 00:09:22.698 Processing file lib/env_dpdk/env.c 00:09:22.698 Processing file lib/env_dpdk/init.c 00:09:22.698 Processing file lib/env_dpdk/pci_virtio.c 00:09:23.000 Processing file lib/event/reactor.c 00:09:23.000 Processing file lib/event/app_rpc.c 00:09:23.000 Processing file lib/event/log_rpc.c 00:09:23.000 Processing file lib/event/app.c 00:09:23.000 Processing file lib/event/scheduler_static.c 00:09:23.276 Processing file lib/ftl/ftl_p2l.c 00:09:23.276 Processing file lib/ftl/ftl_core.c 00:09:23.276 Processing file lib/ftl/ftl_trace.c 00:09:23.276 Processing file lib/ftl/ftl_nv_cache.c 00:09:23.276 Processing file lib/ftl/ftl_nv_cache_io.h 00:09:23.276 Processing file lib/ftl/ftl_rq.c 00:09:23.276 Processing file lib/ftl/ftl_band.h 00:09:23.276 Processing file lib/ftl/ftl_writer.h 00:09:23.276 Processing file lib/ftl/ftl_io.c 00:09:23.276 Processing file lib/ftl/ftl_core.h 00:09:23.276 Processing file lib/ftl/ftl_writer.c 00:09:23.276 Processing file lib/ftl/ftl_l2p_cache.c 00:09:23.276 Processing file lib/ftl/ftl_debug.c 00:09:23.276 Processing file lib/ftl/ftl_layout.c 00:09:23.276 Processing file lib/ftl/ftl_debug.h 00:09:23.276 Processing file lib/ftl/ftl_band_ops.c 00:09:23.276 Processing file lib/ftl/ftl_reloc.c 00:09:23.276 Processing file lib/ftl/ftl_sb.c 00:09:23.276 Processing file lib/ftl/ftl_init.c 00:09:23.276 Processing file lib/ftl/ftl_l2p.c 00:09:23.276 Processing file lib/ftl/ftl_band.c 00:09:23.276 Processing file lib/ftl/ftl_io.h 00:09:23.276 Processing file lib/ftl/ftl_nv_cache.h 00:09:23.276 Processing file lib/ftl/ftl_l2p_flat.c 00:09:23.535 Processing file lib/ftl/base/ftl_base_dev.c 00:09:23.535 Processing file lib/ftl/base/ftl_base_bdev.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:09:23.793 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:09:23.793 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:09:23.793 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:09:23.793 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:09:23.793 Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c 00:09:23.793 Processing file lib/ftl/upgrade/ftl_band_upgrade.c 00:09:23.793 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:09:23.793 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:09:23.793 Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c 00:09:23.793 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:09:23.793 Processing file lib/ftl/upgrade/ftl_trim_upgrade.c 00:09:24.051 Processing file lib/ftl/utils/ftl_mempool.c 00:09:24.051 Processing file 
lib/ftl/utils/ftl_property.c 00:09:24.051 Processing file lib/ftl/utils/ftl_md.c 00:09:24.051 Processing file lib/ftl/utils/ftl_addr_utils.h 00:09:24.051 Processing file lib/ftl/utils/ftl_df.h 00:09:24.051 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:09:24.051 Processing file lib/ftl/utils/ftl_property.h 00:09:24.051 Processing file lib/ftl/utils/ftl_conf.c 00:09:24.051 Processing file lib/ftl/utils/ftl_bitmap.c 00:09:24.307 Processing file lib/idxd/idxd_user.c 00:09:24.307 Processing file lib/idxd/idxd_internal.h 00:09:24.307 Processing file lib/idxd/idxd.c 00:09:24.307 Processing file lib/init/json_config.c 00:09:24.307 Processing file lib/init/subsystem_rpc.c 00:09:24.307 Processing file lib/init/rpc.c 00:09:24.307 Processing file lib/init/subsystem.c 00:09:24.307 Processing file lib/ioat/ioat.c 00:09:24.307 Processing file lib/ioat/ioat_internal.h 00:09:24.871 Processing file lib/iscsi/tgt_node.c 00:09:24.871 Processing file lib/iscsi/iscsi.h 00:09:24.871 Processing file lib/iscsi/task.h 00:09:24.871 Processing file lib/iscsi/conn.c 00:09:24.871 Processing file lib/iscsi/task.c 00:09:24.871 Processing file lib/iscsi/md5.c 00:09:24.871 Processing file lib/iscsi/portal_grp.c 00:09:24.871 Processing file lib/iscsi/iscsi_rpc.c 00:09:24.871 Processing file lib/iscsi/param.c 00:09:24.871 Processing file lib/iscsi/init_grp.c 00:09:24.871 Processing file lib/iscsi/iscsi.c 00:09:24.871 Processing file lib/iscsi/iscsi_subsystem.c 00:09:24.871 Processing file lib/json/json_parse.c 00:09:24.871 Processing file lib/json/json_write.c 00:09:24.871 Processing file lib/json/json_util.c 00:09:24.871 Processing file lib/jsonrpc/jsonrpc_client.c 00:09:24.871 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:09:24.871 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:09:24.871 Processing file lib/jsonrpc/jsonrpc_server.c 00:09:25.129 Processing file lib/keyring/keyring_rpc.c 00:09:25.129 Processing file lib/keyring/keyring.c 00:09:25.129 Processing file lib/log/log.c 00:09:25.129 Processing file lib/log/log_deprecated.c 00:09:25.129 Processing file lib/log/log_flags.c 00:09:25.129 Processing file lib/lvol/lvol.c 00:09:25.386 Processing file lib/nbd/nbd.c 00:09:25.386 Processing file lib/nbd/nbd_rpc.c 00:09:25.386 Processing file lib/notify/notify.c 00:09:25.386 Processing file lib/notify/notify_rpc.c 00:09:26.320 Processing file lib/nvme/nvme_ns_cmd.c 00:09:26.320 Processing file lib/nvme/nvme_zns.c 00:09:26.320 Processing file lib/nvme/nvme_auth.c 00:09:26.320 Processing file lib/nvme/nvme_cuse.c 00:09:26.320 Processing file lib/nvme/nvme_opal.c 00:09:26.320 Processing file lib/nvme/nvme_poll_group.c 00:09:26.320 Processing file lib/nvme/nvme_qpair.c 00:09:26.320 Processing file lib/nvme/nvme_pcie_common.c 00:09:26.320 Processing file lib/nvme/nvme_transport.c 00:09:26.320 Processing file lib/nvme/nvme_fabric.c 00:09:26.320 Processing file lib/nvme/nvme.c 00:09:26.320 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:09:26.320 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:09:26.320 Processing file lib/nvme/nvme_pcie.c 00:09:26.320 Processing file lib/nvme/nvme_ns.c 00:09:26.320 Processing file lib/nvme/nvme_discovery.c 00:09:26.320 Processing file lib/nvme/nvme_tcp.c 00:09:26.320 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:09:26.320 Processing file lib/nvme/nvme_pcie_internal.h 00:09:26.320 Processing file lib/nvme/nvme_rdma.c 00:09:26.320 Processing file lib/nvme/nvme_ctrlr.c 00:09:26.320 Processing file lib/nvme/nvme_internal.h 00:09:26.320 Processing file lib/nvme/nvme_io_msg.c 
00:09:26.320 Processing file lib/nvme/nvme_quirks.c 00:09:26.578 Processing file lib/nvmf/ctrlr.c 00:09:26.578 Processing file lib/nvmf/subsystem.c 00:09:26.578 Processing file lib/nvmf/nvmf_rpc.c 00:09:26.578 Processing file lib/nvmf/rdma.c 00:09:26.578 Processing file lib/nvmf/ctrlr_bdev.c 00:09:26.578 Processing file lib/nvmf/ctrlr_discovery.c 00:09:26.578 Processing file lib/nvmf/nvmf_internal.h 00:09:26.578 Processing file lib/nvmf/nvmf.c 00:09:26.578 Processing file lib/nvmf/transport.c 00:09:26.578 Processing file lib/nvmf/tcp.c 00:09:26.578 Processing file lib/nvmf/auth.c 00:09:26.837 Processing file lib/rdma/rdma_verbs.c 00:09:26.837 Processing file lib/rdma/common.c 00:09:26.837 Processing file lib/rpc/rpc.c 00:09:27.095 Processing file lib/scsi/scsi_bdev.c 00:09:27.095 Processing file lib/scsi/port.c 00:09:27.095 Processing file lib/scsi/scsi_pr.c 00:09:27.095 Processing file lib/scsi/lun.c 00:09:27.095 Processing file lib/scsi/scsi.c 00:09:27.095 Processing file lib/scsi/dev.c 00:09:27.095 Processing file lib/scsi/task.c 00:09:27.095 Processing file lib/scsi/scsi_rpc.c 00:09:27.095 Processing file lib/sock/sock_rpc.c 00:09:27.095 Processing file lib/sock/sock.c 00:09:27.353 Processing file lib/thread/thread.c 00:09:27.353 Processing file lib/thread/iobuf.c 00:09:27.353 Processing file lib/trace/trace.c 00:09:27.353 Processing file lib/trace/trace_rpc.c 00:09:27.353 Processing file lib/trace/trace_flags.c 00:09:27.353 Processing file lib/trace_parser/trace.cpp 00:09:27.353 Processing file lib/ut/ut.c 00:09:27.612 Processing file lib/ut_mock/mock.c 00:09:27.871 Processing file lib/util/string.c 00:09:27.871 Processing file lib/util/strerror_tls.c 00:09:27.871 Processing file lib/util/uuid.c 00:09:27.871 Processing file lib/util/file.c 00:09:27.871 Processing file lib/util/dif.c 00:09:27.871 Processing file lib/util/base64.c 00:09:27.871 Processing file lib/util/crc32.c 00:09:27.871 Processing file lib/util/xor.c 00:09:27.871 Processing file lib/util/pipe.c 00:09:27.871 Processing file lib/util/crc32c.c 00:09:27.871 Processing file lib/util/crc64.c 00:09:27.871 Processing file lib/util/zipf.c 00:09:27.871 Processing file lib/util/fd.c 00:09:27.871 Processing file lib/util/bit_array.c 00:09:27.871 Processing file lib/util/crc32_ieee.c 00:09:27.871 Processing file lib/util/cpuset.c 00:09:27.871 Processing file lib/util/fd_group.c 00:09:27.871 Processing file lib/util/math.c 00:09:27.871 Processing file lib/util/hexlify.c 00:09:27.871 Processing file lib/util/iov.c 00:09:27.871 Processing file lib/util/crc16.c 00:09:27.871 Processing file lib/vfio_user/host/vfio_user.c 00:09:27.871 Processing file lib/vfio_user/host/vfio_user_pci.c 00:09:28.129 Processing file lib/vhost/vhost_scsi.c 00:09:28.129 Processing file lib/vhost/vhost_rpc.c 00:09:28.129 Processing file lib/vhost/rte_vhost_user.c 00:09:28.129 Processing file lib/vhost/vhost_blk.c 00:09:28.129 Processing file lib/vhost/vhost_internal.h 00:09:28.129 Processing file lib/vhost/vhost.c 00:09:28.387 Processing file lib/virtio/virtio.c 00:09:28.387 Processing file lib/virtio/virtio_vhost_user.c 00:09:28.387 Processing file lib/virtio/virtio_pci.c 00:09:28.387 Processing file lib/virtio/virtio_vfio_user.c 00:09:28.387 Processing file lib/vmd/led.c 00:09:28.387 Processing file lib/vmd/vmd.c 00:09:28.387 Processing file module/accel/dsa/accel_dsa.c 00:09:28.387 Processing file module/accel/dsa/accel_dsa_rpc.c 00:09:28.644 Processing file module/accel/error/accel_error.c 00:09:28.644 Processing file module/accel/error/accel_error_rpc.c 
00:09:28.644 Processing file module/accel/iaa/accel_iaa_rpc.c 00:09:28.644 Processing file module/accel/iaa/accel_iaa.c 00:09:28.644 Processing file module/accel/ioat/accel_ioat.c 00:09:28.644 Processing file module/accel/ioat/accel_ioat_rpc.c 00:09:28.644 Processing file module/bdev/aio/bdev_aio.c 00:09:28.644 Processing file module/bdev/aio/bdev_aio_rpc.c 00:09:28.902 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:09:28.902 Processing file module/bdev/delay/vbdev_delay.c 00:09:28.902 Processing file module/bdev/error/vbdev_error.c 00:09:28.902 Processing file module/bdev/error/vbdev_error_rpc.c 00:09:28.902 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:09:28.902 Processing file module/bdev/ftl/bdev_ftl.c 00:09:29.160 Processing file module/bdev/gpt/gpt.h 00:09:29.160 Processing file module/bdev/gpt/gpt.c 00:09:29.160 Processing file module/bdev/gpt/vbdev_gpt.c 00:09:29.160 Processing file module/bdev/iscsi/bdev_iscsi.c 00:09:29.160 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:09:29.434 Processing file module/bdev/lvol/vbdev_lvol.c 00:09:29.434 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:09:29.434 Processing file module/bdev/malloc/bdev_malloc.c 00:09:29.434 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:09:29.434 Processing file module/bdev/null/bdev_null.c 00:09:29.434 Processing file module/bdev/null/bdev_null_rpc.c 00:09:29.699 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:09:29.699 Processing file module/bdev/nvme/bdev_nvme.c 00:09:29.699 Processing file module/bdev/nvme/vbdev_opal.c 00:09:29.699 Processing file module/bdev/nvme/bdev_mdns_client.c 00:09:29.699 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:09:29.699 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:09:29.699 Processing file module/bdev/nvme/nvme_rpc.c 00:09:29.957 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:09:29.957 Processing file module/bdev/passthru/vbdev_passthru.c 00:09:30.214 Processing file module/bdev/raid/raid0.c 00:09:30.214 Processing file module/bdev/raid/concat.c 00:09:30.214 Processing file module/bdev/raid/raid5f.c 00:09:30.214 Processing file module/bdev/raid/bdev_raid_sb.c 00:09:30.214 Processing file module/bdev/raid/raid1.c 00:09:30.214 Processing file module/bdev/raid/bdev_raid.c 00:09:30.214 Processing file module/bdev/raid/bdev_raid.h 00:09:30.214 Processing file module/bdev/raid/bdev_raid_rpc.c 00:09:30.214 Processing file module/bdev/split/vbdev_split_rpc.c 00:09:30.214 Processing file module/bdev/split/vbdev_split.c 00:09:30.214 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:09:30.215 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:09:30.215 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:09:30.471 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:09:30.471 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:09:30.471 Processing file module/blob/bdev/blob_bdev.c 00:09:30.471 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:09:30.471 Processing file module/blobfs/bdev/blobfs_bdev.c 00:09:30.729 Processing file module/env_dpdk/env_dpdk_rpc.c 00:09:30.729 Processing file module/event/subsystems/accel/accel.c 00:09:30.729 Processing file module/event/subsystems/bdev/bdev.c 00:09:30.729 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:09:30.729 Processing file module/event/subsystems/iobuf/iobuf.c 00:09:30.986 Processing file module/event/subsystems/iscsi/iscsi.c 00:09:30.987 Processing file module/event/subsystems/keyring/keyring.c 00:09:30.987 
Processing file module/event/subsystems/nbd/nbd.c 00:09:30.987 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:09:30.987 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:09:31.244 Processing file module/event/subsystems/scheduler/scheduler.c 00:09:31.244 Processing file module/event/subsystems/scsi/scsi.c 00:09:31.244 Processing file module/event/subsystems/sock/sock.c 00:09:31.244 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:09:31.501 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:09:31.502 Processing file module/event/subsystems/vmd/vmd.c 00:09:31.502 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:09:31.502 Processing file module/keyring/file/keyring.c 00:09:31.502 Processing file module/keyring/file/keyring_rpc.c 00:09:31.502 Processing file module/keyring/linux/keyring.c 00:09:31.502 Processing file module/keyring/linux/keyring_rpc.c 00:09:31.759 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:09:31.759 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:09:31.759 Processing file module/scheduler/gscheduler/gscheduler.c 00:09:31.759 Processing file module/sock/sock_kernel.h 00:09:32.017 Processing file module/sock/posix/posix.c 00:09:32.017 Writing directory view page. 00:09:32.017 Overall coverage rate: 00:09:32.017 lines......: 38.7% (40806 of 105395 lines) 00:09:32.017 functions..: 42.4% (3713 of 8766 functions) 00:09:32.017 00:09:32.017 00:09:32.017 ===================== 00:09:32.017 All unit tests passed 00:09:32.017 ===================== 00:09:32.017 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:09:32.017 11:51:30 unittest -- unit/unittest.sh@305 -- # set +x 00:09:32.017 00:09:32.017 00:09:32.017 00:09:32.017 real 3m53.566s 00:09:32.017 user 3m23.527s 00:09:32.017 sys 0m19.920s 00:09:32.017 11:51:30 unittest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:32.017 11:51:30 unittest -- common/autotest_common.sh@10 -- # set +x 00:09:32.017 ************************************ 00:09:32.017 END TEST unittest 00:09:32.017 ************************************ 00:09:32.017 11:51:30 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:32.017 11:51:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:32.017 11:51:30 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:32.017 11:51:30 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:32.017 11:51:30 -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:32.017 11:51:30 -- common/autotest_common.sh@10 -- # set +x 00:09:32.017 11:51:30 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:09:32.017 11:51:30 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:32.017 11:51:30 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:32.017 11:51:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:32.017 11:51:30 -- common/autotest_common.sh@10 -- # set +x 00:09:32.017 ************************************ 00:09:32.017 START TEST env 00:09:32.017 ************************************ 00:09:32.017 11:51:30 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:32.017 * Looking for test storage... 
00:09:32.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:32.017 11:51:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:32.017 11:51:30 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:32.017 11:51:30 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:32.017 11:51:30 env -- common/autotest_common.sh@10 -- # set +x 00:09:32.017 ************************************ 00:09:32.017 START TEST env_memory 00:09:32.017 ************************************ 00:09:32.017 11:51:30 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:32.017 00:09:32.017 00:09:32.017 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.017 http://cunit.sourceforge.net/ 00:09:32.017 00:09:32.017 00:09:32.017 Suite: memory 00:09:32.275 Test: alloc and free memory map ...[2024-07-21 11:51:30.925650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:32.275 passed 00:09:32.275 Test: mem map translation ...[2024-07-21 11:51:30.974740] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:32.275 [2024-07-21 11:51:30.974861] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:32.275 [2024-07-21 11:51:30.975021] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:32.275 [2024-07-21 11:51:30.975106] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:32.275 passed 00:09:32.275 Test: mem map registration ...[2024-07-21 11:51:31.061282] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:32.275 [2024-07-21 11:51:31.061400] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:32.275 passed 00:09:32.532 Test: mem map adjacent registrations ...passed 00:09:32.532 00:09:32.532 Run Summary: Type Total Ran Passed Failed Inactive 00:09:32.532 suites 1 1 n/a 0 0 00:09:32.532 tests 4 4 4 0 0 00:09:32.532 asserts 152 152 152 0 n/a 00:09:32.532 00:09:32.532 Elapsed time = 0.297 seconds 00:09:32.532 00:09:32.532 real 0m0.331s 00:09:32.532 user 0m0.302s 00:09:32.532 sys 0m0.029s 00:09:32.532 11:51:31 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:32.532 ************************************ 00:09:32.532 11:51:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:32.532 END TEST env_memory 00:09:32.532 ************************************ 00:09:32.532 11:51:31 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:32.532 11:51:31 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:32.532 11:51:31 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:32.532 11:51:31 env -- common/autotest_common.sh@10 -- # set +x 00:09:32.532 ************************************ 00:09:32.532 START TEST env_vtophys 00:09:32.532 ************************************ 00:09:32.532 11:51:31 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:32.533 EAL: lib.eal log level changed from notice to debug 00:09:32.533 EAL: Detected lcore 0 as core 0 on socket 0 00:09:32.533 EAL: Detected lcore 1 as core 0 on socket 0 00:09:32.533 EAL: Detected lcore 2 as core 0 on socket 0 00:09:32.533 EAL: Detected lcore 3 as core 0 on socket 0 00:09:32.533 EAL: Detected lcore 4 as core 0 on socket 0 00:09:32.533 EAL: Detected lcore 5 as core 0 on socket 0 00:09:32.533 EAL: Detected lcore 6 as core 0 on socket 0 00:09:32.533 EAL: Detected lcore 7 as core 0 on socket 0 00:09:32.533 EAL: Detected lcore 8 as core 0 on socket 0 00:09:32.533 EAL: Detected lcore 9 as core 0 on socket 0 00:09:32.533 EAL: Maximum logical cores by configuration: 128 00:09:32.533 EAL: Detected CPU lcores: 10 00:09:32.533 EAL: Detected NUMA nodes: 1 00:09:32.533 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:09:32.533 EAL: Checking presence of .so 'librte_eal.so.24' 00:09:32.533 EAL: Checking presence of .so 'librte_eal.so' 00:09:32.533 EAL: Detected static linkage of DPDK 00:09:32.533 EAL: No shared files mode enabled, IPC will be disabled 00:09:32.533 EAL: Selected IOVA mode 'PA' 00:09:32.533 EAL: Probing VFIO support... 00:09:32.533 EAL: IOMMU type 1 (Type 1) is supported 00:09:32.533 EAL: IOMMU type 7 (sPAPR) is not supported 00:09:32.533 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:09:32.533 EAL: VFIO support initialized 00:09:32.533 EAL: Ask a virtual area of 0x2e000 bytes 00:09:32.533 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:32.533 EAL: Setting up physically contiguous memory... 00:09:32.533 EAL: Setting maximum number of open files to 1048576 00:09:32.533 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:32.533 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:32.533 EAL: Ask a virtual area of 0x61000 bytes 00:09:32.533 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:32.533 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:32.533 EAL: Ask a virtual area of 0x400000000 bytes 00:09:32.533 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:32.533 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:32.533 EAL: Ask a virtual area of 0x61000 bytes 00:09:32.533 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:32.533 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:32.533 EAL: Ask a virtual area of 0x400000000 bytes 00:09:32.533 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:32.533 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:32.533 EAL: Ask a virtual area of 0x61000 bytes 00:09:32.533 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:32.533 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:32.533 EAL: Ask a virtual area of 0x400000000 bytes 00:09:32.533 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:32.533 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:32.533 EAL: Ask a virtual area of 0x61000 bytes 00:09:32.533 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:32.533 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:32.533 EAL: Ask a virtual area of 0x400000000 bytes 00:09:32.533 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:32.533 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 
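
For scale: each of the four memseg lists above reserves 0x400000000 bytes of virtual address space, i.e. 16 GiB, matching its configuration of 8192 segments of 2 MiB hugepages, so the EAL reserves 4 x 16 GiB = 64 GiB of VA before any hugepages are actually mapped. The vtophys tests that follow allocate from this arena and translate virtual to physical addresses. A rough, hedged sketch of that translation path using the public SPDK env API (spdk_dma_zmalloc()/spdk_vtophys() from include/spdk/env.h) is shown below; it is an illustration, not the test's source, and assumes the SPDK environment has already been initialized.

    #include <assert.h>
    #include "spdk/env.h"   /* spdk_dma_zmalloc(), spdk_vtophys(), spdk_dma_free() */

    /* Allocate a pinned, hugepage-backed buffer and resolve its physical address,
     * which is the core operation vtophys_malloc_test exercises at many sizes. */
    static void vtophys_sketch(void)
    {
            void *buf = spdk_dma_zmalloc(4096, 0x1000, NULL);  /* 4 KiB, 4 KiB aligned */
            assert(buf != NULL);

            uint64_t paddr = spdk_vtophys(buf, NULL);
            assert(paddr != SPDK_VTOPHYS_ERROR);  /* translation must succeed */

            spdk_dma_free(buf);
    }
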
00:09:32.533 EAL: Hugepages will be freed exactly as allocated. 00:09:32.533 EAL: No shared files mode enabled, IPC is disabled 00:09:32.533 EAL: No shared files mode enabled, IPC is disabled 00:09:32.790 EAL: TSC frequency is ~2200000 KHz 00:09:32.790 EAL: Main lcore 0 is ready (tid=7fdc82a19a80;cpuset=[0]) 00:09:32.790 EAL: Trying to obtain current memory policy. 00:09:32.790 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:32.790 EAL: Restoring previous memory policy: 0 00:09:32.790 EAL: request: mp_malloc_sync 00:09:32.790 EAL: No shared files mode enabled, IPC is disabled 00:09:32.790 EAL: Heap on socket 0 was expanded by 2MB 00:09:32.790 EAL: No shared files mode enabled, IPC is disabled 00:09:32.790 EAL: Mem event callback 'spdk:(nil)' registered 00:09:32.790 00:09:32.790 00:09:32.790 CUnit - A unit testing framework for C - Version 2.1-3 00:09:32.790 http://cunit.sourceforge.net/ 00:09:32.790 00:09:32.790 00:09:32.790 Suite: components_suite 00:09:33.048 Test: vtophys_malloc_test ...passed 00:09:33.048 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:33.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.048 EAL: Restoring previous memory policy: 0 00:09:33.048 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.048 EAL: request: mp_malloc_sync 00:09:33.048 EAL: No shared files mode enabled, IPC is disabled 00:09:33.048 EAL: Heap on socket 0 was expanded by 4MB 00:09:33.048 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.048 EAL: request: mp_malloc_sync 00:09:33.048 EAL: No shared files mode enabled, IPC is disabled 00:09:33.048 EAL: Heap on socket 0 was shrunk by 4MB 00:09:33.048 EAL: Trying to obtain current memory policy. 00:09:33.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.048 EAL: Restoring previous memory policy: 0 00:09:33.048 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was expanded by 6MB 00:09:33.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was shrunk by 6MB 00:09:33.049 EAL: Trying to obtain current memory policy. 00:09:33.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.049 EAL: Restoring previous memory policy: 0 00:09:33.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was expanded by 10MB 00:09:33.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was shrunk by 10MB 00:09:33.049 EAL: Trying to obtain current memory policy. 00:09:33.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.049 EAL: Restoring previous memory policy: 0 00:09:33.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was expanded by 18MB 00:09:33.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was shrunk by 18MB 00:09:33.049 EAL: Trying to obtain current memory policy. 
00:09:33.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.049 EAL: Restoring previous memory policy: 0 00:09:33.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was expanded by 34MB 00:09:33.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was shrunk by 34MB 00:09:33.049 EAL: Trying to obtain current memory policy. 00:09:33.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.049 EAL: Restoring previous memory policy: 0 00:09:33.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was expanded by 66MB 00:09:33.049 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.049 EAL: request: mp_malloc_sync 00:09:33.049 EAL: No shared files mode enabled, IPC is disabled 00:09:33.049 EAL: Heap on socket 0 was shrunk by 66MB 00:09:33.049 EAL: Trying to obtain current memory policy. 00:09:33.049 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.307 EAL: Restoring previous memory policy: 0 00:09:33.307 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.307 EAL: request: mp_malloc_sync 00:09:33.307 EAL: No shared files mode enabled, IPC is disabled 00:09:33.307 EAL: Heap on socket 0 was expanded by 130MB 00:09:33.307 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.307 EAL: request: mp_malloc_sync 00:09:33.307 EAL: No shared files mode enabled, IPC is disabled 00:09:33.307 EAL: Heap on socket 0 was shrunk by 130MB 00:09:33.307 EAL: Trying to obtain current memory policy. 00:09:33.307 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.307 EAL: Restoring previous memory policy: 0 00:09:33.307 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.307 EAL: request: mp_malloc_sync 00:09:33.307 EAL: No shared files mode enabled, IPC is disabled 00:09:33.307 EAL: Heap on socket 0 was expanded by 258MB 00:09:33.307 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.307 EAL: request: mp_malloc_sync 00:09:33.307 EAL: No shared files mode enabled, IPC is disabled 00:09:33.307 EAL: Heap on socket 0 was shrunk by 258MB 00:09:33.307 EAL: Trying to obtain current memory policy. 00:09:33.307 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:33.565 EAL: Restoring previous memory policy: 0 00:09:33.565 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.565 EAL: request: mp_malloc_sync 00:09:33.565 EAL: No shared files mode enabled, IPC is disabled 00:09:33.565 EAL: Heap on socket 0 was expanded by 514MB 00:09:33.565 EAL: Calling mem event callback 'spdk:(nil)' 00:09:33.823 EAL: request: mp_malloc_sync 00:09:33.823 EAL: No shared files mode enabled, IPC is disabled 00:09:33.823 EAL: Heap on socket 0 was shrunk by 514MB 00:09:33.823 EAL: Trying to obtain current memory policy. 
00:09:33.823 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:34.080 EAL: Restoring previous memory policy: 0 00:09:34.080 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.080 EAL: request: mp_malloc_sync 00:09:34.080 EAL: No shared files mode enabled, IPC is disabled 00:09:34.080 EAL: Heap on socket 0 was expanded by 1026MB 00:09:34.346 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.346 EAL: request: mp_malloc_sync 00:09:34.346 EAL: No shared files mode enabled, IPC is disabled 00:09:34.346 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:34.346 passed 00:09:34.346 00:09:34.346 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.346 suites 1 1 n/a 0 0 00:09:34.346 tests 2 2 2 0 0 00:09:34.346 asserts 6331 6331 6331 0 n/a 00:09:34.346 00:09:34.346 Elapsed time = 1.659 seconds 00:09:34.346 EAL: Calling mem event callback 'spdk:(nil)' 00:09:34.346 EAL: request: mp_malloc_sync 00:09:34.346 EAL: No shared files mode enabled, IPC is disabled 00:09:34.346 EAL: Heap on socket 0 was shrunk by 2MB 00:09:34.346 EAL: No shared files mode enabled, IPC is disabled 00:09:34.346 EAL: No shared files mode enabled, IPC is disabled 00:09:34.346 EAL: No shared files mode enabled, IPC is disabled 00:09:34.346 00:09:34.346 real 0m1.951s 00:09:34.346 user 0m0.978s 00:09:34.346 sys 0m0.825s 00:09:34.346 11:51:33 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.346 11:51:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:34.346 ************************************ 00:09:34.346 END TEST env_vtophys 00:09:34.346 ************************************ 00:09:34.603 11:51:33 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:34.603 11:51:33 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.603 11:51:33 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.603 11:51:33 env -- common/autotest_common.sh@10 -- # set +x 00:09:34.603 ************************************ 00:09:34.603 START TEST env_pci 00:09:34.603 ************************************ 00:09:34.603 11:51:33 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:34.603 00:09:34.603 00:09:34.603 CUnit - A unit testing framework for C - Version 2.1-3 00:09:34.603 http://cunit.sourceforge.net/ 00:09:34.603 00:09:34.603 00:09:34.603 Suite: pci 00:09:34.603 Test: pci_hook ...[2024-07-21 11:51:33.275911] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 123311 has claimed it 00:09:34.603 EAL: Cannot find device (10000:00:01.0) 00:09:34.603 EAL: Failed to attach device on primary process 00:09:34.603 passed 00:09:34.603 00:09:34.603 Run Summary: Type Total Ran Passed Failed Inactive 00:09:34.603 suites 1 1 n/a 0 0 00:09:34.603 tests 1 1 1 0 0 00:09:34.603 asserts 25 25 25 0 n/a 00:09:34.603 00:09:34.603 Elapsed time = 0.004 seconds 00:09:34.603 00:09:34.603 real 0m0.066s 00:09:34.603 user 0m0.034s 00:09:34.603 sys 0m0.032s 00:09:34.603 11:51:33 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.603 11:51:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:34.603 ************************************ 00:09:34.603 END TEST env_pci 00:09:34.603 ************************************ 00:09:34.603 11:51:33 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:34.603 11:51:33 env -- env/env.sh@15 -- # uname 00:09:34.603 11:51:33 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:34.603 11:51:33 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:34.603 11:51:33 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:34.603 11:51:33 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:34.603 11:51:33 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.603 11:51:33 env -- common/autotest_common.sh@10 -- # set +x 00:09:34.603 ************************************ 00:09:34.603 START TEST env_dpdk_post_init 00:09:34.603 ************************************ 00:09:34.603 11:51:33 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:34.603 EAL: Detected CPU lcores: 10 00:09:34.603 EAL: Detected NUMA nodes: 1 00:09:34.603 EAL: Detected static linkage of DPDK 00:09:34.603 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:34.603 EAL: Selected IOVA mode 'PA' 00:09:34.603 EAL: VFIO support initialized 00:09:34.860 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:34.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:34.860 Starting DPDK initialization... 00:09:34.860 Starting SPDK post initialization... 00:09:34.860 SPDK NVMe probe 00:09:34.860 Attaching to 0000:00:10.0 00:09:34.860 Attached to 0000:00:10.0 00:09:34.860 Cleaning up... 00:09:34.860 00:09:34.860 real 0m0.251s 00:09:34.860 user 0m0.070s 00:09:34.860 sys 0m0.083s 00:09:34.860 11:51:33 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:34.860 11:51:33 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:34.860 ************************************ 00:09:34.860 END TEST env_dpdk_post_init 00:09:34.860 ************************************ 00:09:34.860 11:51:33 env -- env/env.sh@26 -- # uname 00:09:34.860 11:51:33 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:34.860 11:51:33 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:34.860 11:51:33 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:34.860 11:51:33 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:34.860 11:51:33 env -- common/autotest_common.sh@10 -- # set +x 00:09:34.860 ************************************ 00:09:34.860 START TEST env_mem_callbacks 00:09:34.860 ************************************ 00:09:34.860 11:51:33 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:35.117 EAL: Detected CPU lcores: 10 00:09:35.117 EAL: Detected NUMA nodes: 1 00:09:35.117 EAL: Detected static linkage of DPDK 00:09:35.117 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:35.117 EAL: Selected IOVA mode 'PA' 00:09:35.117 EAL: VFIO support initialized 00:09:35.117 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:35.117 00:09:35.117 00:09:35.117 CUnit - A unit testing framework for C - Version 2.1-3 00:09:35.117 http://cunit.sourceforge.net/ 00:09:35.117 00:09:35.117 00:09:35.117 Suite: memory 00:09:35.117 Test: test ... 
00:09:35.117 register 0x200000200000 2097152 00:09:35.117 malloc 3145728 00:09:35.117 register 0x200000400000 4194304 00:09:35.117 buf 0x200000500000 len 3145728 PASSED 00:09:35.117 malloc 64 00:09:35.117 buf 0x2000004fff40 len 64 PASSED 00:09:35.117 malloc 4194304 00:09:35.117 register 0x200000800000 6291456 00:09:35.117 buf 0x200000a00000 len 4194304 PASSED 00:09:35.117 free 0x200000500000 3145728 00:09:35.117 free 0x2000004fff40 64 00:09:35.117 unregister 0x200000400000 4194304 PASSED 00:09:35.117 free 0x200000a00000 4194304 00:09:35.117 unregister 0x200000800000 6291456 PASSED 00:09:35.117 malloc 8388608 00:09:35.117 register 0x200000400000 10485760 00:09:35.117 buf 0x200000600000 len 8388608 PASSED 00:09:35.117 free 0x200000600000 8388608 00:09:35.117 unregister 0x200000400000 10485760 PASSED 00:09:35.117 passed 00:09:35.117 00:09:35.117 Run Summary: Type Total Ran Passed Failed Inactive 00:09:35.117 suites 1 1 n/a 0 0 00:09:35.117 tests 1 1 1 0 0 00:09:35.117 asserts 15 15 15 0 n/a 00:09:35.117 00:09:35.117 Elapsed time = 0.007 seconds 00:09:35.117 00:09:35.117 real 0m0.221s 00:09:35.117 user 0m0.050s 00:09:35.117 sys 0m0.068s 00:09:35.117 11:51:33 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:35.117 11:51:33 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:35.117 ************************************ 00:09:35.117 END TEST env_mem_callbacks 00:09:35.117 ************************************ 00:09:35.117 ************************************ 00:09:35.117 END TEST env 00:09:35.117 ************************************ 00:09:35.117 00:09:35.117 real 0m3.184s 00:09:35.117 user 0m1.629s 00:09:35.117 sys 0m1.197s 00:09:35.117 11:51:33 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:35.117 11:51:33 env -- common/autotest_common.sh@10 -- # set +x 00:09:35.373 11:51:33 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:35.373 11:51:33 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:35.373 11:51:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:35.373 11:51:33 -- common/autotest_common.sh@10 -- # set +x 00:09:35.373 ************************************ 00:09:35.373 START TEST rpc 00:09:35.373 ************************************ 00:09:35.373 11:51:33 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:35.373 * Looking for test storage... 00:09:35.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:35.373 11:51:34 rpc -- rpc/rpc.sh@65 -- # spdk_pid=123442 00:09:35.373 11:51:34 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:35.373 11:51:34 rpc -- rpc/rpc.sh@67 -- # waitforlisten 123442 00:09:35.373 11:51:34 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:35.373 11:51:34 rpc -- common/autotest_common.sh@827 -- # '[' -z 123442 ']' 00:09:35.373 11:51:34 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.373 11:51:34 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:35.373 11:51:34 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:35.373 11:51:34 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:35.373 11:51:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.373 [2024-07-21 11:51:34.168941] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:35.373 [2024-07-21 11:51:34.169203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123442 ] 00:09:35.630 [2024-07-21 11:51:34.336522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.630 [2024-07-21 11:51:34.407883] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:35.630 [2024-07-21 11:51:34.408309] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 123442' to capture a snapshot of events at runtime. 00:09:35.630 [2024-07-21 11:51:34.408447] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.630 [2024-07-21 11:51:34.408625] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.630 [2024-07-21 11:51:34.408713] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid123442 for offline analysis/debug. 00:09:35.630 [2024-07-21 11:51:34.408900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.562 11:51:35 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:36.562 11:51:35 rpc -- common/autotest_common.sh@860 -- # return 0 00:09:36.562 11:51:35 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:36.562 11:51:35 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:36.562 11:51:35 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:36.562 11:51:35 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:36.562 11:51:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.562 11:51:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.562 11:51:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.562 ************************************ 00:09:36.562 START TEST rpc_integrity 00:09:36.562 ************************************ 00:09:36.562 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:09:36.562 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:36.562 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.562 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:36.562 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.562 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:36.562 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:36.562 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:36.562 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:36.562 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.562 11:51:35 rpc.rpc_integrity 
-- common/autotest_common.sh@10 -- # set +x 00:09:36.562 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.562 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:36.562 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:36.563 { 00:09:36.563 "name": "Malloc0", 00:09:36.563 "aliases": [ 00:09:36.563 "d510646f-dbde-4d2c-a02e-20972159aa0e" 00:09:36.563 ], 00:09:36.563 "product_name": "Malloc disk", 00:09:36.563 "block_size": 512, 00:09:36.563 "num_blocks": 16384, 00:09:36.563 "uuid": "d510646f-dbde-4d2c-a02e-20972159aa0e", 00:09:36.563 "assigned_rate_limits": { 00:09:36.563 "rw_ios_per_sec": 0, 00:09:36.563 "rw_mbytes_per_sec": 0, 00:09:36.563 "r_mbytes_per_sec": 0, 00:09:36.563 "w_mbytes_per_sec": 0 00:09:36.563 }, 00:09:36.563 "claimed": false, 00:09:36.563 "zoned": false, 00:09:36.563 "supported_io_types": { 00:09:36.563 "read": true, 00:09:36.563 "write": true, 00:09:36.563 "unmap": true, 00:09:36.563 "write_zeroes": true, 00:09:36.563 "flush": true, 00:09:36.563 "reset": true, 00:09:36.563 "compare": false, 00:09:36.563 "compare_and_write": false, 00:09:36.563 "abort": true, 00:09:36.563 "nvme_admin": false, 00:09:36.563 "nvme_io": false 00:09:36.563 }, 00:09:36.563 "memory_domains": [ 00:09:36.563 { 00:09:36.563 "dma_device_id": "system", 00:09:36.563 "dma_device_type": 1 00:09:36.563 }, 00:09:36.563 { 00:09:36.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.563 "dma_device_type": 2 00:09:36.563 } 00:09:36.563 ], 00:09:36.563 "driver_specific": {} 00:09:36.563 } 00:09:36.563 ]' 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 [2024-07-21 11:51:35.289836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:36.563 [2024-07-21 11:51:35.290099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:36.563 [2024-07-21 11:51:35.290255] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006f80 00:09:36.563 [2024-07-21 11:51:35.290399] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:36.563 [2024-07-21 11:51:35.293274] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:36.563 [2024-07-21 11:51:35.293477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:36.563 Passthru0 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:36.563 { 00:09:36.563 "name": "Malloc0", 00:09:36.563 "aliases": [ 00:09:36.563 "d510646f-dbde-4d2c-a02e-20972159aa0e" 00:09:36.563 ], 00:09:36.563 "product_name": "Malloc disk", 00:09:36.563 "block_size": 512, 00:09:36.563 "num_blocks": 16384, 00:09:36.563 "uuid": "d510646f-dbde-4d2c-a02e-20972159aa0e", 00:09:36.563 "assigned_rate_limits": { 00:09:36.563 "rw_ios_per_sec": 0, 00:09:36.563 "rw_mbytes_per_sec": 0, 00:09:36.563 "r_mbytes_per_sec": 0, 00:09:36.563 "w_mbytes_per_sec": 0 00:09:36.563 }, 00:09:36.563 "claimed": true, 00:09:36.563 "claim_type": "exclusive_write", 00:09:36.563 "zoned": false, 00:09:36.563 "supported_io_types": { 00:09:36.563 "read": true, 00:09:36.563 "write": true, 00:09:36.563 "unmap": true, 00:09:36.563 "write_zeroes": true, 00:09:36.563 "flush": true, 00:09:36.563 "reset": true, 00:09:36.563 "compare": false, 00:09:36.563 "compare_and_write": false, 00:09:36.563 "abort": true, 00:09:36.563 "nvme_admin": false, 00:09:36.563 "nvme_io": false 00:09:36.563 }, 00:09:36.563 "memory_domains": [ 00:09:36.563 { 00:09:36.563 "dma_device_id": "system", 00:09:36.563 "dma_device_type": 1 00:09:36.563 }, 00:09:36.563 { 00:09:36.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.563 "dma_device_type": 2 00:09:36.563 } 00:09:36.563 ], 00:09:36.563 "driver_specific": {} 00:09:36.563 }, 00:09:36.563 { 00:09:36.563 "name": "Passthru0", 00:09:36.563 "aliases": [ 00:09:36.563 "029be9f3-0e0f-542c-9506-e385075d99ea" 00:09:36.563 ], 00:09:36.563 "product_name": "passthru", 00:09:36.563 "block_size": 512, 00:09:36.563 "num_blocks": 16384, 00:09:36.563 "uuid": "029be9f3-0e0f-542c-9506-e385075d99ea", 00:09:36.563 "assigned_rate_limits": { 00:09:36.563 "rw_ios_per_sec": 0, 00:09:36.563 "rw_mbytes_per_sec": 0, 00:09:36.563 "r_mbytes_per_sec": 0, 00:09:36.563 "w_mbytes_per_sec": 0 00:09:36.563 }, 00:09:36.563 "claimed": false, 00:09:36.563 "zoned": false, 00:09:36.563 "supported_io_types": { 00:09:36.563 "read": true, 00:09:36.563 "write": true, 00:09:36.563 "unmap": true, 00:09:36.563 "write_zeroes": true, 00:09:36.563 "flush": true, 00:09:36.563 "reset": true, 00:09:36.563 "compare": false, 00:09:36.563 "compare_and_write": false, 00:09:36.563 "abort": true, 00:09:36.563 "nvme_admin": false, 00:09:36.563 "nvme_io": false 00:09:36.563 }, 00:09:36.563 "memory_domains": [ 00:09:36.563 { 00:09:36.563 "dma_device_id": "system", 00:09:36.563 "dma_device_type": 1 00:09:36.563 }, 00:09:36.563 { 00:09:36.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.563 "dma_device_type": 2 00:09:36.563 } 00:09:36.563 ], 00:09:36.563 "driver_specific": { 00:09:36.563 "passthru": { 00:09:36.563 "name": "Passthru0", 00:09:36.563 "base_bdev_name": "Malloc0" 00:09:36.563 } 00:09:36.563 } 00:09:36.563 } 00:09:36.563 ]' 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 
00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:36.563 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:36.563 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:36.820 11:51:35 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:36.820 00:09:36.820 real 0m0.313s 00:09:36.820 user 0m0.202s 00:09:36.820 sys 0m0.041s 00:09:36.820 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.820 11:51:35 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:36.820 ************************************ 00:09:36.820 END TEST rpc_integrity 00:09:36.820 ************************************ 00:09:36.820 11:51:35 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:36.820 11:51:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:36.820 11:51:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:36.820 11:51:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:36.820 ************************************ 00:09:36.820 START TEST rpc_plugins 00:09:36.820 ************************************ 00:09:36.820 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:09:36.820 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:36.820 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.820 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:36.820 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.820 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:36.820 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:36.820 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.820 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:36.820 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.820 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:36.820 { 00:09:36.820 "name": "Malloc1", 00:09:36.820 "aliases": [ 00:09:36.820 "9054fa62-18ee-4571-967d-997a83f2ba22" 00:09:36.820 ], 00:09:36.820 "product_name": "Malloc disk", 00:09:36.820 "block_size": 4096, 00:09:36.820 "num_blocks": 256, 00:09:36.820 "uuid": "9054fa62-18ee-4571-967d-997a83f2ba22", 00:09:36.820 "assigned_rate_limits": { 00:09:36.820 "rw_ios_per_sec": 0, 00:09:36.820 "rw_mbytes_per_sec": 0, 00:09:36.820 "r_mbytes_per_sec": 0, 00:09:36.820 "w_mbytes_per_sec": 0 00:09:36.820 }, 00:09:36.820 "claimed": false, 00:09:36.820 "zoned": false, 00:09:36.820 "supported_io_types": { 00:09:36.820 "read": true, 00:09:36.821 "write": true, 00:09:36.821 "unmap": true, 00:09:36.821 "write_zeroes": true, 00:09:36.821 "flush": true, 00:09:36.821 "reset": true, 00:09:36.821 "compare": false, 00:09:36.821 "compare_and_write": false, 00:09:36.821 "abort": true, 00:09:36.821 "nvme_admin": false, 00:09:36.821 "nvme_io": false 00:09:36.821 }, 00:09:36.821 "memory_domains": [ 00:09:36.821 { 
00:09:36.821 "dma_device_id": "system", 00:09:36.821 "dma_device_type": 1 00:09:36.821 }, 00:09:36.821 { 00:09:36.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.821 "dma_device_type": 2 00:09:36.821 } 00:09:36.821 ], 00:09:36.821 "driver_specific": {} 00:09:36.821 } 00:09:36.821 ]' 00:09:36.821 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:36.821 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:36.821 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:36.821 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.821 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:36.821 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.821 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:36.821 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:36.821 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:36.821 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:36.821 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:36.821 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:36.821 11:51:35 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:36.821 00:09:36.821 real 0m0.149s 00:09:36.821 user 0m0.089s 00:09:36.821 sys 0m0.025s 00:09:36.821 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:36.821 11:51:35 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:36.821 ************************************ 00:09:36.821 END TEST rpc_plugins 00:09:36.821 ************************************ 00:09:37.078 11:51:35 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:37.078 11:51:35 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:37.078 11:51:35 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:37.078 11:51:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.078 ************************************ 00:09:37.078 START TEST rpc_trace_cmd_test 00:09:37.078 ************************************ 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:37.078 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid123442", 00:09:37.078 "tpoint_group_mask": "0x8", 00:09:37.078 "iscsi_conn": { 00:09:37.078 "mask": "0x2", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "scsi": { 00:09:37.078 "mask": "0x4", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "bdev": { 00:09:37.078 "mask": "0x8", 00:09:37.078 "tpoint_mask": "0xffffffffffffffff" 00:09:37.078 }, 00:09:37.078 "nvmf_rdma": { 00:09:37.078 "mask": "0x10", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "nvmf_tcp": { 00:09:37.078 "mask": "0x20", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "ftl": { 00:09:37.078 
"mask": "0x40", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "blobfs": { 00:09:37.078 "mask": "0x80", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "dsa": { 00:09:37.078 "mask": "0x200", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "thread": { 00:09:37.078 "mask": "0x400", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "nvme_pcie": { 00:09:37.078 "mask": "0x800", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "iaa": { 00:09:37.078 "mask": "0x1000", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "nvme_tcp": { 00:09:37.078 "mask": "0x2000", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "bdev_nvme": { 00:09:37.078 "mask": "0x4000", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 }, 00:09:37.078 "sock": { 00:09:37.078 "mask": "0x8000", 00:09:37.078 "tpoint_mask": "0x0" 00:09:37.078 } 00:09:37.078 }' 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:37.078 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:37.335 11:51:35 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:37.335 00:09:37.335 real 0m0.270s 00:09:37.335 user 0m0.245s 00:09:37.335 sys 0m0.018s 00:09:37.335 11:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:37.335 ************************************ 00:09:37.335 END TEST rpc_trace_cmd_test 00:09:37.335 ************************************ 00:09:37.335 11:51:35 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.335 11:51:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:37.335 11:51:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:37.335 11:51:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:37.335 11:51:36 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:37.335 11:51:36 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:37.335 11:51:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:37.335 ************************************ 00:09:37.335 START TEST rpc_daemon_integrity 00:09:37.335 ************************************ 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.335 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:37.335 { 00:09:37.335 "name": "Malloc2", 00:09:37.335 "aliases": [ 00:09:37.335 "36ba78d0-3ea1-4348-bef6-37bb87639ac2" 00:09:37.335 ], 00:09:37.335 "product_name": "Malloc disk", 00:09:37.335 "block_size": 512, 00:09:37.335 "num_blocks": 16384, 00:09:37.335 "uuid": "36ba78d0-3ea1-4348-bef6-37bb87639ac2", 00:09:37.335 "assigned_rate_limits": { 00:09:37.335 "rw_ios_per_sec": 0, 00:09:37.335 "rw_mbytes_per_sec": 0, 00:09:37.335 "r_mbytes_per_sec": 0, 00:09:37.336 "w_mbytes_per_sec": 0 00:09:37.336 }, 00:09:37.336 "claimed": false, 00:09:37.336 "zoned": false, 00:09:37.336 "supported_io_types": { 00:09:37.336 "read": true, 00:09:37.336 "write": true, 00:09:37.336 "unmap": true, 00:09:37.336 "write_zeroes": true, 00:09:37.336 "flush": true, 00:09:37.336 "reset": true, 00:09:37.336 "compare": false, 00:09:37.336 "compare_and_write": false, 00:09:37.336 "abort": true, 00:09:37.336 "nvme_admin": false, 00:09:37.336 "nvme_io": false 00:09:37.336 }, 00:09:37.336 "memory_domains": [ 00:09:37.336 { 00:09:37.336 "dma_device_id": "system", 00:09:37.336 "dma_device_type": 1 00:09:37.336 }, 00:09:37.336 { 00:09:37.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.336 "dma_device_type": 2 00:09:37.336 } 00:09:37.336 ], 00:09:37.336 "driver_specific": {} 00:09:37.336 } 00:09:37.336 ]' 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.336 [2024-07-21 11:51:36.163850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:37.336 [2024-07-21 11:51:36.164055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.336 [2024-07-21 11:51:36.164186] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:09:37.336 [2024-07-21 11:51:36.164339] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.336 [2024-07-21 11:51:36.166837] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.336 [2024-07-21 11:51:36.167033] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:37.336 Passthru0 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:37.336 { 00:09:37.336 "name": "Malloc2", 00:09:37.336 "aliases": [ 00:09:37.336 "36ba78d0-3ea1-4348-bef6-37bb87639ac2" 00:09:37.336 ], 00:09:37.336 "product_name": "Malloc disk", 00:09:37.336 "block_size": 512, 00:09:37.336 "num_blocks": 16384, 00:09:37.336 "uuid": "36ba78d0-3ea1-4348-bef6-37bb87639ac2", 00:09:37.336 "assigned_rate_limits": { 00:09:37.336 "rw_ios_per_sec": 0, 00:09:37.336 "rw_mbytes_per_sec": 0, 00:09:37.336 "r_mbytes_per_sec": 0, 00:09:37.336 "w_mbytes_per_sec": 0 00:09:37.336 }, 00:09:37.336 "claimed": true, 00:09:37.336 "claim_type": "exclusive_write", 00:09:37.336 "zoned": false, 00:09:37.336 "supported_io_types": { 00:09:37.336 "read": true, 00:09:37.336 "write": true, 00:09:37.336 "unmap": true, 00:09:37.336 "write_zeroes": true, 00:09:37.336 "flush": true, 00:09:37.336 "reset": true, 00:09:37.336 "compare": false, 00:09:37.336 "compare_and_write": false, 00:09:37.336 "abort": true, 00:09:37.336 "nvme_admin": false, 00:09:37.336 "nvme_io": false 00:09:37.336 }, 00:09:37.336 "memory_domains": [ 00:09:37.336 { 00:09:37.336 "dma_device_id": "system", 00:09:37.336 "dma_device_type": 1 00:09:37.336 }, 00:09:37.336 { 00:09:37.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.336 "dma_device_type": 2 00:09:37.336 } 00:09:37.336 ], 00:09:37.336 "driver_specific": {} 00:09:37.336 }, 00:09:37.336 { 00:09:37.336 "name": "Passthru0", 00:09:37.336 "aliases": [ 00:09:37.336 "ce9cd68e-9fa0-5a8e-9544-8b914b256208" 00:09:37.336 ], 00:09:37.336 "product_name": "passthru", 00:09:37.336 "block_size": 512, 00:09:37.336 "num_blocks": 16384, 00:09:37.336 "uuid": "ce9cd68e-9fa0-5a8e-9544-8b914b256208", 00:09:37.336 "assigned_rate_limits": { 00:09:37.336 "rw_ios_per_sec": 0, 00:09:37.336 "rw_mbytes_per_sec": 0, 00:09:37.336 "r_mbytes_per_sec": 0, 00:09:37.336 "w_mbytes_per_sec": 0 00:09:37.336 }, 00:09:37.336 "claimed": false, 00:09:37.336 "zoned": false, 00:09:37.336 "supported_io_types": { 00:09:37.336 "read": true, 00:09:37.336 "write": true, 00:09:37.336 "unmap": true, 00:09:37.336 "write_zeroes": true, 00:09:37.336 "flush": true, 00:09:37.336 "reset": true, 00:09:37.336 "compare": false, 00:09:37.336 "compare_and_write": false, 00:09:37.336 "abort": true, 00:09:37.336 "nvme_admin": false, 00:09:37.336 "nvme_io": false 00:09:37.336 }, 00:09:37.336 "memory_domains": [ 00:09:37.336 { 00:09:37.336 "dma_device_id": "system", 00:09:37.336 "dma_device_type": 1 00:09:37.336 }, 00:09:37.336 { 00:09:37.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.336 "dma_device_type": 2 00:09:37.336 } 00:09:37.336 ], 00:09:37.336 "driver_specific": { 00:09:37.336 "passthru": { 00:09:37.336 "name": "Passthru0", 00:09:37.336 "base_bdev_name": "Malloc2" 00:09:37.336 } 00:09:37.336 } 00:09:37.336 } 00:09:37.336 ]' 00:09:37.336 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:37.593 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:37.594 00:09:37.594 real 0m0.298s 00:09:37.594 user 0m0.218s 00:09:37.594 sys 0m0.018s 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:37.594 11:51:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:37.594 ************************************ 00:09:37.594 END TEST rpc_daemon_integrity 00:09:37.594 ************************************ 00:09:37.594 11:51:36 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:37.594 11:51:36 rpc -- rpc/rpc.sh@84 -- # killprocess 123442 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@946 -- # '[' -z 123442 ']' 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@950 -- # kill -0 123442 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@951 -- # uname 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123442 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123442' 00:09:37.594 killing process with pid 123442 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@965 -- # kill 123442 00:09:37.594 11:51:36 rpc -- common/autotest_common.sh@970 -- # wait 123442 00:09:38.159 00:09:38.159 real 0m2.825s 00:09:38.159 user 0m3.669s 00:09:38.159 sys 0m0.630s 00:09:38.159 11:51:36 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:38.159 11:51:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.159 ************************************ 00:09:38.159 END TEST rpc 00:09:38.159 ************************************ 00:09:38.159 11:51:36 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:38.159 11:51:36 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:38.159 11:51:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.159 11:51:36 -- common/autotest_common.sh@10 -- # set +x 00:09:38.159 ************************************ 00:09:38.159 START TEST skip_rpc 00:09:38.159 ************************************ 
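The skip_rpc case that follows starts the target with --no-rpc-server and then asserts that an RPC call fails cleanly instead of hanging. A sketch of that check, assuming the build tree at /home/vagrant/spdk_repo/spdk used throughout this run:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  scripts/rpc.py spdk_get_version && echo 'unexpected: RPC succeeded with no RPC server running'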
00:09:38.159 11:51:36 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:38.159 * Looking for test storage... 00:09:38.159 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:38.159 11:51:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:38.159 11:51:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:38.159 11:51:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:38.159 11:51:36 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:38.159 11:51:36 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.159 11:51:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:38.159 ************************************ 00:09:38.159 START TEST skip_rpc 00:09:38.159 ************************************ 00:09:38.159 11:51:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:09:38.159 11:51:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=123653 00:09:38.159 11:51:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:38.159 11:51:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:38.159 11:51:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:38.430 [2024-07-21 11:51:37.041710] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:38.430 [2024-07-21 11:51:37.041976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123653 ] 00:09:38.430 [2024-07-21 11:51:37.210372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.430 [2024-07-21 11:51:37.277637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:43.692 11:51:41 skip_rpc.skip_rpc 
-- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 123653 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 123653 ']' 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 123653 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123653 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:43.692 killing process with pid 123653 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123653' 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 123653 00:09:43.692 11:51:41 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 123653 00:09:43.692 00:09:43.692 real 0m5.454s 00:09:43.692 user 0m4.991s 00:09:43.692 sys 0m0.375s 00:09:43.692 11:51:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:43.692 11:51:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.692 ************************************ 00:09:43.692 END TEST skip_rpc 00:09:43.692 ************************************ 00:09:43.692 11:51:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:43.692 11:51:42 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:43.692 11:51:42 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:43.692 11:51:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:43.692 ************************************ 00:09:43.692 START TEST skip_rpc_with_json 00:09:43.692 ************************************ 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=123756 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 123756 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 123756 ']' 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:43.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
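The skip_rpc_with_json case whose startup is logged below creates a TCP transport, captures the live configuration with save_config, then relaunches the target with --json and greps its log for 'TCP Transport Init' to prove the saved configuration was replayed. A sketch of that roundtrip, using the same paths as this run (CONFIG_PATH is test/rpc/config.json):

  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py save_config > test/rpc/config.json
  # kill the first target, then replay the saved config:
  build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json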
00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:43.692 11:51:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:43.692 [2024-07-21 11:51:42.554264] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:43.692 [2024-07-21 11:51:42.554501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123756 ] 00:09:43.950 [2024-07-21 11:51:42.722438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.950 [2024-07-21 11:51:42.803476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:44.882 [2024-07-21 11:51:43.515637] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:44.882 request: 00:09:44.882 { 00:09:44.882 "trtype": "tcp", 00:09:44.882 "method": "nvmf_get_transports", 00:09:44.882 "req_id": 1 00:09:44.882 } 00:09:44.882 Got JSON-RPC error response 00:09:44.882 response: 00:09:44.882 { 00:09:44.882 "code": -19, 00:09:44.882 "message": "No such device" 00:09:44.882 } 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:44.882 [2024-07-21 11:51:43.523749] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.882 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:44.883 { 00:09:44.883 "subsystems": [ 00:09:44.883 { 00:09:44.883 "subsystem": "scheduler", 00:09:44.883 "config": [ 00:09:44.883 { 00:09:44.883 "method": "framework_set_scheduler", 00:09:44.883 "params": { 00:09:44.883 "name": "static" 00:09:44.883 } 00:09:44.883 } 00:09:44.883 ] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "vmd", 00:09:44.883 "config": [] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "sock", 00:09:44.883 "config": [ 00:09:44.883 { 00:09:44.883 "method": "sock_set_default_impl", 00:09:44.883 "params": { 00:09:44.883 "impl_name": "posix" 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 
00:09:44.883 "method": "sock_impl_set_options", 00:09:44.883 "params": { 00:09:44.883 "impl_name": "ssl", 00:09:44.883 "recv_buf_size": 4096, 00:09:44.883 "send_buf_size": 4096, 00:09:44.883 "enable_recv_pipe": true, 00:09:44.883 "enable_quickack": false, 00:09:44.883 "enable_placement_id": 0, 00:09:44.883 "enable_zerocopy_send_server": true, 00:09:44.883 "enable_zerocopy_send_client": false, 00:09:44.883 "zerocopy_threshold": 0, 00:09:44.883 "tls_version": 0, 00:09:44.883 "enable_ktls": false 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "method": "sock_impl_set_options", 00:09:44.883 "params": { 00:09:44.883 "impl_name": "posix", 00:09:44.883 "recv_buf_size": 2097152, 00:09:44.883 "send_buf_size": 2097152, 00:09:44.883 "enable_recv_pipe": true, 00:09:44.883 "enable_quickack": false, 00:09:44.883 "enable_placement_id": 0, 00:09:44.883 "enable_zerocopy_send_server": true, 00:09:44.883 "enable_zerocopy_send_client": false, 00:09:44.883 "zerocopy_threshold": 0, 00:09:44.883 "tls_version": 0, 00:09:44.883 "enable_ktls": false 00:09:44.883 } 00:09:44.883 } 00:09:44.883 ] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "iobuf", 00:09:44.883 "config": [ 00:09:44.883 { 00:09:44.883 "method": "iobuf_set_options", 00:09:44.883 "params": { 00:09:44.883 "small_pool_count": 8192, 00:09:44.883 "large_pool_count": 1024, 00:09:44.883 "small_bufsize": 8192, 00:09:44.883 "large_bufsize": 135168 00:09:44.883 } 00:09:44.883 } 00:09:44.883 ] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "keyring", 00:09:44.883 "config": [] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "accel", 00:09:44.883 "config": [ 00:09:44.883 { 00:09:44.883 "method": "accel_set_options", 00:09:44.883 "params": { 00:09:44.883 "small_cache_size": 128, 00:09:44.883 "large_cache_size": 16, 00:09:44.883 "task_count": 2048, 00:09:44.883 "sequence_count": 2048, 00:09:44.883 "buf_count": 2048 00:09:44.883 } 00:09:44.883 } 00:09:44.883 ] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "bdev", 00:09:44.883 "config": [ 00:09:44.883 { 00:09:44.883 "method": "bdev_set_options", 00:09:44.883 "params": { 00:09:44.883 "bdev_io_pool_size": 65535, 00:09:44.883 "bdev_io_cache_size": 256, 00:09:44.883 "bdev_auto_examine": true, 00:09:44.883 "iobuf_small_cache_size": 128, 00:09:44.883 "iobuf_large_cache_size": 16 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "method": "bdev_raid_set_options", 00:09:44.883 "params": { 00:09:44.883 "process_window_size_kb": 1024 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "method": "bdev_nvme_set_options", 00:09:44.883 "params": { 00:09:44.883 "action_on_timeout": "none", 00:09:44.883 "timeout_us": 0, 00:09:44.883 "timeout_admin_us": 0, 00:09:44.883 "keep_alive_timeout_ms": 10000, 00:09:44.883 "arbitration_burst": 0, 00:09:44.883 "low_priority_weight": 0, 00:09:44.883 "medium_priority_weight": 0, 00:09:44.883 "high_priority_weight": 0, 00:09:44.883 "nvme_adminq_poll_period_us": 10000, 00:09:44.883 "nvme_ioq_poll_period_us": 0, 00:09:44.883 "io_queue_requests": 0, 00:09:44.883 "delay_cmd_submit": true, 00:09:44.883 "transport_retry_count": 4, 00:09:44.883 "bdev_retry_count": 3, 00:09:44.883 "transport_ack_timeout": 0, 00:09:44.883 "ctrlr_loss_timeout_sec": 0, 00:09:44.883 "reconnect_delay_sec": 0, 00:09:44.883 "fast_io_fail_timeout_sec": 0, 00:09:44.883 "disable_auto_failback": false, 00:09:44.883 "generate_uuids": false, 00:09:44.883 "transport_tos": 0, 00:09:44.883 "nvme_error_stat": false, 00:09:44.883 "rdma_srq_size": 0, 00:09:44.883 
"io_path_stat": false, 00:09:44.883 "allow_accel_sequence": false, 00:09:44.883 "rdma_max_cq_size": 0, 00:09:44.883 "rdma_cm_event_timeout_ms": 0, 00:09:44.883 "dhchap_digests": [ 00:09:44.883 "sha256", 00:09:44.883 "sha384", 00:09:44.883 "sha512" 00:09:44.883 ], 00:09:44.883 "dhchap_dhgroups": [ 00:09:44.883 "null", 00:09:44.883 "ffdhe2048", 00:09:44.883 "ffdhe3072", 00:09:44.883 "ffdhe4096", 00:09:44.883 "ffdhe6144", 00:09:44.883 "ffdhe8192" 00:09:44.883 ] 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "method": "bdev_nvme_set_hotplug", 00:09:44.883 "params": { 00:09:44.883 "period_us": 100000, 00:09:44.883 "enable": false 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "method": "bdev_iscsi_set_options", 00:09:44.883 "params": { 00:09:44.883 "timeout_sec": 30 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "method": "bdev_wait_for_examine" 00:09:44.883 } 00:09:44.883 ] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "nvmf", 00:09:44.883 "config": [ 00:09:44.883 { 00:09:44.883 "method": "nvmf_set_config", 00:09:44.883 "params": { 00:09:44.883 "discovery_filter": "match_any", 00:09:44.883 "admin_cmd_passthru": { 00:09:44.883 "identify_ctrlr": false 00:09:44.883 } 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "method": "nvmf_set_max_subsystems", 00:09:44.883 "params": { 00:09:44.883 "max_subsystems": 1024 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "method": "nvmf_set_crdt", 00:09:44.883 "params": { 00:09:44.883 "crdt1": 0, 00:09:44.883 "crdt2": 0, 00:09:44.883 "crdt3": 0 00:09:44.883 } 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "method": "nvmf_create_transport", 00:09:44.883 "params": { 00:09:44.883 "trtype": "TCP", 00:09:44.883 "max_queue_depth": 128, 00:09:44.883 "max_io_qpairs_per_ctrlr": 127, 00:09:44.883 "in_capsule_data_size": 4096, 00:09:44.883 "max_io_size": 131072, 00:09:44.883 "io_unit_size": 131072, 00:09:44.883 "max_aq_depth": 128, 00:09:44.883 "num_shared_buffers": 511, 00:09:44.883 "buf_cache_size": 4294967295, 00:09:44.883 "dif_insert_or_strip": false, 00:09:44.883 "zcopy": false, 00:09:44.883 "c2h_success": true, 00:09:44.883 "sock_priority": 0, 00:09:44.883 "abort_timeout_sec": 1, 00:09:44.883 "ack_timeout": 0, 00:09:44.883 "data_wr_pool_size": 0 00:09:44.883 } 00:09:44.883 } 00:09:44.883 ] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "nbd", 00:09:44.883 "config": [] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "vhost_blk", 00:09:44.883 "config": [] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "scsi", 00:09:44.883 "config": null 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "iscsi", 00:09:44.883 "config": [ 00:09:44.883 { 00:09:44.883 "method": "iscsi_set_options", 00:09:44.883 "params": { 00:09:44.883 "node_base": "iqn.2016-06.io.spdk", 00:09:44.883 "max_sessions": 128, 00:09:44.883 "max_connections_per_session": 2, 00:09:44.883 "max_queue_depth": 64, 00:09:44.883 "default_time2wait": 2, 00:09:44.883 "default_time2retain": 20, 00:09:44.883 "first_burst_length": 8192, 00:09:44.883 "immediate_data": true, 00:09:44.883 "allow_duplicated_isid": false, 00:09:44.883 "error_recovery_level": 0, 00:09:44.883 "nop_timeout": 60, 00:09:44.883 "nop_in_interval": 30, 00:09:44.883 "disable_chap": false, 00:09:44.883 "require_chap": false, 00:09:44.883 "mutual_chap": false, 00:09:44.883 "chap_group": 0, 00:09:44.883 "max_large_datain_per_connection": 64, 00:09:44.883 "max_r2t_per_connection": 4, 00:09:44.883 "pdu_pool_size": 36864, 00:09:44.883 
"immediate_data_pool_size": 16384, 00:09:44.883 "data_out_pool_size": 2048 00:09:44.883 } 00:09:44.883 } 00:09:44.883 ] 00:09:44.883 }, 00:09:44.883 { 00:09:44.883 "subsystem": "vhost_scsi", 00:09:44.883 "config": [] 00:09:44.883 } 00:09:44.883 ] 00:09:44.883 } 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 123756 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 123756 ']' 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 123756 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:44.883 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123756 00:09:44.884 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:44.884 killing process with pid 123756 00:09:44.884 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:44.884 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123756' 00:09:44.884 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 123756 00:09:44.884 11:51:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 123756 00:09:45.479 11:51:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=123789 00:09:45.479 11:51:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:45.479 11:51:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 123789 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 123789 ']' 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 123789 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123789 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:50.760 killing process with pid 123789 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123789' 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 123789 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 123789 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:50.760 00:09:50.760 real 0m7.114s 00:09:50.760 user 0m6.678s 
00:09:50.760 sys 0m0.774s 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:50.760 11:51:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:50.760 ************************************ 00:09:50.760 END TEST skip_rpc_with_json 00:09:50.760 ************************************ 00:09:51.018 11:51:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:51.018 11:51:49 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:51.018 11:51:49 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:51.018 11:51:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.018 ************************************ 00:09:51.018 START TEST skip_rpc_with_delay 00:09:51.018 ************************************ 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:51.018 [2024-07-21 11:51:49.719881] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
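The *ERROR* above is the expected outcome: skip_rpc_with_delay launches the target with both --no-rpc-server and --wait-for-rpc, a combination app.c rejects because no RPC server will ever arrive to release the wait. The offending invocation, exactly as attempted above:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  # expected: Cannot use '--wait-for-rpc' if no RPC server is going to be started.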
00:09:51.018 [2024-07-21 11:51:49.720134] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:51.018 00:09:51.018 real 0m0.124s 00:09:51.018 user 0m0.075s 00:09:51.018 sys 0m0.050s 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:51.018 11:51:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:51.018 ************************************ 00:09:51.019 END TEST skip_rpc_with_delay 00:09:51.019 ************************************ 00:09:51.019 11:51:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:51.019 11:51:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:51.019 11:51:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:51.019 11:51:49 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:51.019 11:51:49 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:51.019 11:51:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:51.019 ************************************ 00:09:51.019 START TEST exit_on_failed_rpc_init 00:09:51.019 ************************************ 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=123911 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 123911 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 123911 ']' 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:51.019 11:51:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:51.276 [2024-07-21 11:51:49.894068] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:09:51.276 [2024-07-21 11:51:49.894262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123911 ] 00:09:51.276 [2024-07-21 11:51:50.046722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.276 [2024-07-21 11:51:50.125890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:52.209 11:51:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:52.209 [2024-07-21 11:51:50.955208] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:52.209 [2024-07-21 11:51:50.955452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123936 ] 00:09:52.467 [2024-07-21 11:51:51.122815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.467 [2024-07-21 11:51:51.195663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.467 [2024-07-21 11:51:51.195846] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
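The socket clash above is deliberate: exit_on_failed_rpc_init starts a second spdk_tgt (-m 0x2, pid 123936) while the first (pid 123911) still owns /var/tmp/spdk.sock, and checks that the second instance exits non-zero rather than hanging. Running two targets side by side for real requires a distinct RPC socket per instance, e.g. (a sketch; the second socket path is illustrative only):

  build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk.sock &
  build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &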
00:09:52.467 [2024-07-21 11:51:51.195899] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:52.467 [2024-07-21 11:51:51.195944] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 123911 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 123911 ']' 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 123911 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:52.467 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 123911 00:09:52.725 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:52.725 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:52.725 killing process with pid 123911 00:09:52.725 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 123911' 00:09:52.725 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 123911 00:09:52.725 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 123911 00:09:52.983 00:09:52.983 real 0m1.949s 00:09:52.983 user 0m2.227s 00:09:52.983 sys 0m0.515s 00:09:52.983 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:52.983 11:51:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:52.983 ************************************ 00:09:52.983 END TEST exit_on_failed_rpc_init 00:09:52.983 ************************************ 00:09:52.983 11:51:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:52.983 00:09:52.983 real 0m14.952s 00:09:52.983 user 0m14.145s 00:09:52.983 sys 0m1.838s 00:09:52.983 11:51:51 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:52.983 11:51:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:52.983 ************************************ 00:09:52.983 END TEST skip_rpc 00:09:52.983 ************************************ 00:09:53.241 11:51:51 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:53.241 11:51:51 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:53.241 11:51:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:53.241 11:51:51 -- common/autotest_common.sh@10 -- # set +x 
00:09:53.241 ************************************ 00:09:53.241 START TEST rpc_client 00:09:53.241 ************************************ 00:09:53.241 11:51:51 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:53.241 * Looking for test storage... 00:09:53.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:53.241 11:51:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:53.241 OK 00:09:53.241 11:51:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:53.241 00:09:53.241 real 0m0.136s 00:09:53.241 user 0m0.087s 00:09:53.241 sys 0m0.062s 00:09:53.241 11:51:52 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:53.241 ************************************ 00:09:53.241 END TEST rpc_client 00:09:53.241 ************************************ 00:09:53.241 11:51:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:53.241 11:51:52 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:53.241 11:51:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:53.241 11:51:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:53.241 11:51:52 -- common/autotest_common.sh@10 -- # set +x 00:09:53.241 ************************************ 00:09:53.241 START TEST json_config 00:09:53.241 ************************************ 00:09:53.241 11:51:52 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9feb252d-b89f-4722-9a8b-4c6459ffd1d0 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9feb252d-b89f-4722-9a8b-4c6459ffd1d0 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.500 11:51:52 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.500 11:51:52 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.500 11:51:52 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.500 11:51:52 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:53.500 11:51:52 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:53.500 11:51:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:53.500 11:51:52 json_config -- paths/export.sh@5 -- # export PATH 00:09:53.500 11:51:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@47 -- # : 0 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.500 11:51:52 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:53.500 11:51:52 
json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:53.500 INFO: JSON configuration test init 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:09:53.500 11:51:52 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:53.500 11:51:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:09:53.500 11:51:52 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:53.500 11:51:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.500 11:51:52 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:09:53.500 11:51:52 json_config -- json_config/common.sh@9 -- # local app=target 00:09:53.500 11:51:52 json_config -- json_config/common.sh@10 -- # shift 00:09:53.500 11:51:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:53.500 11:51:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:53.500 11:51:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:53.500 11:51:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:53.500 11:51:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:53.500 11:51:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=124066 00:09:53.500 11:51:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:53.500 Waiting for target to run... 00:09:53.500 11:51:52 json_config -- json_config/common.sh@25 -- # waitforlisten 124066 /var/tmp/spdk_tgt.sock 00:09:53.500 11:51:52 json_config -- common/autotest_common.sh@827 -- # '[' -z 124066 ']' 00:09:53.500 11:51:52 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:53.500 11:51:52 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:53.500 11:51:52 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:53.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:09:53.501 11:51:52 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:53.501 11:51:52 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:53.501 11:51:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:53.501 [2024-07-21 11:51:52.214749] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:53.501 [2024-07-21 11:51:52.215242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124066 ] 00:09:54.066 [2024-07-21 11:51:52.655614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.066 [2024-07-21 11:51:52.718245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.631 11:51:53 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:54.631 11:51:53 json_config -- common/autotest_common.sh@860 -- # return 0 00:09:54.631 00:09:54.631 11:51:53 json_config -- json_config/common.sh@26 -- # echo '' 00:09:54.632 11:51:53 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:09:54.632 11:51:53 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:09:54.632 11:51:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:54.632 11:51:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.632 11:51:53 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:09:54.632 11:51:53 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:09:54.632 11:51:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:54.632 11:51:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.632 11:51:53 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:54.632 11:51:53 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:09:54.632 11:51:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:54.889 11:51:53 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:09:54.889 11:51:53 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:54.889 11:51:53 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:54.889 11:51:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:54.889 11:51:53 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:54.889 11:51:53 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:54.889 11:51:53 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:54.889 11:51:53 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:54.889 11:51:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:54.889 11:51:53 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:55.145 11:51:53 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:55.145 11:51:53 json_config -- json_config/json_config.sh@48 -- # local get_types 
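Condensed, the tgt_check_notification_types step traced here asks the freshly started target which notification types are enabled and compares them against the expected pair bdev_register/bdev_unregister before the comparison completes just below. A minimal sketch of that check, using the same rpc.py path as the trace; not the literal helper source:

  enabled_types=(bdev_register bdev_unregister)
  get_types=($(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
      notify_get_types | jq -r '.[]'))
  [[ "${get_types[*]}" == "${enabled_types[*]}" ]] || ret=1   # any mismatch fails the test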
00:09:55.145 11:51:53 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:09:55.145 11:51:53 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:09:55.145 11:51:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.145 11:51:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@55 -- # return 0 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:09:55.402 11:51:54 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:55.402 11:51:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:55.402 11:51:54 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:55.402 11:51:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:55.658 11:51:54 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:55.658 11:51:54 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:55.658 11:51:54 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:55.658 11:51:54 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:09:55.658 11:51:54 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:09:55.659 11:51:54 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:09:55.659 11:51:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:09:55.915 Nvme0n1p0 Nvme0n1p1 00:09:55.915 11:51:54 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:09:55.915 11:51:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:09:55.915 [2024-07-21 11:51:54.771919] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:55.915 [2024-07-21 11:51:54.772345] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:09:55.915 00:09:56.172 11:51:54 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:09:56.172 11:51:54 
json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:09:56.172 Malloc3 00:09:56.172 11:51:54 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:56.172 11:51:54 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:09:56.429 [2024-07-21 11:51:55.183963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:56.429 [2024-07-21 11:51:55.184322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.429 [2024-07-21 11:51:55.184422] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:56.429 [2024-07-21 11:51:55.184746] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.429 [2024-07-21 11:51:55.187307] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.429 [2024-07-21 11:51:55.187484] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:09:56.429 PTBdevFromMalloc3 00:09:56.429 11:51:55 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:09:56.429 11:51:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:09:56.686 Null0 00:09:56.686 11:51:55 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:09:56.686 11:51:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:09:56.943 Malloc0 00:09:56.943 11:51:55 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:09:56.943 11:51:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:09:57.201 Malloc1 00:09:57.201 11:51:55 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:09:57.201 11:51:55 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:09:57.457 102400+0 records in 00:09:57.457 102400+0 records out 00:09:57.458 104857600 bytes (105 MB, 100 MiB) copied, 0.268996 s, 390 MB/s 00:09:57.458 11:51:56 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:09:57.458 11:51:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:09:57.714 aio_disk 00:09:57.714 11:51:56 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:09:57.714 11:51:56 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:57.714 11:51:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:09:57.977 72aa4fa4-5453-464b-8c9f-c8b4980696e9 00:09:57.977 11:51:56 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:09:57.977 11:51:56 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:09:57.977 11:51:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:09:58.262 11:51:56 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:09:58.262 11:51:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:09:58.262 11:51:57 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:58.262 11:51:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:09:58.524 11:51:57 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:58.524 11:51:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:130a5565-dcb5-4fe0-8bf7-7c1cbaae217e bdev_register:b32ae944-2cd2-44e7-9ae8-1dbfd8e7faf8 bdev_register:502d593b-eb3d-4307-9a5d-9581db028641 bdev_register:ebc4e77e-0b71-4219-8164-26db3ea77556 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:130a5565-dcb5-4fe0-8bf7-7c1cbaae217e bdev_register:b32ae944-2cd2-44e7-9ae8-1dbfd8e7faf8 bdev_register:502d593b-eb3d-4307-9a5d-9581db028641 bdev_register:ebc4e77e-0b71-4219-8164-26db3ea77556 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@71 -- # sort 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:09:58.782 
11:51:57 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@72 -- # sort 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:09:58.782 11:51:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:09:58.782 11:51:57 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.039 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 
00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:09:59.040 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:130a5565-dcb5-4fe0-8bf7-7c1cbaae217e 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:b32ae944-2cd2-44e7-9ae8-1dbfd8e7faf8 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:502d593b-eb3d-4307-9a5d-9581db028641 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:ebc4e77e-0b71-4219-8164-26db3ea77556 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:130a5565-dcb5-4fe0-8bf7-7c1cbaae217e bdev_register:502d593b-eb3d-4307-9a5d-9581db028641 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b32ae944-2cd2-44e7-9ae8-1dbfd8e7faf8 bdev_register:ebc4e77e-0b71-4219-8164-26db3ea77556 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\3\0\a\5\5\6\5\-\d\c\b\5\-\4\f\e\0\-\8\b\f\7\-\7\c\1\c\b\a\a\e\2\1\7\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\0\2\d\5\9\3\b\-\e\b\3\d\-\4\3\0\7\-\9\a\5\d\-\9\5\8\1\d\b\0\2\8\6\4\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\3\2\a\e\9\4\4\-\2\c\d\2\-\4\4\e\7\-\9\a\e\8\-\1\d\b\f\d\8\e\7\f\a\f\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\b\c\4\e\7\7\e\-\0\b\7\1\-\4\2\1\9\-\8\1\6\4\-\2\6\d\b\3\e\a\7\7\5\5\6 ]] 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@86 -- # cat 00:09:59.297 11:51:57 
json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:130a5565-dcb5-4fe0-8bf7-7c1cbaae217e bdev_register:502d593b-eb3d-4307-9a5d-9581db028641 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:b32ae944-2cd2-44e7-9ae8-1dbfd8e7faf8 bdev_register:ebc4e77e-0b71-4219-8164-26db3ea77556 00:09:59.297 Expected events matched: 00:09:59.297 bdev_register:130a5565-dcb5-4fe0-8bf7-7c1cbaae217e 00:09:59.297 bdev_register:502d593b-eb3d-4307-9a5d-9581db028641 00:09:59.297 bdev_register:Malloc0 00:09:59.297 bdev_register:Malloc0p0 00:09:59.297 bdev_register:Malloc0p1 00:09:59.297 bdev_register:Malloc0p2 00:09:59.297 bdev_register:Malloc1 00:09:59.297 bdev_register:Malloc3 00:09:59.297 bdev_register:Null0 00:09:59.297 bdev_register:Nvme0n1 00:09:59.297 bdev_register:Nvme0n1p0 00:09:59.297 bdev_register:Nvme0n1p1 00:09:59.297 bdev_register:PTBdevFromMalloc3 00:09:59.297 bdev_register:aio_disk 00:09:59.297 bdev_register:b32ae944-2cd2-44e7-9ae8-1dbfd8e7faf8 00:09:59.297 bdev_register:ebc4e77e-0b71-4219-8164-26db3ea77556 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:09:59.297 11:51:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.297 11:51:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:09:59.297 11:51:57 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:09:59.297 11:51:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.297 11:51:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.297 11:51:58 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:09:59.297 11:51:58 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:59.297 11:51:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:59.555 MallocBdevForConfigChangeCheck 00:09:59.555 11:51:58 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:09:59.555 11:51:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.555 11:51:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:59.555 11:51:58 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:09:59.555 11:51:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:00.120 11:51:58 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:10:00.120 INFO: shutting down applications... 
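The long sorted lists just above are the heart of tgt_check_notifications: every bdev created earlier is expected to have produced a bdev_register event, so the expected list and the list recorded by notify_get_notifications are sorted and compared wholesale. Roughly, as a sketch that assumes a get_notifications helper printing "type:ctx:id" lines as seen in the trace:

  events_to_check=($(printf '%s\n' "${expected_notifications[@]}" | sort))
  recorded_events=($(get_notifications | sort))
  [[ "${events_to_check[*]}" == "${recorded_events[*]}" ]] || return 1
  printf '%s\n' 'Expected events matched:' "${events_to_check[@]}"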
00:10:00.120 11:51:58 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:10:00.120 11:51:58 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:10:00.120 11:51:58 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:10:00.120 11:51:58 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:10:00.120 [2024-07-21 11:51:58.840829] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:10:00.378 Calling clear_vhost_scsi_subsystem 00:10:00.378 Calling clear_iscsi_subsystem 00:10:00.378 Calling clear_vhost_blk_subsystem 00:10:00.378 Calling clear_nbd_subsystem 00:10:00.378 Calling clear_nvmf_subsystem 00:10:00.378 Calling clear_bdev_subsystem 00:10:00.378 11:51:58 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:10:00.378 11:51:59 json_config -- json_config/json_config.sh@343 -- # count=100 00:10:00.378 11:51:59 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:10:00.378 11:51:59 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:00.378 11:51:59 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:10:00.378 11:51:59 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:10:00.636 11:51:59 json_config -- json_config/json_config.sh@345 -- # break 00:10:00.636 11:51:59 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:10:00.636 11:51:59 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:10:00.636 11:51:59 json_config -- json_config/common.sh@31 -- # local app=target 00:10:00.636 11:51:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:00.636 11:51:59 json_config -- json_config/common.sh@35 -- # [[ -n 124066 ]] 00:10:00.636 11:51:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 124066 00:10:00.636 11:51:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:00.636 11:51:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:00.636 11:51:59 json_config -- json_config/common.sh@41 -- # kill -0 124066 00:10:00.636 11:51:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:10:01.202 11:51:59 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:10:01.202 11:51:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:01.202 11:51:59 json_config -- json_config/common.sh@41 -- # kill -0 124066 00:10:01.202 11:51:59 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:01.202 11:51:59 json_config -- json_config/common.sh@43 -- # break 00:10:01.202 11:51:59 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:01.202 SPDK target shutdown done 00:10:01.202 11:51:59 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:01.202 INFO: relaunching applications... 00:10:01.202 11:51:59 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
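The shutdown sequence traced above is a plain poll loop: SIGINT the target, then check up to 30 times, half a second apart, whether the pid is gone. In sketch form, mirroring the kill -SIGINT / kill -0 / sleep 0.5 calls visible in the trace:

  kill -SIGINT "$spdk_pid"                       # ask the target (pid 124066 here) to stop
  for ((i = 0; i < 30; i++)); do
      kill -0 "$spdk_pid" 2>/dev/null || break   # process gone: shutdown finished
      sleep 0.5
  done
  echo 'SPDK target shutdown done'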
00:10:01.202 11:51:59 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:01.202 11:51:59 json_config -- json_config/common.sh@9 -- # local app=target 00:10:01.202 11:51:59 json_config -- json_config/common.sh@10 -- # shift 00:10:01.202 11:51:59 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:01.202 11:51:59 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:01.202 11:51:59 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:10:01.202 11:51:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:01.202 11:51:59 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:01.202 11:51:59 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=124317 00:10:01.202 Waiting for target to run... 00:10:01.202 11:51:59 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:01.202 11:51:59 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:01.202 11:51:59 json_config -- json_config/common.sh@25 -- # waitforlisten 124317 /var/tmp/spdk_tgt.sock 00:10:01.202 11:51:59 json_config -- common/autotest_common.sh@827 -- # '[' -z 124317 ']' 00:10:01.202 11:51:59 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:01.202 11:51:59 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:01.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:10:01.202 11:51:59 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:01.202 11:51:59 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:01.202 11:51:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:01.202 [2024-07-21 11:51:59.935461] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:01.202 [2024-07-21 11:51:59.935733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124317 ] 00:10:01.767 [2024-07-21 11:52:00.402529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.767 [2024-07-21 11:52:00.468366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.767 [2024-07-21 11:52:00.631141] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:01.767 [2024-07-21 11:52:00.631496] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:10:02.024 [2024-07-21 11:52:00.639113] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:02.024 [2024-07-21 11:52:00.639345] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:10:02.024 [2024-07-21 11:52:00.647145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:02.024 [2024-07-21 11:52:00.647416] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:10:02.024 [2024-07-21 11:52:00.647562] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:10:02.024 [2024-07-21 11:52:00.732603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:10:02.024 [2024-07-21 11:52:00.733069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:02.024 [2024-07-21 11:52:00.733239] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:02.024 [2024-07-21 11:52:00.733416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:02.024 [2024-07-21 11:52:00.734179] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:02.024 [2024-07-21 11:52:00.734349] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:10:02.281 11:52:00 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:02.281 11:52:00 json_config -- common/autotest_common.sh@860 -- # return 0 00:10:02.281 00:10:02.281 11:52:00 json_config -- json_config/common.sh@26 -- # echo '' 00:10:02.281 11:52:00 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:10:02.281 11:52:00 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:10:02.281 INFO: Checking if target configuration is the same... 00:10:02.281 11:52:00 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:02.281 11:52:00 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:10:02.281 11:52:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:02.281 + '[' 2 -ne 2 ']' 00:10:02.281 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:02.281 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:10:02.281 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:02.281 +++ basename /dev/fd/62 00:10:02.281 ++ mktemp /tmp/62.XXX 00:10:02.282 + tmp_file_1=/tmp/62.fTm 00:10:02.282 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:02.282 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:02.282 + tmp_file_2=/tmp/spdk_tgt_config.json.ddQ 00:10:02.282 + ret=0 00:10:02.282 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:02.539 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:02.539 + diff -u /tmp/62.fTm /tmp/spdk_tgt_config.json.ddQ 00:10:02.539 + echo 'INFO: JSON config files are the same' 00:10:02.539 INFO: JSON config files are the same 00:10:02.539 + rm /tmp/62.fTm /tmp/spdk_tgt_config.json.ddQ 00:10:02.539 + exit 0 00:10:02.539 11:52:01 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:10:02.539 11:52:01 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:10:02.539 INFO: changing configuration and checking if this can be detected... 00:10:02.539 11:52:01 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:02.539 11:52:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:10:02.796 11:52:01 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:02.796 11:52:01 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:10:02.796 11:52:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:10:02.796 + '[' 2 -ne 2 ']' 00:10:02.796 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:10:02.796 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:10:02.796 + rootdir=/home/vagrant/spdk_repo/spdk 00:10:02.796 +++ basename /dev/fd/62 00:10:02.796 ++ mktemp /tmp/62.XXX 00:10:02.796 + tmp_file_1=/tmp/62.sAM 00:10:02.796 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:02.796 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:10:02.796 + tmp_file_2=/tmp/spdk_tgt_config.json.mIj 00:10:02.796 + ret=0 00:10:02.796 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:03.054 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:10:03.054 + diff -u /tmp/62.sAM /tmp/spdk_tgt_config.json.mIj 00:10:03.054 + ret=1 00:10:03.054 + echo '=== Start of file: /tmp/62.sAM ===' 00:10:03.054 + cat /tmp/62.sAM 00:10:03.054 + echo '=== End of file: /tmp/62.sAM ===' 00:10:03.054 + echo '' 00:10:03.054 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mIj ===' 00:10:03.054 + cat /tmp/spdk_tgt_config.json.mIj 00:10:03.054 + echo '=== End of file: /tmp/spdk_tgt_config.json.mIj ===' 00:10:03.054 + echo '' 00:10:03.054 + rm /tmp/62.sAM /tmp/spdk_tgt_config.json.mIj 00:10:03.312 + exit 1 00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:10:03.312 INFO: configuration change detected. 
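The '+'-prefixed lines above are json_diff.sh at work: the live configuration is dumped with save_config, both it and the reference file are normalised with config_filter.py -method sort, and the two results are diffed. An empty diff means the relaunched target reproduced the same configuration; a non-empty one, as here after MallocBdevForConfigChangeCheck was removed, exits 1 and is reported as a detected change. A condensed sketch using the same scripts; not the literal json_diff.sh source:

  live=$(mktemp /tmp/62.XXX); ref=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort > "$live"
  /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort \
      < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$ref"
  diff -u "$live" "$ref" && echo 'INFO: JSON config files are the same'
  rm "$live" "$ref"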
00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:10:03.312 11:52:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:03.312 11:52:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@317 -- # [[ -n 124317 ]] 00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:10:03.312 11:52:01 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:03.312 11:52:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:10:03.312 11:52:01 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:10:03.312 11:52:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:10:03.312 11:52:02 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:10:03.312 11:52:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:10:03.569 11:52:02 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:10:03.569 11:52:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:10:03.825 11:52:02 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:10:03.825 11:52:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:10:04.082 11:52:02 json_config -- json_config/json_config.sh@193 -- # uname -s 00:10:04.082 11:52:02 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:10:04.082 11:52:02 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:10:04.082 11:52:02 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:10:04.082 11:52:02 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.082 11:52:02 json_config -- json_config/json_config.sh@323 -- # killprocess 124317 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@946 -- # '[' -z 124317 ']' 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@950 -- # kill -0 124317 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@951 -- # uname 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124317 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:04.082 11:52:02 json_config 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124317' 00:10:04.082 killing process with pid 124317 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@965 -- # kill 124317 00:10:04.082 11:52:02 json_config -- common/autotest_common.sh@970 -- # wait 124317 00:10:04.647 11:52:03 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:10:04.647 11:52:03 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:10:04.647 11:52:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:04.647 11:52:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.647 11:52:03 json_config -- json_config/json_config.sh@328 -- # return 0 00:10:04.647 INFO: Success 00:10:04.647 11:52:03 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:10:04.647 ************************************ 00:10:04.647 END TEST json_config 00:10:04.647 ************************************ 00:10:04.647 00:10:04.647 real 0m11.236s 00:10:04.647 user 0m17.135s 00:10:04.647 sys 0m2.333s 00:10:04.647 11:52:03 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:04.647 11:52:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:10:04.647 11:52:03 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:04.647 11:52:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:04.647 11:52:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:04.647 11:52:03 -- common/autotest_common.sh@10 -- # set +x 00:10:04.647 ************************************ 00:10:04.647 START TEST json_config_extra_key 00:10:04.647 ************************************ 00:10:04.647 11:52:03 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1e62daab-8bca-4b07-9f45-eef4a1b847df 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1e62daab-8bca-4b07-9f45-eef4a1b847df 00:10:04.647 
11:52:03 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:04.647 11:52:03 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.647 11:52:03 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.647 11:52:03 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.647 11:52:03 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:04.647 11:52:03 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:04.647 11:52:03 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:04.647 11:52:03 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:10:04.647 11:52:03 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.647 11:52:03 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.647 11:52:03 json_config_extra_key -- 
nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:10:04.647 INFO: launching applications... 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:10:04.647 11:52:03 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=124483 00:10:04.647 Waiting for target to run... 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 124483 /var/tmp/spdk_tgt.sock 00:10:04.647 11:52:03 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 124483 ']' 00:10:04.647 11:52:03 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:10:04.647 11:52:03 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:10:04.648 11:52:03 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:04.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:10:04.648 11:52:03 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:10:04.648 11:52:03 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:04.648 11:52:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:10:04.648 [2024-07-21 11:52:03.497040] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:04.648 [2024-07-21 11:52:03.497237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124483 ] 00:10:05.211 [2024-07-21 11:52:03.934182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.211 [2024-07-21 11:52:03.996295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.775 11:52:04 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:05.775 00:10:05.775 11:52:04 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:10:05.775 11:52:04 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:10:05.775 INFO: shutting down applications... 00:10:05.775 11:52:04 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:10:05.775 11:52:04 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:10:05.775 11:52:04 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:10:05.775 11:52:04 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:10:05.775 11:52:04 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 124483 ]] 00:10:05.775 11:52:04 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 124483 00:10:05.775 11:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:10:05.775 11:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:05.775 11:52:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124483 00:10:05.775 11:52:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:10:06.340 11:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:10:06.340 11:52:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:10:06.340 11:52:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124483 00:10:06.340 11:52:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:10:06.340 11:52:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:10:06.340 11:52:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:10:06.340 SPDK target shutdown done 00:10:06.340 11:52:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:10:06.340 Success 00:10:06.340 11:52:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:10:06.340 00:10:06.340 real 0m1.623s 00:10:06.340 user 0m1.604s 00:10:06.340 sys 0m0.425s 00:10:06.340 11:52:04 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:06.340 ************************************ 00:10:06.340 END TEST json_config_extra_key 00:10:06.340 ************************************ 00:10:06.340 11:52:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
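Note: the json_config_extra_key run above boots spdk_tgt directly from a JSON config file instead of configuring it over RPC after startup. Below is a minimal sketch of that launch/verify/stop cycle, built only from flags and paths visible in this log; the readiness poll via the spdk_get_version RPC is an illustrative simplification of the test's waitforlisten helper, not its exact implementation.

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!
    # poll the RPC socket until the target answers, then stop it the way the test does
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5
    done
    kill -SIGINT "$tgt_pid"
    wait "$tgt_pid"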
00:10:06.340 11:52:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:06.340 11:52:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:06.340 11:52:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:06.340 11:52:05 -- common/autotest_common.sh@10 -- # set +x 00:10:06.340 ************************************ 00:10:06.340 START TEST alias_rpc 00:10:06.340 ************************************ 00:10:06.340 11:52:05 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:10:06.340 * Looking for test storage... 00:10:06.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:10:06.340 11:52:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:06.340 11:52:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=124558 00:10:06.340 11:52:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 124558 00:10:06.340 11:52:05 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 124558 ']' 00:10:06.340 11:52:05 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.340 11:52:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:06.340 11:52:05 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:06.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.340 11:52:05 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.340 11:52:05 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:06.340 11:52:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.340 [2024-07-21 11:52:05.196709] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:06.340 [2024-07-21 11:52:05.196988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124558 ] 00:10:06.598 [2024-07-21 11:52:05.359268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.598 [2024-07-21 11:52:05.450231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.533 11:52:06 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:07.533 11:52:06 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:07.533 11:52:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:10:07.790 11:52:06 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 124558 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 124558 ']' 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 124558 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124558 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:07.790 killing process with pid 124558 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124558' 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@965 -- # kill 124558 00:10:07.790 11:52:06 alias_rpc -- common/autotest_common.sh@970 -- # wait 124558 00:10:08.049 00:10:08.049 real 0m1.856s 00:10:08.049 user 0m2.033s 00:10:08.049 sys 0m0.497s 00:10:08.049 11:52:06 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:08.049 ************************************ 00:10:08.049 END TEST alias_rpc 00:10:08.049 ************************************ 00:10:08.049 11:52:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:08.307 11:52:06 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:10:08.307 11:52:06 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:08.307 11:52:06 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:08.307 11:52:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:08.307 11:52:06 -- common/autotest_common.sh@10 -- # set +x 00:10:08.307 ************************************ 00:10:08.307 START TEST spdkcli_tcp 00:10:08.307 ************************************ 00:10:08.307 11:52:06 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:10:08.307 * Looking for test storage... 
00:10:08.307 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:10:08.307 11:52:07 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:08.307 11:52:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=124645 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:10:08.307 11:52:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 124645 00:10:08.307 11:52:07 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 124645 ']' 00:10:08.307 11:52:07 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.307 11:52:07 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:08.307 11:52:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.307 11:52:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:08.307 11:52:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:08.307 [2024-07-21 11:52:07.102903] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:08.307 [2024-07-21 11:52:07.103574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124645 ] 00:10:08.566 [2024-07-21 11:52:07.278168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:08.566 [2024-07-21 11:52:07.365707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.566 [2024-07-21 11:52:07.365710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.547 11:52:08 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:09.547 11:52:08 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:10:09.547 11:52:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=124667 00:10:09.547 11:52:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:10:09.547 11:52:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:10:09.547 [ 00:10:09.547 "spdk_get_version", 00:10:09.547 "rpc_get_methods", 00:10:09.547 "keyring_get_keys", 00:10:09.547 "trace_get_info", 00:10:09.547 "trace_get_tpoint_group_mask", 00:10:09.547 "trace_disable_tpoint_group", 00:10:09.547 "trace_enable_tpoint_group", 00:10:09.547 "trace_clear_tpoint_mask", 00:10:09.547 "trace_set_tpoint_mask", 00:10:09.547 "framework_get_pci_devices", 00:10:09.547 "framework_get_config", 00:10:09.547 "framework_get_subsystems", 00:10:09.547 "iobuf_get_stats", 00:10:09.547 "iobuf_set_options", 00:10:09.547 "sock_get_default_impl", 00:10:09.547 "sock_set_default_impl", 00:10:09.547 "sock_impl_set_options", 00:10:09.547 "sock_impl_get_options", 00:10:09.547 "vmd_rescan", 00:10:09.547 "vmd_remove_device", 00:10:09.547 "vmd_enable", 00:10:09.547 "accel_get_stats", 00:10:09.547 "accel_set_options", 00:10:09.547 "accel_set_driver", 00:10:09.547 "accel_crypto_key_destroy", 00:10:09.547 "accel_crypto_keys_get", 00:10:09.547 "accel_crypto_key_create", 00:10:09.547 "accel_assign_opc", 00:10:09.547 "accel_get_module_info", 00:10:09.547 "accel_get_opc_assignments", 00:10:09.547 "notify_get_notifications", 00:10:09.547 "notify_get_types", 00:10:09.547 "bdev_get_histogram", 00:10:09.547 "bdev_enable_histogram", 00:10:09.547 "bdev_set_qos_limit", 00:10:09.547 "bdev_set_qd_sampling_period", 00:10:09.547 "bdev_get_bdevs", 00:10:09.547 "bdev_reset_iostat", 00:10:09.547 "bdev_get_iostat", 00:10:09.547 "bdev_examine", 00:10:09.547 "bdev_wait_for_examine", 00:10:09.547 "bdev_set_options", 00:10:09.547 "scsi_get_devices", 00:10:09.547 "thread_set_cpumask", 00:10:09.547 "framework_get_scheduler", 00:10:09.547 "framework_set_scheduler", 00:10:09.547 "framework_get_reactors", 00:10:09.547 "thread_get_io_channels", 00:10:09.547 "thread_get_pollers", 00:10:09.547 "thread_get_stats", 00:10:09.547 "framework_monitor_context_switch", 00:10:09.547 "spdk_kill_instance", 00:10:09.547 "log_enable_timestamps", 00:10:09.547 "log_get_flags", 00:10:09.547 "log_clear_flag", 00:10:09.547 "log_set_flag", 00:10:09.547 "log_get_level", 00:10:09.547 "log_set_level", 00:10:09.547 "log_get_print_level", 00:10:09.547 "log_set_print_level", 00:10:09.547 "framework_enable_cpumask_locks", 00:10:09.547 "framework_disable_cpumask_locks", 00:10:09.547 "framework_wait_init", 00:10:09.547 "framework_start_init", 00:10:09.547 "virtio_blk_create_transport", 00:10:09.547 "virtio_blk_get_transports", 00:10:09.547 
"vhost_controller_set_coalescing", 00:10:09.547 "vhost_get_controllers", 00:10:09.547 "vhost_delete_controller", 00:10:09.547 "vhost_create_blk_controller", 00:10:09.547 "vhost_scsi_controller_remove_target", 00:10:09.547 "vhost_scsi_controller_add_target", 00:10:09.547 "vhost_start_scsi_controller", 00:10:09.547 "vhost_create_scsi_controller", 00:10:09.547 "nbd_get_disks", 00:10:09.547 "nbd_stop_disk", 00:10:09.547 "nbd_start_disk", 00:10:09.547 "env_dpdk_get_mem_stats", 00:10:09.547 "nvmf_stop_mdns_prr", 00:10:09.547 "nvmf_publish_mdns_prr", 00:10:09.547 "nvmf_subsystem_get_listeners", 00:10:09.547 "nvmf_subsystem_get_qpairs", 00:10:09.547 "nvmf_subsystem_get_controllers", 00:10:09.547 "nvmf_get_stats", 00:10:09.547 "nvmf_get_transports", 00:10:09.547 "nvmf_create_transport", 00:10:09.547 "nvmf_get_targets", 00:10:09.547 "nvmf_delete_target", 00:10:09.547 "nvmf_create_target", 00:10:09.547 "nvmf_subsystem_allow_any_host", 00:10:09.547 "nvmf_subsystem_remove_host", 00:10:09.547 "nvmf_subsystem_add_host", 00:10:09.547 "nvmf_ns_remove_host", 00:10:09.547 "nvmf_ns_add_host", 00:10:09.547 "nvmf_subsystem_remove_ns", 00:10:09.547 "nvmf_subsystem_add_ns", 00:10:09.547 "nvmf_subsystem_listener_set_ana_state", 00:10:09.547 "nvmf_discovery_get_referrals", 00:10:09.547 "nvmf_discovery_remove_referral", 00:10:09.547 "nvmf_discovery_add_referral", 00:10:09.547 "nvmf_subsystem_remove_listener", 00:10:09.547 "nvmf_subsystem_add_listener", 00:10:09.547 "nvmf_delete_subsystem", 00:10:09.547 "nvmf_create_subsystem", 00:10:09.547 "nvmf_get_subsystems", 00:10:09.547 "nvmf_set_crdt", 00:10:09.547 "nvmf_set_config", 00:10:09.547 "nvmf_set_max_subsystems", 00:10:09.547 "iscsi_get_histogram", 00:10:09.547 "iscsi_enable_histogram", 00:10:09.547 "iscsi_set_options", 00:10:09.547 "iscsi_get_auth_groups", 00:10:09.547 "iscsi_auth_group_remove_secret", 00:10:09.547 "iscsi_auth_group_add_secret", 00:10:09.547 "iscsi_delete_auth_group", 00:10:09.547 "iscsi_create_auth_group", 00:10:09.547 "iscsi_set_discovery_auth", 00:10:09.547 "iscsi_get_options", 00:10:09.547 "iscsi_target_node_request_logout", 00:10:09.547 "iscsi_target_node_set_redirect", 00:10:09.547 "iscsi_target_node_set_auth", 00:10:09.547 "iscsi_target_node_add_lun", 00:10:09.547 "iscsi_get_stats", 00:10:09.547 "iscsi_get_connections", 00:10:09.547 "iscsi_portal_group_set_auth", 00:10:09.547 "iscsi_start_portal_group", 00:10:09.547 "iscsi_delete_portal_group", 00:10:09.547 "iscsi_create_portal_group", 00:10:09.547 "iscsi_get_portal_groups", 00:10:09.547 "iscsi_delete_target_node", 00:10:09.547 "iscsi_target_node_remove_pg_ig_maps", 00:10:09.547 "iscsi_target_node_add_pg_ig_maps", 00:10:09.547 "iscsi_create_target_node", 00:10:09.547 "iscsi_get_target_nodes", 00:10:09.547 "iscsi_delete_initiator_group", 00:10:09.547 "iscsi_initiator_group_remove_initiators", 00:10:09.547 "iscsi_initiator_group_add_initiators", 00:10:09.547 "iscsi_create_initiator_group", 00:10:09.548 "iscsi_get_initiator_groups", 00:10:09.548 "keyring_linux_set_options", 00:10:09.548 "keyring_file_remove_key", 00:10:09.548 "keyring_file_add_key", 00:10:09.548 "iaa_scan_accel_module", 00:10:09.548 "dsa_scan_accel_module", 00:10:09.548 "ioat_scan_accel_module", 00:10:09.548 "accel_error_inject_error", 00:10:09.548 "bdev_iscsi_delete", 00:10:09.548 "bdev_iscsi_create", 00:10:09.548 "bdev_iscsi_set_options", 00:10:09.548 "bdev_virtio_attach_controller", 00:10:09.548 "bdev_virtio_scsi_get_devices", 00:10:09.548 "bdev_virtio_detach_controller", 00:10:09.548 "bdev_virtio_blk_set_hotplug", 
00:10:09.548 "bdev_ftl_set_property", 00:10:09.548 "bdev_ftl_get_properties", 00:10:09.548 "bdev_ftl_get_stats", 00:10:09.548 "bdev_ftl_unmap", 00:10:09.548 "bdev_ftl_unload", 00:10:09.548 "bdev_ftl_delete", 00:10:09.548 "bdev_ftl_load", 00:10:09.548 "bdev_ftl_create", 00:10:09.548 "bdev_aio_delete", 00:10:09.548 "bdev_aio_rescan", 00:10:09.548 "bdev_aio_create", 00:10:09.548 "blobfs_create", 00:10:09.548 "blobfs_detect", 00:10:09.548 "blobfs_set_cache_size", 00:10:09.548 "bdev_zone_block_delete", 00:10:09.548 "bdev_zone_block_create", 00:10:09.548 "bdev_delay_delete", 00:10:09.548 "bdev_delay_create", 00:10:09.548 "bdev_delay_update_latency", 00:10:09.548 "bdev_split_delete", 00:10:09.548 "bdev_split_create", 00:10:09.548 "bdev_error_inject_error", 00:10:09.548 "bdev_error_delete", 00:10:09.548 "bdev_error_create", 00:10:09.548 "bdev_raid_set_options", 00:10:09.548 "bdev_raid_remove_base_bdev", 00:10:09.548 "bdev_raid_add_base_bdev", 00:10:09.548 "bdev_raid_delete", 00:10:09.548 "bdev_raid_create", 00:10:09.548 "bdev_raid_get_bdevs", 00:10:09.548 "bdev_lvol_set_parent_bdev", 00:10:09.548 "bdev_lvol_set_parent", 00:10:09.548 "bdev_lvol_check_shallow_copy", 00:10:09.548 "bdev_lvol_start_shallow_copy", 00:10:09.548 "bdev_lvol_grow_lvstore", 00:10:09.548 "bdev_lvol_get_lvols", 00:10:09.548 "bdev_lvol_get_lvstores", 00:10:09.548 "bdev_lvol_delete", 00:10:09.548 "bdev_lvol_set_read_only", 00:10:09.548 "bdev_lvol_resize", 00:10:09.548 "bdev_lvol_decouple_parent", 00:10:09.548 "bdev_lvol_inflate", 00:10:09.548 "bdev_lvol_rename", 00:10:09.548 "bdev_lvol_clone_bdev", 00:10:09.548 "bdev_lvol_clone", 00:10:09.548 "bdev_lvol_snapshot", 00:10:09.548 "bdev_lvol_create", 00:10:09.548 "bdev_lvol_delete_lvstore", 00:10:09.548 "bdev_lvol_rename_lvstore", 00:10:09.548 "bdev_lvol_create_lvstore", 00:10:09.548 "bdev_passthru_delete", 00:10:09.548 "bdev_passthru_create", 00:10:09.548 "bdev_nvme_cuse_unregister", 00:10:09.548 "bdev_nvme_cuse_register", 00:10:09.548 "bdev_opal_new_user", 00:10:09.548 "bdev_opal_set_lock_state", 00:10:09.548 "bdev_opal_delete", 00:10:09.548 "bdev_opal_get_info", 00:10:09.548 "bdev_opal_create", 00:10:09.548 "bdev_nvme_opal_revert", 00:10:09.548 "bdev_nvme_opal_init", 00:10:09.548 "bdev_nvme_send_cmd", 00:10:09.548 "bdev_nvme_get_path_iostat", 00:10:09.548 "bdev_nvme_get_mdns_discovery_info", 00:10:09.548 "bdev_nvme_stop_mdns_discovery", 00:10:09.548 "bdev_nvme_start_mdns_discovery", 00:10:09.548 "bdev_nvme_set_multipath_policy", 00:10:09.548 "bdev_nvme_set_preferred_path", 00:10:09.548 "bdev_nvme_get_io_paths", 00:10:09.548 "bdev_nvme_remove_error_injection", 00:10:09.548 "bdev_nvme_add_error_injection", 00:10:09.548 "bdev_nvme_get_discovery_info", 00:10:09.548 "bdev_nvme_stop_discovery", 00:10:09.548 "bdev_nvme_start_discovery", 00:10:09.548 "bdev_nvme_get_controller_health_info", 00:10:09.548 "bdev_nvme_disable_controller", 00:10:09.548 "bdev_nvme_enable_controller", 00:10:09.548 "bdev_nvme_reset_controller", 00:10:09.548 "bdev_nvme_get_transport_statistics", 00:10:09.548 "bdev_nvme_apply_firmware", 00:10:09.548 "bdev_nvme_detach_controller", 00:10:09.548 "bdev_nvme_get_controllers", 00:10:09.548 "bdev_nvme_attach_controller", 00:10:09.548 "bdev_nvme_set_hotplug", 00:10:09.548 "bdev_nvme_set_options", 00:10:09.548 "bdev_null_resize", 00:10:09.548 "bdev_null_delete", 00:10:09.548 "bdev_null_create", 00:10:09.548 "bdev_malloc_delete", 00:10:09.548 "bdev_malloc_create" 00:10:09.548 ] 00:10:09.548 11:52:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 
00:10:09.548 11:52:08 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:09.548 11:52:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.818 11:52:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:09.818 11:52:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 124645 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 124645 ']' 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 124645 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124645 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:09.818 killing process with pid 124645 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124645' 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 124645 00:10:09.818 11:52:08 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 124645 00:10:10.075 00:10:10.075 real 0m1.941s 00:10:10.075 user 0m3.617s 00:10:10.075 sys 0m0.491s 00:10:10.075 11:52:08 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:10.075 11:52:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:10.075 ************************************ 00:10:10.075 END TEST spdkcli_tcp 00:10:10.075 ************************************ 00:10:10.075 11:52:08 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:10.075 11:52:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:10.075 11:52:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:10.075 11:52:08 -- common/autotest_common.sh@10 -- # set +x 00:10:10.075 ************************************ 00:10:10.075 START TEST dpdk_mem_utility 00:10:10.075 ************************************ 00:10:10.332 11:52:08 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:10:10.332 * Looking for test storage... 00:10:10.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:10:10.332 11:52:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:10.332 11:52:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=124747 00:10:10.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.332 11:52:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 124747 00:10:10.332 11:52:09 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 124747 ']' 00:10:10.332 11:52:09 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.332 11:52:09 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:10.332 11:52:09 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
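Note: once the target below is up, the test asks it for env_dpdk_get_mem_stats, which dumps to /tmp/spdk_mem_dump.txt, and then parses that dump with dpdk_mem_info.py; the heap, mempool, and memzone tables that follow are the parsed result. A hedged sketch of that flow, using only commands and paths that appear in this log (the test drives the RPC through its rpc_cmd helper rather than calling rpc.py directly):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # summary of heaps, mempools, memzones
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # the detailed per-element listing shown below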
00:10:10.332 11:52:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:10.332 11:52:09 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:10.332 11:52:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:10.332 [2024-07-21 11:52:09.100394] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:10.332 [2024-07-21 11:52:09.100673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124747 ] 00:10:10.589 [2024-07-21 11:52:09.271842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.589 [2024-07-21 11:52:09.361439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.523 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:11.523 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:10:11.523 11:52:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:10:11.523 11:52:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:10:11.523 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.523 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:11.523 { 00:10:11.523 "filename": "/tmp/spdk_mem_dump.txt" 00:10:11.523 } 00:10:11.523 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.523 11:52:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:10:11.523 DPDK memory size 814.000000 MiB in 1 heap(s) 00:10:11.523 1 heaps totaling size 814.000000 MiB 00:10:11.523 size: 814.000000 MiB heap id: 0 00:10:11.523 end heaps---------- 00:10:11.523 8 mempools totaling size 598.116089 MiB 00:10:11.523 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:10:11.523 size: 158.602051 MiB name: PDU_data_out_Pool 00:10:11.523 size: 84.521057 MiB name: bdev_io_124747 00:10:11.523 size: 51.011292 MiB name: evtpool_124747 00:10:11.523 size: 50.003479 MiB name: msgpool_124747 00:10:11.523 size: 21.763794 MiB name: PDU_Pool 00:10:11.523 size: 19.513306 MiB name: SCSI_TASK_Pool 00:10:11.523 size: 0.026123 MiB name: Session_Pool 00:10:11.523 end mempools------- 00:10:11.523 6 memzones totaling size 4.142822 MiB 00:10:11.523 size: 1.000366 MiB name: RG_ring_0_124747 00:10:11.523 size: 1.000366 MiB name: RG_ring_1_124747 00:10:11.523 size: 1.000366 MiB name: RG_ring_4_124747 00:10:11.523 size: 1.000366 MiB name: RG_ring_5_124747 00:10:11.523 size: 0.125366 MiB name: RG_ring_2_124747 00:10:11.523 size: 0.015991 MiB name: RG_ring_3_124747 00:10:11.523 end memzones------- 00:10:11.523 11:52:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:10:11.523 heap id: 0 total size: 814.000000 MiB number of busy elements: 219 number of free elements: 15 00:10:11.523 list of free elements. 
size: 12.486755 MiB 00:10:11.523 element at address: 0x200000400000 with size: 1.999512 MiB 00:10:11.523 element at address: 0x200018e00000 with size: 0.999878 MiB 00:10:11.523 element at address: 0x200019000000 with size: 0.999878 MiB 00:10:11.523 element at address: 0x200003e00000 with size: 0.996277 MiB 00:10:11.523 element at address: 0x200031c00000 with size: 0.994446 MiB 00:10:11.523 element at address: 0x200013800000 with size: 0.978699 MiB 00:10:11.523 element at address: 0x200007000000 with size: 0.959839 MiB 00:10:11.523 element at address: 0x200019200000 with size: 0.936584 MiB 00:10:11.523 element at address: 0x200000200000 with size: 0.836853 MiB 00:10:11.523 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:10:11.523 element at address: 0x20000b200000 with size: 0.489624 MiB 00:10:11.523 element at address: 0x200000800000 with size: 0.487061 MiB 00:10:11.523 element at address: 0x200019400000 with size: 0.485657 MiB 00:10:11.523 element at address: 0x200027e00000 with size: 0.402893 MiB 00:10:11.523 element at address: 0x200003a00000 with size: 0.350952 MiB 00:10:11.523 list of standard malloc elements. size: 199.250671 MiB 00:10:11.523 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:10:11.523 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:10:11.523 element at address: 0x200018efff80 with size: 1.000122 MiB 00:10:11.523 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:10:11.523 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:10:11.523 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:10:11.523 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:10:11.523 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:10:11.523 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:10:11.523 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:10:11.523 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:10:11.523 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:10:11.523 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:10:11.523 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:10:11.523 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:10:11.523 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003adb300 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003adb500 with size: 0.000183 MiB 00:10:11.523 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200003affa80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200003affb40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:10:11.524 element at 
address: 0x20000b27d580 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa91900 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93400 
with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:10:11.524 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e67240 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e67300 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6df00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e280 with size: 0.000183 MiB 
00:10:11.524 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:10:11.524 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:10:11.524 list of memzone associated elements. 
size: 602.262573 MiB 00:10:11.524 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:10:11.524 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:10:11.524 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:10:11.524 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:10:11.524 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:10:11.524 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_124747_0 00:10:11.524 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:10:11.524 associated memzone info: size: 48.002930 MiB name: MP_evtpool_124747_0 00:10:11.524 element at address: 0x200003fff380 with size: 48.003052 MiB 00:10:11.524 associated memzone info: size: 48.002930 MiB name: MP_msgpool_124747_0 00:10:11.524 element at address: 0x2000195be940 with size: 20.255554 MiB 00:10:11.524 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:10:11.524 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:10:11.524 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:10:11.524 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:10:11.524 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_124747 00:10:11.524 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:10:11.524 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_124747 00:10:11.524 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:10:11.524 associated memzone info: size: 1.007996 MiB name: MP_evtpool_124747 00:10:11.524 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:10:11.524 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:10:11.525 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:10:11.525 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:10:11.525 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:10:11.525 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:10:11.525 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:10:11.525 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:10:11.525 element at address: 0x200003eff180 with size: 1.000488 MiB 00:10:11.525 associated memzone info: size: 1.000366 MiB name: RG_ring_0_124747 00:10:11.525 element at address: 0x200003affc00 with size: 1.000488 MiB 00:10:11.525 associated memzone info: size: 1.000366 MiB name: RG_ring_1_124747 00:10:11.525 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:10:11.525 associated memzone info: size: 1.000366 MiB name: RG_ring_4_124747 00:10:11.525 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:10:11.525 associated memzone info: size: 1.000366 MiB name: RG_ring_5_124747 00:10:11.525 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:10:11.525 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_124747 00:10:11.525 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:10:11.525 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:10:11.525 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:10:11.525 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:10:11.525 element at address: 0x20001947c540 with size: 0.250488 MiB 00:10:11.525 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:10:11.525 element at address: 0x200003adf880 with size: 0.125488 MiB 00:10:11.525 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_124747 00:10:11.525 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:10:11.525 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:10:11.525 element at address: 0x200027e673c0 with size: 0.023743 MiB 00:10:11.525 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:10:11.525 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:10:11.525 associated memzone info: size: 0.015991 MiB name: RG_ring_3_124747 00:10:11.525 element at address: 0x200027e6d500 with size: 0.002441 MiB 00:10:11.525 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:10:11.525 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:10:11.525 associated memzone info: size: 0.000183 MiB name: MP_msgpool_124747 00:10:11.525 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:10:11.525 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_124747 00:10:11.525 element at address: 0x200027e6dfc0 with size: 0.000305 MiB 00:10:11.525 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:10:11.525 11:52:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:10:11.525 11:52:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 124747 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 124747 ']' 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 124747 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124747 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:11.525 killing process with pid 124747 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124747' 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 124747 00:10:11.525 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 124747 00:10:12.090 00:10:12.090 real 0m1.757s 00:10:12.090 user 0m1.831s 00:10:12.090 sys 0m0.500s 00:10:12.090 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:12.090 11:52:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:10:12.090 ************************************ 00:10:12.090 END TEST dpdk_mem_utility 00:10:12.090 ************************************ 00:10:12.090 11:52:10 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:12.090 11:52:10 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:12.090 11:52:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.090 11:52:10 -- common/autotest_common.sh@10 -- # set +x 00:10:12.090 ************************************ 00:10:12.090 START TEST event 00:10:12.090 ************************************ 00:10:12.090 11:52:10 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:10:12.090 * Looking for test storage... 
00:10:12.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:12.090 11:52:10 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:12.090 11:52:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:10:12.090 11:52:10 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:12.090 11:52:10 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:12.090 11:52:10 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:12.090 11:52:10 event -- common/autotest_common.sh@10 -- # set +x 00:10:12.090 ************************************ 00:10:12.090 START TEST event_perf 00:10:12.090 ************************************ 00:10:12.090 11:52:10 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:10:12.090 Running I/O for 1 seconds...[2024-07-21 11:52:10.866368] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:12.090 [2024-07-21 11:52:10.867320] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124829 ] 00:10:12.347 [2024-07-21 11:52:11.053153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.347 [2024-07-21 11:52:11.141979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.347 [2024-07-21 11:52:11.142126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.347 [2024-07-21 11:52:11.142273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.347 [2024-07-21 11:52:11.142277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.718 Running I/O for 1 seconds... 00:10:13.718 lcore 0: 128423 00:10:13.718 lcore 1: 128423 00:10:13.718 lcore 2: 128425 00:10:13.718 lcore 3: 128422 00:10:13.718 done. 00:10:13.718 00:10:13.718 real 0m1.419s 00:10:13.718 user 0m4.194s 00:10:13.718 sys 0m0.120s 00:10:13.718 11:52:12 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:13.718 11:52:12 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:10:13.718 ************************************ 00:10:13.718 END TEST event_perf 00:10:13.718 ************************************ 00:10:13.718 11:52:12 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:13.718 11:52:12 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:13.718 11:52:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:13.718 11:52:12 event -- common/autotest_common.sh@10 -- # set +x 00:10:13.718 ************************************ 00:10:13.718 START TEST event_reactor 00:10:13.718 ************************************ 00:10:13.718 11:52:12 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:10:13.718 [2024-07-21 11:52:12.333332] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
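Note: the four reactor notices above correspond to the 0xF core mask passed to event_perf, and the per-lcore counts printed next are the events each core processed during the one-second run (-t 1). A rough, hypothetical way to total those counts from a captured copy of the tool's output; the event_perf.log filename is an assumption for illustration, not something this run produces.

    # sums the trailing count of every "lcore N: COUNT" line into one events-per-second figure
    awk '/lcore [0-9]+:/ {sum += $NF} END {print sum, "events/sec total"}' event_perf.log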
00:10:13.718 [2024-07-21 11:52:12.333574] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124880 ] 00:10:13.718 [2024-07-21 11:52:12.498158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.718 [2024-07-21 11:52:12.550843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.093 test_start 00:10:15.093 oneshot 00:10:15.093 tick 100 00:10:15.093 tick 100 00:10:15.093 tick 250 00:10:15.093 tick 100 00:10:15.093 tick 100 00:10:15.093 tick 100 00:10:15.093 tick 250 00:10:15.093 tick 500 00:10:15.093 tick 100 00:10:15.093 tick 100 00:10:15.093 tick 250 00:10:15.093 tick 100 00:10:15.093 tick 100 00:10:15.093 test_end 00:10:15.093 00:10:15.093 real 0m1.354s 00:10:15.093 user 0m1.154s 00:10:15.093 sys 0m0.100s 00:10:15.093 11:52:13 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:15.093 ************************************ 00:10:15.093 END TEST event_reactor 00:10:15.093 ************************************ 00:10:15.093 11:52:13 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:10:15.093 11:52:13 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:15.093 11:52:13 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:15.093 11:52:13 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:15.093 11:52:13 event -- common/autotest_common.sh@10 -- # set +x 00:10:15.093 ************************************ 00:10:15.093 START TEST event_reactor_perf 00:10:15.093 ************************************ 00:10:15.093 11:52:13 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:10:15.093 [2024-07-21 11:52:13.738426] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:15.093 [2024-07-21 11:52:13.738705] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124918 ] 00:10:15.093 [2024-07-21 11:52:13.893601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.351 [2024-07-21 11:52:13.977966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.286 test_start 00:10:16.286 test_end 00:10:16.286 Performance: 339958 events per second 00:10:16.286 00:10:16.286 real 0m1.373s 00:10:16.286 user 0m1.173s 00:10:16.286 sys 0m0.100s 00:10:16.286 11:52:15 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:16.286 ************************************ 00:10:16.286 END TEST event_reactor_perf 00:10:16.286 ************************************ 00:10:16.286 11:52:15 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:10:16.286 11:52:15 event -- event/event.sh@49 -- # uname -s 00:10:16.286 11:52:15 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:10:16.286 11:52:15 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:16.286 11:52:15 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:16.286 11:52:15 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:16.286 11:52:15 event -- common/autotest_common.sh@10 -- # set +x 00:10:16.286 ************************************ 00:10:16.286 START TEST event_scheduler 00:10:16.286 ************************************ 00:10:16.286 11:52:15 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:10:16.544 * Looking for test storage... 00:10:16.544 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:10:16.544 11:52:15 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:10:16.544 11:52:15 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=124991 00:10:16.544 11:52:15 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:10:16.544 11:52:15 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 124991 00:10:16.544 11:52:15 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 124991 ']' 00:10:16.544 11:52:15 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:10:16.544 11:52:15 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.544 11:52:15 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:16.544 11:52:15 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.544 11:52:15 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:16.544 11:52:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:16.544 [2024-07-21 11:52:15.293515] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:16.544 [2024-07-21 11:52:15.293762] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124991 ] 00:10:16.801 [2024-07-21 11:52:15.483884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:16.801 [2024-07-21 11:52:15.574396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.801 [2024-07-21 11:52:15.574553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.801 [2024-07-21 11:52:15.574679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.801 [2024-07-21 11:52:15.574681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.365 11:52:16 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:17.365 11:52:16 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:10:17.365 11:52:16 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:10:17.365 11:52:16 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.365 11:52:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:17.365 POWER: Env isn't set yet! 00:10:17.365 POWER: Attempting to initialise ACPI cpufreq power management... 00:10:17.365 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:17.365 POWER: Cannot set governor of lcore 0 to userspace 00:10:17.365 POWER: Attempting to initialise PSTAT power management... 00:10:17.365 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:17.365 POWER: Cannot set governor of lcore 0 to performance 00:10:17.365 POWER: Attempting to initialise AMD PSTATE power management... 00:10:17.365 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:17.365 POWER: Cannot set governor of lcore 0 to userspace 00:10:17.365 POWER: Attempting to initialise CPPC power management... 00:10:17.365 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:10:17.365 POWER: Cannot set governor of lcore 0 to userspace 00:10:17.365 POWER: Attempting to initialise VM power management... 
00:10:17.365 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:10:17.365 POWER: Unable to set Power Management Environment for lcore 0 00:10:17.365 [2024-07-21 11:52:16.213662] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:10:17.365 [2024-07-21 11:52:16.213732] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:10:17.365 [2024-07-21 11:52:16.213785] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:10:17.365 [2024-07-21 11:52:16.213874] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:10:17.365 [2024-07-21 11:52:16.213930] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:10:17.365 [2024-07-21 11:52:16.213990] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:10:17.365 11:52:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.365 11:52:16 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:10:17.365 11:52:16 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.365 11:52:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 [2024-07-21 11:52:16.311114] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:10:17.622 11:52:16 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:10:17.622 11:52:16 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:17.622 11:52:16 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 ************************************ 00:10:17.622 START TEST scheduler_create_thread 00:10:17.622 ************************************ 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 2 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 3 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:10:17.622 11:52:16 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 4 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 5 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 6 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 7 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 8 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 9 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 10 
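The scheduler test drives everything over JSON-RPC: the app is launched paused with --wait-for-rpc, the dynamic scheduler is selected, framework init is kicked off, and threads are then created through the scheduler_plugin extension. A condensed sketch of that flow as traced around this point, assuming autotest_common.sh is sourced so waitforlisten is available and the scheduler_plugin module is importable by rpc.py, as it is in the test environment; variable names and the captured thread id are illustrative:

  cd /home/vagrant/spdk_repo/spdk
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  scheduler_pid=$!
  waitforlisten "$scheduler_pid"                       # helper from test/common/autotest_common.sh; polls /var/tmp/spdk.sock
  ./scripts/rpc.py framework_set_scheduler dynamic     # must be chosen before framework init
  ./scripts/rpc.py framework_start_init
  # pinned threads take a cpumask (-m) and an active percentage (-a)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # unpinned threads only need a name and an activity level; the call prints the new thread id
  tid=$(./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete "$tid"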
00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:10:17.622 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.623 11:52:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:18.187 11:52:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.187 11:52:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:10:18.187 11:52:17 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:10:18.187 11:52:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.187 11:52:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:19.554 11:52:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.554 00:10:19.554 real 0m1.753s 00:10:19.554 user 0m0.014s 00:10:19.554 sys 0m0.001s 00:10:19.554 ************************************ 00:10:19.554 END TEST scheduler_create_thread 00:10:19.554 ************************************ 00:10:19.554 11:52:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.554 11:52:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:10:19.554 11:52:18 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:10:19.554 11:52:18 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 124991 00:10:19.554 11:52:18 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 124991 ']' 00:10:19.554 11:52:18 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 124991 00:10:19.554 11:52:18 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:10:19.554 11:52:18 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:19.554 11:52:18 
event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 124991 00:10:19.554 11:52:18 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:10:19.554 killing process with pid 124991 00:10:19.554 11:52:18 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:10:19.554 11:52:18 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 124991' 00:10:19.554 11:52:18 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 124991 00:10:19.554 11:52:18 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 124991 00:10:19.811 [2024-07-21 11:52:18.558237] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:10:20.069 ************************************ 00:10:20.069 END TEST event_scheduler 00:10:20.069 ************************************ 00:10:20.069 00:10:20.069 real 0m3.734s 00:10:20.069 user 0m6.422s 00:10:20.069 sys 0m0.395s 00:10:20.069 11:52:18 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:20.069 11:52:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:10:20.069 11:52:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:10:20.069 11:52:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:10:20.069 11:52:18 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:20.069 11:52:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:20.069 11:52:18 event -- common/autotest_common.sh@10 -- # set +x 00:10:20.069 ************************************ 00:10:20.069 START TEST app_repeat 00:10:20.069 ************************************ 00:10:20.069 11:52:18 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:10:20.069 11:52:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:20.069 11:52:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:20.069 11:52:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:10:20.069 11:52:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:20.069 11:52:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:10:20.069 11:52:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:10:20.069 11:52:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:10:20.327 11:52:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=125095 00:10:20.327 11:52:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:10:20.327 11:52:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:10:20.327 Process app_repeat pid: 125095 00:10:20.327 11:52:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 125095' 00:10:20.327 11:52:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:20.327 11:52:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:10:20.327 spdk_app_start Round 0 00:10:20.327 11:52:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 125095 /var/tmp/spdk-nbd.sock 00:10:20.327 11:52:18 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 125095 ']' 00:10:20.327 11:52:18 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:20.327 11:52:18 event.app_repeat -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:10:20.327 11:52:18 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:20.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:20.327 11:52:18 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:20.327 11:52:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:20.327 [2024-07-21 11:52:18.969619] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:20.327 [2024-07-21 11:52:18.969974] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125095 ] 00:10:20.327 [2024-07-21 11:52:19.130010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:20.585 [2024-07-21 11:52:19.209291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.585 [2024-07-21 11:52:19.209299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.585 11:52:19 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:20.585 11:52:19 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:10:20.585 11:52:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:20.863 Malloc0 00:10:20.863 11:52:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:21.160 Malloc1 00:10:21.160 11:52:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.160 11:52:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:21.418 /dev/nbd0 00:10:21.418 11:52:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:10:21.418 11:52:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:21.418 1+0 records in 00:10:21.418 1+0 records out 00:10:21.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430844 s, 9.5 MB/s 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:21.418 11:52:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:10:21.418 11:52:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:21.418 11:52:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.418 11:52:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:21.676 /dev/nbd1 00:10:21.676 11:52:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:21.676 11:52:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:21.676 1+0 records in 00:10:21.676 1+0 records out 00:10:21.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297975 s, 13.7 MB/s 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:21.676 11:52:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:10:21.676 11:52:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:21.676 11:52:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:21.676 11:52:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:21.676 11:52:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:21.676 11:52:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:21.934 11:52:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:21.934 { 00:10:21.934 "nbd_device": "/dev/nbd0", 00:10:21.934 "bdev_name": "Malloc0" 00:10:21.935 }, 00:10:21.935 { 00:10:21.935 "nbd_device": "/dev/nbd1", 00:10:21.935 "bdev_name": "Malloc1" 00:10:21.935 } 00:10:21.935 ]' 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:21.935 { 00:10:21.935 "nbd_device": "/dev/nbd0", 00:10:21.935 "bdev_name": "Malloc0" 00:10:21.935 }, 00:10:21.935 { 00:10:21.935 "nbd_device": "/dev/nbd1", 00:10:21.935 "bdev_name": "Malloc1" 00:10:21.935 } 00:10:21.935 ]' 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:21.935 /dev/nbd1' 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:21.935 /dev/nbd1' 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:21.935 11:52:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:22.193 256+0 records in 00:10:22.193 256+0 records out 00:10:22.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00723354 s, 145 MB/s 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:22.193 256+0 records in 00:10:22.193 256+0 records out 00:10:22.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265991 s, 39.4 MB/s 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:22.193 11:52:20 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:22.193 256+0 records in 00:10:22.193 256+0 records out 00:10:22.193 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0347263 s, 30.2 MB/s 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.193 11:52:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:22.469 11:52:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:22.469 11:52:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:22.469 11:52:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:22.469 11:52:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.469 11:52:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.469 11:52:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:22.469 11:52:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:22.469 11:52:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.470 11:52:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:22.470 11:52:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.727 11:52:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:22.985 11:52:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:22.985 11:52:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:22.985 11:52:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:23.243 11:52:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:23.243 11:52:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:23.243 11:52:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:23.243 11:52:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:23.243 11:52:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:23.243 11:52:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:23.243 11:52:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:23.243 11:52:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:23.243 11:52:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:23.243 11:52:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:23.500 11:52:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:23.758 [2024-07-21 11:52:22.423652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:23.758 [2024-07-21 11:52:22.496538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.758 [2024-07-21 11:52:22.496546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.758 [2024-07-21 11:52:22.551944] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:23.758 [2024-07-21 11:52:22.552435] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:27.037 spdk_app_start Round 1 00:10:27.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:27.037 11:52:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:27.037 11:52:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:10:27.037 11:52:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 125095 /var/tmp/spdk-nbd.sock 00:10:27.037 11:52:25 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 125095 ']' 00:10:27.037 11:52:25 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:27.037 11:52:25 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:27.037 11:52:25 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
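Each app_repeat round repeats the same cycle against the nbd socket: create two malloc bdevs, export them as nbd devices, push random data through and compare it back, then tear everything down and signal the app. A condensed sketch of one round using the rpc.py and socket paths from the trace; the temporary file location and variable names are illustrative (the harness keeps its scratch files under test/event/):

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # two 64 MiB malloc bdevs with 4 KiB blocks; each call prints the new bdev name (Malloc0, Malloc1)
  b0=$("$rpc" -s "$sock" bdev_malloc_create 64 4096)
  b1=$("$rpc" -s "$sock" bdev_malloc_create 64 4096)
  "$rpc" -s "$sock" nbd_start_disk "$b0" /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk "$b1" /dev/nbd1
  # write 1 MiB of random data through each nbd device and read it back for comparison
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$dev"
  done
  rm -f /tmp/nbdrandtest
  # teardown: unexport both devices, then ask the app to exit
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
  "$rpc" -s "$sock" spdk_kill_instance SIGTERM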
00:10:27.037 11:52:25 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:27.037 11:52:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:27.037 11:52:25 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:27.037 11:52:25 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:10:27.037 11:52:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:27.037 Malloc0 00:10:27.037 11:52:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:27.295 Malloc1 00:10:27.296 11:52:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:27.296 11:52:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:27.554 /dev/nbd0 00:10:27.554 11:52:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:27.554 11:52:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:27.554 1+0 records in 00:10:27.554 1+0 records out 
00:10:27.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112514 s, 3.6 MB/s 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:27.554 11:52:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:10:27.554 11:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.554 11:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:27.554 11:52:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:27.812 /dev/nbd1 00:10:27.812 11:52:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:27.812 11:52:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:27.812 1+0 records in 00:10:27.812 1+0 records out 00:10:27.812 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000789087 s, 5.2 MB/s 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:27.812 11:52:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:10:27.812 11:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.812 11:52:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:27.812 11:52:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:27.812 11:52:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.812 11:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:28.070 { 00:10:28.070 "nbd_device": "/dev/nbd0", 00:10:28.070 "bdev_name": "Malloc0" 00:10:28.070 }, 00:10:28.070 { 00:10:28.070 "nbd_device": "/dev/nbd1", 00:10:28.070 "bdev_name": "Malloc1" 00:10:28.070 } 00:10:28.070 
]' 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:28.070 { 00:10:28.070 "nbd_device": "/dev/nbd0", 00:10:28.070 "bdev_name": "Malloc0" 00:10:28.070 }, 00:10:28.070 { 00:10:28.070 "nbd_device": "/dev/nbd1", 00:10:28.070 "bdev_name": "Malloc1" 00:10:28.070 } 00:10:28.070 ]' 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:28.070 /dev/nbd1' 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:28.070 /dev/nbd1' 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:28.070 256+0 records in 00:10:28.070 256+0 records out 00:10:28.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044557 s, 235 MB/s 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:28.070 256+0 records in 00:10:28.070 256+0 records out 00:10:28.070 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023906 s, 43.9 MB/s 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:28.070 11:52:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:28.328 256+0 records in 00:10:28.328 256+0 records out 00:10:28.328 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276426 s, 37.9 MB/s 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.328 11:52:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:28.328 11:52:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:28.328 11:52:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:28.328 11:52:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:28.328 11:52:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.328 11:52:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.328 11:52:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:28.586 11:52:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:28.587 11:52:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:28.587 11:52:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:28.587 11:52:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:28.587 11:52:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:28.845 11:52:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:28.845 11:52:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:28.845 11:52:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:10:29.104 11:52:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:29.104 11:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:29.104 11:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:29.104 11:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:29.104 11:52:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:29.104 11:52:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:29.104 11:52:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:29.104 11:52:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:29.104 11:52:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:29.104 11:52:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:29.363 11:52:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:29.621 [2024-07-21 11:52:28.241755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:29.621 [2024-07-21 11:52:28.286789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.621 [2024-07-21 11:52:28.286797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.621 [2024-07-21 11:52:28.338357] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:29.621 [2024-07-21 11:52:28.338845] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:32.903 spdk_app_start Round 2 00:10:32.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:32.903 11:52:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:10:32.903 11:52:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:10:32.903 11:52:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 125095 /var/tmp/spdk-nbd.sock 00:10:32.903 11:52:31 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 125095 ']' 00:10:32.903 11:52:31 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:32.903 11:52:31 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:32.903 11:52:31 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
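The nbd_get_count checks interleaved above simply count the devices reported by nbd_get_disks; a small sketch of that check, assuming jq is available as in this run (the || true only absorbs grep's non-zero exit when nothing matches):

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # after both nbd_stop_disk calls the JSON list is empty, so the count should be 0
  count=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] && echo "no nbd devices still exported"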
00:10:32.903 11:52:31 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:32.903 11:52:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:32.903 11:52:31 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:32.903 11:52:31 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:10:32.903 11:52:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:32.903 Malloc0 00:10:32.903 11:52:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:10:33.160 Malloc1 00:10:33.160 11:52:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.160 11:52:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:10:33.417 /dev/nbd0 00:10:33.417 11:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:33.417 11:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:33.418 1+0 records in 00:10:33.418 1+0 records out 
00:10:33.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557854 s, 7.3 MB/s 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:33.418 11:52:32 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:10:33.418 11:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:33.418 11:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.418 11:52:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:10:33.675 /dev/nbd1 00:10:33.675 11:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:33.675 11:52:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:33.675 11:52:32 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:10:33.675 11:52:32 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:10:33.675 11:52:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:10:33.675 11:52:32 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:10:33.675 11:52:32 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:10:33.675 11:52:32 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:10:33.675 11:52:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:10:33.675 11:52:32 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:10:33.675 11:52:32 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:10:33.675 1+0 records in 00:10:33.676 1+0 records out 00:10:33.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473043 s, 8.7 MB/s 00:10:33.676 11:52:32 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:33.676 11:52:32 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:10:33.676 11:52:32 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:10:33.676 11:52:32 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:10:33.676 11:52:32 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:10:33.676 11:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:33.676 11:52:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:33.676 11:52:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:33.676 11:52:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.676 11:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:33.933 { 00:10:33.933 "nbd_device": "/dev/nbd0", 00:10:33.933 "bdev_name": "Malloc0" 00:10:33.933 }, 00:10:33.933 { 00:10:33.933 "nbd_device": "/dev/nbd1", 00:10:33.933 "bdev_name": "Malloc1" 00:10:33.933 } 
00:10:33.933 ]' 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:33.933 { 00:10:33.933 "nbd_device": "/dev/nbd0", 00:10:33.933 "bdev_name": "Malloc0" 00:10:33.933 }, 00:10:33.933 { 00:10:33.933 "nbd_device": "/dev/nbd1", 00:10:33.933 "bdev_name": "Malloc1" 00:10:33.933 } 00:10:33.933 ]' 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:33.933 /dev/nbd1' 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:33.933 /dev/nbd1' 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:10:33.933 256+0 records in 00:10:33.933 256+0 records out 00:10:33.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00827065 s, 127 MB/s 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:33.933 256+0 records in 00:10:33.933 256+0 records out 00:10:33.933 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289381 s, 36.2 MB/s 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:33.933 11:52:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:34.196 256+0 records in 00:10:34.196 256+0 records out 00:10:34.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260055 s, 40.3 MB/s 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:34.196 11:52:32 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.196 11:52:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:34.468 11:52:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:34.727 11:52:33 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:10:34.985 11:52:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:34.985 11:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:10:34.985 11:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:34.985 11:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:10:34.985 11:52:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:10:34.985 11:52:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:10:34.985 11:52:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:10:34.985 11:52:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:34.985 11:52:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:10:34.985 11:52:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:10:35.243 11:52:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:35.501 [2024-07-21 11:52:34.155067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:35.501 [2024-07-21 11:52:34.217556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.501 [2024-07-21 11:52:34.217556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.501 [2024-07-21 11:52:34.277202] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:35.501 [2024-07-21 11:52:34.277657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:38.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:38.780 11:52:36 event.app_repeat -- event/event.sh@38 -- # waitforlisten 125095 /var/tmp/spdk-nbd.sock 00:10:38.780 11:52:36 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 125095 ']' 00:10:38.780 11:52:36 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:38.780 11:52:36 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:38.780 11:52:36 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:10:38.780 11:52:36 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:38.780 11:52:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:10:38.780 11:52:37 event.app_repeat -- event/event.sh@39 -- # killprocess 125095 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 125095 ']' 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 125095 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125095 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125095' 00:10:38.780 killing process with pid 125095 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@965 -- # kill 125095 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@970 -- # wait 125095 00:10:38.780 spdk_app_start is called in Round 0. 00:10:38.780 Shutdown signal received, stop current app iteration 00:10:38.780 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:10:38.780 spdk_app_start is called in Round 1. 00:10:38.780 Shutdown signal received, stop current app iteration 00:10:38.780 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:10:38.780 spdk_app_start is called in Round 2. 00:10:38.780 Shutdown signal received, stop current app iteration 00:10:38.780 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:10:38.780 spdk_app_start is called in Round 3. 00:10:38.780 Shutdown signal received, stop current app iteration 00:10:38.780 ************************************ 00:10:38.780 END TEST app_repeat 00:10:38.780 ************************************ 00:10:38.780 11:52:37 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:38.780 11:52:37 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:38.780 00:10:38.780 real 0m18.568s 00:10:38.780 user 0m41.892s 00:10:38.780 sys 0m2.793s 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:38.780 11:52:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:38.780 11:52:37 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:38.780 11:52:37 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:38.780 11:52:37 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:38.781 11:52:37 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.781 11:52:37 event -- common/autotest_common.sh@10 -- # set +x 00:10:38.781 ************************************ 00:10:38.781 START TEST cpu_locks 00:10:38.781 ************************************ 00:10:38.781 11:52:37 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:38.781 * Looking for test storage... 
00:10:38.781 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:38.781 11:52:37 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:38.781 11:52:37 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:38.781 11:52:37 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:38.781 11:52:37 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:38.781 11:52:37 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:38.781 11:52:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.781 11:52:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:39.038 ************************************ 00:10:39.038 START TEST default_locks 00:10:39.038 ************************************ 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=125603 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 125603 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 125603 ']' 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:39.038 11:52:37 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:39.038 [2024-07-21 11:52:37.721827] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:39.038 [2024-07-21 11:52:37.722286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125603 ] 00:10:39.038 [2024-07-21 11:52:37.883048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.296 [2024-07-21 11:52:37.963422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.861 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:39.861 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:10:39.861 11:52:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 125603 00:10:39.861 11:52:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 125603 00:10:39.861 11:52:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 125603 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 125603 ']' 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 125603 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125603 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125603' 00:10:40.119 killing process with pid 125603 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 125603 00:10:40.119 11:52:38 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 125603 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 125603 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125603 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 125603 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 125603 ']' 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:40.684 11:52:39 
event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (125603) - No such process 00:10:40.684 ERROR: process (pid: 125603) is no longer running 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:40.684 00:10:40.684 real 0m1.723s 00:10:40.684 user 0m1.785s 00:10:40.684 sys 0m0.577s 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:40.684 11:52:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.684 ************************************ 00:10:40.684 END TEST default_locks 00:10:40.684 ************************************ 00:10:40.684 11:52:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:40.684 11:52:39 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:40.684 11:52:39 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:40.684 11:52:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:40.684 ************************************ 00:10:40.684 START TEST default_locks_via_rpc 00:10:40.684 ************************************ 00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=125658 00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 125658 00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 125658 ']' 00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:40.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:40.684 11:52:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.684 [2024-07-21 11:52:39.496357] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:40.684 [2024-07-21 11:52:39.496789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125658 ] 00:10:40.942 [2024-07-21 11:52:39.661776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.942 [2024-07-21 11:52:39.724529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 125658 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 125658 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 125658 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 125658 ']' 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 125658 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125658 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125658' 00:10:41.875 killing process with pid 125658 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 125658 00:10:41.875 11:52:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 125658 00:10:42.441 00:10:42.441 real 0m1.708s 00:10:42.441 user 0m1.785s 00:10:42.441 sys 0m0.558s 00:10:42.441 11:52:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:42.441 11:52:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:42.441 ************************************ 00:10:42.441 END TEST default_locks_via_rpc 00:10:42.441 ************************************ 00:10:42.441 11:52:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:42.441 11:52:41 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:42.441 11:52:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:42.441 11:52:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:42.441 ************************************ 00:10:42.441 START TEST non_locking_app_on_locked_coremask 00:10:42.441 ************************************ 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=125712 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 125712 /var/tmp/spdk.sock 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125712 ']' 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:42.441 11:52:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:42.441 [2024-07-21 11:52:41.261995] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:42.441 [2024-07-21 11:52:41.262226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125712 ] 00:10:42.704 [2024-07-21 11:52:41.427564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.704 [2024-07-21 11:52:41.514236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=125735 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 125735 /var/tmp/spdk2.sock 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125735 ']' 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:43.640 11:52:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:43.640 [2024-07-21 11:52:42.288878] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:43.640 [2024-07-21 11:52:42.289161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125735 ] 00:10:43.640 [2024-07-21 11:52:42.452146] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:43.640 [2024-07-21 11:52:42.452226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.898 [2024-07-21 11:52:42.631096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.463 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:44.463 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:10:44.463 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 125712 00:10:44.463 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125712 00:10:44.463 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 125712 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125712 ']' 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 125712 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125712 00:10:45.026 killing process with pid 125712 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125712' 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 125712 00:10:45.026 11:52:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 125712 00:10:45.957 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 125735 00:10:45.957 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125735 ']' 00:10:45.957 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 125735 00:10:45.957 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:10:45.957 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:45.957 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125735 00:10:45.958 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:45.958 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:45.958 killing process with pid 125735 00:10:45.958 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125735' 00:10:45.958 
11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 125735 00:10:45.958 11:52:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 125735 00:10:46.525 00:10:46.525 real 0m3.900s 00:10:46.525 user 0m4.292s 00:10:46.525 sys 0m1.170s 00:10:46.525 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:46.525 11:52:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:46.525 ************************************ 00:10:46.525 END TEST non_locking_app_on_locked_coremask 00:10:46.525 ************************************ 00:10:46.525 11:52:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:46.525 11:52:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:46.525 11:52:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:46.525 11:52:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:46.525 ************************************ 00:10:46.525 START TEST locking_app_on_unlocked_coremask 00:10:46.526 ************************************ 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=125809 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 125809 /var/tmp/spdk.sock 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125809 ']' 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:46.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:46.526 11:52:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:46.526 [2024-07-21 11:52:45.225332] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:46.526 [2024-07-21 11:52:45.225563] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125809 ] 00:10:46.783 [2024-07-21 11:52:45.391528] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:46.783 [2024-07-21 11:52:45.391635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.783 [2024-07-21 11:52:45.474931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=125830 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 125830 /var/tmp/spdk2.sock 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125830 ']' 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:47.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:47.348 11:52:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:47.606 [2024-07-21 11:52:46.276442] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:47.606 [2024-07-21 11:52:46.276700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125830 ] 00:10:47.606 [2024-07-21 11:52:46.447415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.869 [2024-07-21 11:52:46.606436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.473 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:48.473 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:10:48.473 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 125830 00:10:48.473 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125830 00:10:48.473 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:49.038 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 125809 00:10:49.038 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125809 ']' 00:10:49.038 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 125809 00:10:49.038 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:10:49.039 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:49.039 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125809 00:10:49.039 killing process with pid 125809 00:10:49.039 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:49.039 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:49.039 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125809' 00:10:49.039 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 125809 00:10:49.039 11:52:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 125809 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 125830 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125830 ']' 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 125830 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125830 00:10:49.972 killing process with pid 125830 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:49.972 11:52:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125830' 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 125830 00:10:49.972 11:52:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 125830 00:10:50.230 00:10:50.230 real 0m3.925s 00:10:50.230 user 0m4.244s 00:10:50.230 sys 0m1.245s 00:10:50.230 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:50.230 11:52:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.230 ************************************ 00:10:50.230 END TEST locking_app_on_unlocked_coremask 00:10:50.230 ************************************ 00:10:50.487 11:52:49 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:50.487 11:52:49 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:50.487 11:52:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:50.487 11:52:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:50.487 ************************************ 00:10:50.487 START TEST locking_app_on_locked_coremask 00:10:50.487 ************************************ 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=125899 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 125899 /var/tmp/spdk.sock 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125899 ']' 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:50.487 11:52:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:50.487 [2024-07-21 11:52:49.200116] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:50.487 [2024-07-21 11:52:49.200365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125899 ] 00:10:50.744 [2024-07-21 11:52:49.366512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.744 [2024-07-21 11:52:49.446411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=125920 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 125920 /var/tmp/spdk2.sock 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 125920 /var/tmp/spdk2.sock 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 125920 /var/tmp/spdk2.sock 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 125920 ']' 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:51.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:51.308 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:51.565 [2024-07-21 11:52:50.239577] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:51.565 [2024-07-21 11:52:50.239872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125920 ] 00:10:51.565 [2024-07-21 11:52:50.405566] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 125899 has claimed it. 00:10:51.565 [2024-07-21 11:52:50.405694] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:52.128 ERROR: process (pid: 125920) is no longer running 00:10:52.128 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (125920) - No such process 00:10:52.128 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:52.128 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:10:52.128 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:10:52.128 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:52.128 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:52.128 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:52.128 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 125899 00:10:52.128 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125899 00:10:52.128 11:52:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 125899 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 125899 ']' 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 125899 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125899 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:52.386 killing process with pid 125899 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125899' 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 125899 00:10:52.386 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 125899 00:10:52.952 00:10:52.952 real 0m2.634s 00:10:52.952 user 0m2.933s 00:10:52.952 sys 0m0.766s 00:10:52.952 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:52.952 11:52:51 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:10:52.952 ************************************ 00:10:52.952 END TEST locking_app_on_locked_coremask 00:10:52.952 ************************************ 00:10:52.952 11:52:51 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:52.952 11:52:51 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:52.952 11:52:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:52.952 11:52:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:52.952 ************************************ 00:10:52.952 START TEST locking_overlapped_coremask 00:10:52.952 ************************************ 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=125979 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 125979 /var/tmp/spdk.sock 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 125979 ']' 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:53.211 11:52:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:53.211 [2024-07-21 11:52:51.889363] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
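The locking_app_on_locked_coremask run above checks the claim with lslocks -p 125899 piped through grep -q spdk_cpu_lock: each claimed core is backed by an advisory lock on a /var/tmp/spdk_cpu_lock_* file, which is why the second target (pid 125920) exits with "Cannot create lock on core 0". A minimal sketch of that lock-on-a-file pattern, using a stand-in path rather than SPDK's real lock files:

    #!/usr/bin/env bash
    # Stand-in for /var/tmp/spdk_cpu_lock_000; not SPDK's actual locking code.
    lockfile=/tmp/demo_cpu_lock_000
    exec 9>"$lockfile"            # open (and create) the lock file on fd 9
    if ! flock -n 9; then         # non-blocking claim, like the failing start-up above
        echo "core 0 already claimed by another process" >&2
        exit 1
    fi
    echo "pid $$ holds the core-0 lock"
    sleep 30                      # keep holding it; a second copy of this script now fails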
00:10:53.211 [2024-07-21 11:52:51.889600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125979 ] 00:10:53.211 [2024-07-21 11:52:52.066804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:53.469 [2024-07-21 11:52:52.165909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.469 [2024-07-21 11:52:52.166071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.469 [2024-07-21 11:52:52.166080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=126002 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 126002 /var/tmp/spdk2.sock 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 126002 /var/tmp/spdk2.sock 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 126002 /var/tmp/spdk2.sock 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 126002 ']' 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:54.036 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:54.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:54.037 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:54.037 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:54.037 11:52:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:54.295 [2024-07-21 11:52:52.944753] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:10:54.295 [2024-07-21 11:52:52.944992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126002 ] 00:10:54.295 [2024-07-21 11:52:53.140164] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 125979 has claimed it. 00:10:54.295 [2024-07-21 11:52:53.140272] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:54.862 ERROR: process (pid: 126002) is no longer running 00:10:54.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (126002) - No such process 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 125979 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 125979 ']' 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 125979 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 125979 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 125979' 00:10:54.862 killing process with pid 125979 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 125979 00:10:54.862 11:52:53 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 125979 00:10:55.427 00:10:55.427 real 0m2.474s 00:10:55.427 user 0m6.513s 00:10:55.427 sys 0m0.706s 00:10:55.427 11:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:55.427 11:52:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:55.427 ************************************ 00:10:55.427 END TEST locking_overlapped_coremask 00:10:55.427 ************************************ 00:10:55.686 11:52:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:55.686 11:52:54 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:55.686 11:52:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:55.686 11:52:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:55.686 ************************************ 00:10:55.686 START TEST locking_overlapped_coremask_via_rpc 00:10:55.686 ************************************ 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=126047 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 126047 /var/tmp/spdk.sock 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 126047 ']' 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:55.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:55.686 11:52:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.686 [2024-07-21 11:52:54.420021] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:55.686 [2024-07-21 11:52:54.421091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126047 ] 00:10:55.944 [2024-07-21 11:52:54.599461] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:55.944 [2024-07-21 11:52:54.599533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:55.944 [2024-07-21 11:52:54.723255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.944 [2024-07-21 11:52:54.723413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.944 [2024-07-21 11:52:54.723421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=126070 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 126070 /var/tmp/spdk2.sock 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 126070 ']' 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:56.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:56.522 11:52:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:56.780 [2024-07-21 11:52:55.436953] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:56.780 [2024-07-21 11:52:55.437159] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126070 ] 00:10:56.780 [2024-07-21 11:52:55.614751] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
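Both spdk_tgt instances above come up with --disable-cpumask-locks, so their overlapping masks (0x7 and 0x1c share core 2) are accepted and locking is only re-applied later through the RPC. Replayed outside the harness, the setup is just the two command lines from the trace, assuming the same built tree and hugepage configuration this job prepared:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    # With locks disabled both targets start, even though core 2 appears in both masks.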
00:10:56.780 [2024-07-21 11:52:55.614829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:57.038 [2024-07-21 11:52:55.785115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.038 [2024-07-21 11:52:55.785215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:10:57.038 [2024-07-21 11:52:55.785218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.604 [2024-07-21 11:52:56.438820] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 126047 has claimed it. 
00:10:57.604 request: 00:10:57.604 { 00:10:57.604 "method": "framework_enable_cpumask_locks", 00:10:57.604 "req_id": 1 00:10:57.604 } 00:10:57.604 Got JSON-RPC error response 00:10:57.604 response: 00:10:57.604 { 00:10:57.604 "code": -32603, 00:10:57.604 "message": "Failed to claim CPU core: 2" 00:10:57.604 } 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 126047 /var/tmp/spdk.sock 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 126047 ']' 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:57.604 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.605 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:57.605 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.870 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:57.870 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:57.870 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 126070 /var/tmp/spdk2.sock 00:10:57.870 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 126070 ']' 00:10:57.870 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:57.870 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:57.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:57.870 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
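The -32603 "Failed to claim CPU core: 2" response above is the framework_enable_cpumask_locks RPC being rejected on the second target while the first one already holds core 2; the same call against /var/tmp/spdk.sock succeeded just before. In this harness rpc_cmd is a thin wrapper around scripts/rpc.py, so the equivalent manual call is roughly (path assumed to be the checkout used throughout this job):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # Rejected with -32603 while pid 126047 still owns core 2; against the default
    # socket /var/tmp/spdk.sock the same method returns success, as seen above.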
00:10:57.870 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:57.870 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:58.141 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:58.141 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:58.141 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:58.141 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:58.141 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:58.141 00:10:58.141 real 0m2.596s 00:10:58.141 user 0m1.355s 00:10:58.141 sys 0m0.165s 00:10:58.141 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:58.141 11:52:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.141 ************************************ 00:10:58.141 END TEST locking_overlapped_coremask_via_rpc 00:10:58.141 ************************************ 00:10:58.141 11:52:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:58.141 11:52:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126047 ]] 00:10:58.141 11:52:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126047 00:10:58.141 11:52:56 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 126047 ']' 00:10:58.141 11:52:56 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 126047 00:10:58.141 11:52:56 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:10:58.141 11:52:56 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:58.141 11:52:56 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 126047 00:10:58.141 11:52:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:58.141 11:52:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:58.141 killing process with pid 126047 00:10:58.141 11:52:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 126047' 00:10:58.141 11:52:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 126047 00:10:58.141 11:52:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 126047 00:10:59.073 11:52:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126070 ]] 00:10:59.073 11:52:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126070 00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 126070 ']' 00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 126070 00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 126070 00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:10:59.073 killing process with pid 126070 00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 126070' 00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 126070 00:10:59.073 11:52:57 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 126070 00:10:59.639 11:52:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:59.639 11:52:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:59.639 11:52:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126047 ]] 00:10:59.639 11:52:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126047 00:10:59.639 11:52:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 126047 ']' 00:10:59.639 11:52:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 126047 00:10:59.639 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (126047) - No such process 00:10:59.639 Process with pid 126047 is not found 00:10:59.639 11:52:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 126047 is not found' 00:10:59.639 11:52:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126070 ]] 00:10:59.639 11:52:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126070 00:10:59.639 11:52:58 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 126070 ']' 00:10:59.639 11:52:58 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 126070 00:10:59.639 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (126070) - No such process 00:10:59.639 Process with pid 126070 is not found 00:10:59.639 11:52:58 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 126070 is not found' 00:10:59.639 11:52:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:59.639 00:10:59.639 real 0m20.798s 00:10:59.639 user 0m36.673s 00:10:59.639 sys 0m6.265s 00:10:59.639 11:52:58 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:59.639 ************************************ 00:10:59.639 END TEST cpu_locks 00:10:59.639 11:52:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:59.639 ************************************ 00:10:59.639 00:10:59.639 real 0m47.643s 00:10:59.639 user 1m31.755s 00:10:59.639 sys 0m9.902s 00:10:59.639 11:52:58 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:59.639 11:52:58 event -- common/autotest_common.sh@10 -- # set +x 00:10:59.639 ************************************ 00:10:59.639 END TEST event 00:10:59.640 ************************************ 00:10:59.640 11:52:58 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:59.640 11:52:58 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:59.640 11:52:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:59.640 11:52:58 -- common/autotest_common.sh@10 -- # set +x 00:10:59.640 ************************************ 00:10:59.640 START TEST thread 00:10:59.640 ************************************ 00:10:59.640 11:52:58 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:59.898 * Looking for test 
storage... 00:10:59.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:59.898 11:52:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:59.898 11:52:58 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:10:59.898 11:52:58 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:59.898 11:52:58 thread -- common/autotest_common.sh@10 -- # set +x 00:10:59.898 ************************************ 00:10:59.898 START TEST thread_poller_perf 00:10:59.898 ************************************ 00:10:59.898 11:52:58 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:59.898 [2024-07-21 11:52:58.571497] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:59.898 [2024-07-21 11:52:58.572626] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126209 ] 00:10:59.898 [2024-07-21 11:52:58.740315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.157 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:11:00.157 [2024-07-21 11:52:58.846144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.533 ====================================== 00:11:01.533 busy:2211687494 (cyc) 00:11:01.533 total_run_count: 324000 00:11:01.533 tsc_hz: 2200000000 (cyc) 00:11:01.533 ====================================== 00:11:01.533 poller_cost: 6826 (cyc), 3102 (nsec) 00:11:01.533 00:11:01.533 real 0m1.467s 00:11:01.533 user 0m1.229s 00:11:01.533 sys 0m0.136s 00:11:01.533 11:53:00 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:01.533 11:53:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:01.533 ************************************ 00:11:01.533 END TEST thread_poller_perf 00:11:01.533 ************************************ 00:11:01.533 11:53:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:01.533 11:53:00 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:11:01.533 11:53:00 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:01.533 11:53:00 thread -- common/autotest_common.sh@10 -- # set +x 00:11:01.533 ************************************ 00:11:01.533 START TEST thread_poller_perf 00:11:01.533 ************************************ 00:11:01.533 11:53:00 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:11:01.533 [2024-07-21 11:53:00.087913] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:11:01.533 [2024-07-21 11:53:00.088258] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126254 ] 00:11:01.533 [2024-07-21 11:53:00.265336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.533 [2024-07-21 11:53:00.393201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.533 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:11:02.909 ====================================== 00:11:02.909 busy:2204337254 (cyc) 00:11:02.909 total_run_count: 4148000 00:11:02.909 tsc_hz: 2200000000 (cyc) 00:11:02.909 ====================================== 00:11:02.909 poller_cost: 531 (cyc), 241 (nsec) 00:11:02.909 00:11:02.909 real 0m1.470s 00:11:02.909 user 0m1.223s 00:11:02.909 sys 0m0.147s 00:11:02.909 11:53:01 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:02.909 11:53:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:11:02.909 ************************************ 00:11:02.909 END TEST thread_poller_perf 00:11:02.909 ************************************ 00:11:02.909 11:53:01 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:11:02.909 11:53:01 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:02.909 11:53:01 thread -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:02.909 11:53:01 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.909 11:53:01 thread -- common/autotest_common.sh@10 -- # set +x 00:11:02.909 ************************************ 00:11:02.909 START TEST thread_spdk_lock 00:11:02.909 ************************************ 00:11:02.909 11:53:01 thread.thread_spdk_lock -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:11:02.909 [2024-07-21 11:53:01.613284] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:11:02.909 [2024-07-21 11:53:01.613547] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126295 ] 00:11:03.168 [2024-07-21 11:53:01.786737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:03.168 [2024-07-21 11:53:01.892711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.168 [2024-07-21 11:53:01.892711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.105 [2024-07-21 11:53:02.622267] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 961:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:04.105 [2024-07-21 11:53:02.623579] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3072:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:11:04.105 [2024-07-21 11:53:02.623764] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3027:sspin_stacks_print: *ERROR*: spinlock 0x555ea8ca6500 00:11:04.105 [2024-07-21 11:53:02.625610] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:04.105 [2024-07-21 11:53:02.625835] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1022:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:04.105 [2024-07-21 11:53:02.626035] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 856:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:11:04.105 Starting test contend 00:11:04.105 Worker Delay Wait us Hold us Total us 00:11:04.105 0 3 134804 225824 360629 00:11:04.105 1 5 33401 346013 379415 00:11:04.105 PASS test contend 00:11:04.105 Starting test hold_by_poller 00:11:04.105 PASS test hold_by_poller 00:11:04.105 Starting test hold_by_message 00:11:04.105 PASS test hold_by_message 00:11:04.105 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:11:04.105 100014 assertions passed 00:11:04.105 0 assertions failed 00:11:04.105 00:11:04.105 real 0m1.166s 00:11:04.105 user 0m1.683s 00:11:04.105 sys 0m0.113s 00:11:04.105 11:53:02 thread.thread_spdk_lock -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.105 11:53:02 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 ************************************ 00:11:04.105 END TEST thread_spdk_lock 00:11:04.105 ************************************ 00:11:04.105 00:11:04.105 real 0m4.346s 00:11:04.105 user 0m4.259s 00:11:04.105 sys 0m0.516s 00:11:04.105 11:53:02 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:04.105 11:53:02 thread -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 ************************************ 00:11:04.105 END TEST thread 00:11:04.105 ************************************ 00:11:04.105 11:53:02 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:04.105 11:53:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:04.105 11:53:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.105 11:53:02 -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 
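The two poller_perf summaries above line up as plain arithmetic: poller_cost in cycles is busy cycles divided by total_run_count, and the nanosecond figure converts that through tsc_hz. A quick re-check of the first run with the values copied from its output (the second run gives 531 cyc / 241 nsec the same way):

    awk 'BEGIN {
        busy = 2211687494; runs = 324000; tsc_hz = 2200000000   # figures from the 1 us run above
        cyc = busy / runs
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz
    }'
    # prints: poller_cost: 6826 (cyc), 3102 (nsec)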
************************************ 00:11:04.105 START TEST accel 00:11:04.105 ************************************ 00:11:04.105 11:53:02 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:11:04.105 * Looking for test storage... 00:11:04.105 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:04.105 11:53:02 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:11:04.105 11:53:02 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:11:04.105 11:53:02 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:04.105 11:53:02 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=126377 00:11:04.105 11:53:02 accel -- accel/accel.sh@63 -- # waitforlisten 126377 00:11:04.105 11:53:02 accel -- common/autotest_common.sh@827 -- # '[' -z 126377 ']' 00:11:04.105 11:53:02 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.105 11:53:02 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:04.105 11:53:02 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.105 11:53:02 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:04.105 11:53:02 accel -- common/autotest_common.sh@10 -- # set +x 00:11:04.105 11:53:02 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:11:04.105 11:53:02 accel -- accel/accel.sh@61 -- # build_accel_config 00:11:04.105 11:53:02 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:04.105 11:53:02 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:04.105 11:53:02 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:04.105 11:53:02 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:04.105 11:53:02 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:04.105 11:53:02 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:04.105 11:53:02 accel -- accel/accel.sh@41 -- # jq -r . 00:11:04.364 [2024-07-21 11:53:03.007109] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:04.364 [2024-07-21 11:53:03.007340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126377 ] 00:11:04.364 [2024-07-21 11:53:03.173182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.622 [2024-07-21 11:53:03.275163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.187 11:53:03 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:05.187 11:53:03 accel -- common/autotest_common.sh@860 -- # return 0 00:11:05.187 11:53:03 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:11:05.187 11:53:03 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:11:05.187 11:53:03 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:11:05.187 11:53:03 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:11:05.187 11:53:03 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:11:05.187 11:53:04 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:11:05.187 11:53:04 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:11:05.187 11:53:04 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.187 11:53:04 accel -- common/autotest_common.sh@10 -- # set +x 00:11:05.187 11:53:04 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.187 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.187 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.187 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.187 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.187 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.187 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.187 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.187 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.187 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.187 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.187 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.187 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.187 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.187 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.187 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.187 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.187 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.444 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.444 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.444 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.444 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.444 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.444 
11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.444 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.444 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.444 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.444 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.444 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.444 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.444 11:53:04 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # IFS== 00:11:05.444 11:53:04 accel -- accel/accel.sh@72 -- # read -r opc module 00:11:05.444 11:53:04 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:11:05.444 11:53:04 accel -- accel/accel.sh@75 -- # killprocess 126377 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@946 -- # '[' -z 126377 ']' 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@950 -- # kill -0 126377 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@951 -- # uname 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 126377 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 126377' 00:11:05.444 killing process with pid 126377 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@965 -- # kill 126377 00:11:05.444 11:53:04 accel -- common/autotest_common.sh@970 -- # wait 126377 00:11:06.080 11:53:04 accel -- accel/accel.sh@76 -- # trap - ERR 00:11:06.080 11:53:04 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:11:06.080 11:53:04 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:06.080 11:53:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:06.080 11:53:04 accel -- common/autotest_common.sh@10 -- # set +x 00:11:06.080 11:53:04 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:11:06.080 11:53:04 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:11:06.080 11:53:04 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:11:06.080 11:53:04 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:06.080 11:53:04 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:06.080 11:53:04 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.080 11:53:04 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.080 11:53:04 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:06.080 11:53:04 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:11:06.080 11:53:04 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
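The long IFS==/read loop above walks the output of accel_get_opc_assignments after the jq filter has flattened the opcode-to-module map into key=value lines. The filter can be tried on a hand-written sample (the JSON below is made up; the real map comes from the RPC):

    echo '{"copy": "software", "fill": "software", "crc32c": "software"}' \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    # copy=software
    # fill=software
    # crc32c=software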
00:11:06.080 11:53:04 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:06.080 11:53:04 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:11:06.080 11:53:04 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:11:06.080 11:53:04 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:11:06.080 11:53:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:06.080 11:53:04 accel -- common/autotest_common.sh@10 -- # set +x 00:11:06.080 ************************************ 00:11:06.080 START TEST accel_missing_filename 00:11:06.080 ************************************ 00:11:06.080 11:53:04 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:11:06.080 11:53:04 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:11:06.080 11:53:04 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:11:06.080 11:53:04 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:06.080 11:53:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.080 11:53:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:06.080 11:53:04 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.080 11:53:04 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:11:06.080 11:53:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:11:06.080 11:53:04 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:11:06.080 11:53:04 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:06.080 11:53:04 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:06.080 11:53:04 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.080 11:53:04 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.080 11:53:04 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:06.080 11:53:04 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:11:06.080 11:53:04 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:11:06.080 [2024-07-21 11:53:04.877840] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:06.080 [2024-07-21 11:53:04.878103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126449 ] 00:11:06.338 [2024-07-21 11:53:05.041184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:06.338 [2024-07-21 11:53:05.151196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:06.595 [2024-07-21 11:53:05.232172] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:06.595 [2024-07-21 11:53:05.361938] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:11:06.852 A filename is required. 
00:11:06.852 11:53:05 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:11:06.852 11:53:05 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:06.852 11:53:05 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:11:06.852 11:53:05 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:11:06.852 11:53:05 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:11:06.852 11:53:05 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:06.852 00:11:06.852 real 0m0.659s 00:11:06.853 user 0m0.383s 00:11:06.853 sys 0m0.215s 00:11:06.853 11:53:05 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:06.853 11:53:05 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:11:06.853 ************************************ 00:11:06.853 END TEST accel_missing_filename 00:11:06.853 ************************************ 00:11:06.853 11:53:05 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:06.853 11:53:05 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:11:06.853 11:53:05 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:06.853 11:53:05 accel -- common/autotest_common.sh@10 -- # set +x 00:11:06.853 ************************************ 00:11:06.853 START TEST accel_compress_verify 00:11:06.853 ************************************ 00:11:06.853 11:53:05 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:06.853 11:53:05 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:11:06.853 11:53:05 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:06.853 11:53:05 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:06.853 11:53:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.853 11:53:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:06.853 11:53:05 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:06.853 11:53:05 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:06.853 11:53:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:06.853 11:53:05 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:06.853 11:53:05 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:06.853 11:53:05 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:06.853 11:53:05 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.853 11:53:05 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.853 11:53:05 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:06.853 11:53:05 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:06.853 11:53:05 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:11:06.853 [2024-07-21 11:53:05.590304] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:06.853 [2024-07-21 11:53:05.590819] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126481 ] 00:11:07.109 [2024-07-21 11:53:05.759461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:07.109 [2024-07-21 11:53:05.873509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.109 [2024-07-21 11:53:05.959904] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:07.367 [2024-07-21 11:53:06.085150] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:11:07.367 00:11:07.367 Compression does not support the verify option, aborting. 00:11:07.367 11:53:06 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:11:07.367 11:53:06 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:07.367 11:53:06 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:11:07.367 11:53:06 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:11:07.367 11:53:06 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:11:07.367 11:53:06 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:07.367 00:11:07.367 real 0m0.662s 00:11:07.367 user 0m0.389s 00:11:07.367 sys 0m0.214s 00:11:07.367 11:53:06 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.367 ************************************ 00:11:07.367 11:53:06 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:11:07.367 END TEST accel_compress_verify 00:11:07.367 ************************************ 00:11:07.624 11:53:06 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:11:07.624 11:53:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:11:07.624 11:53:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.624 11:53:06 accel -- common/autotest_common.sh@10 -- # set +x 00:11:07.624 ************************************ 00:11:07.624 START TEST accel_wrong_workload 00:11:07.624 ************************************ 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:11:07.624 11:53:06 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:11:07.624 11:53:06 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:11:07.624 11:53:06 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:07.624 11:53:06 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:07.624 11:53:06 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.624 11:53:06 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.624 11:53:06 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:07.624 11:53:06 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:11:07.624 11:53:06 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:11:07.624 Unsupported workload type: foobar 00:11:07.624 [2024-07-21 11:53:06.301707] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:11:07.624 accel_perf options: 00:11:07.624 [-h help message] 00:11:07.624 [-q queue depth per core] 00:11:07.624 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:07.624 [-T number of threads per core 00:11:07.624 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:07.624 [-t time in seconds] 00:11:07.624 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:07.624 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:07.624 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:07.624 [-l for compress/decompress workloads, name of uncompressed input file 00:11:07.624 [-S for crc32c workload, use this seed value (default 0) 00:11:07.624 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:07.624 [-f for fill workload, use this BYTE value (default 255) 00:11:07.624 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:07.624 [-y verify result if this switch is on] 00:11:07.624 [-a tasks to allocate per core (default: same value as -q)] 00:11:07.624 Can be used to spread operations across a wider range of memory. 
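For comparison with the unsupported '-w foobar' attempt above, a minimal sketch of a valid invocation, assembled only from the options listed here and the invocations recorded later in this log:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
i.e. run the crc32c workload for 1 second with seed value 32 and verify the results (-y); -c hands accel_perf the JSON accel configuration that build_accel_config assembles and passes in over file descriptor 62. This is an illustrative sketch, not part of the captured output.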
00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:07.624 00:11:07.624 real 0m0.058s 00:11:07.624 user 0m0.089s 00:11:07.624 sys 0m0.024s 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.624 11:53:06 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:11:07.624 ************************************ 00:11:07.624 END TEST accel_wrong_workload 00:11:07.624 ************************************ 00:11:07.624 11:53:06 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:11:07.624 11:53:06 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:11:07.624 11:53:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.624 11:53:06 accel -- common/autotest_common.sh@10 -- # set +x 00:11:07.624 ************************************ 00:11:07.624 START TEST accel_negative_buffers 00:11:07.624 ************************************ 00:11:07.624 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:11:07.624 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:11:07.624 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:11:07.624 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:11:07.624 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.624 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:11:07.624 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:07.625 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:11:07.625 11:53:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:11:07.625 11:53:06 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:11:07.625 11:53:06 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:07.625 11:53:06 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:07.625 11:53:06 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.625 11:53:06 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.625 11:53:06 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:07.625 11:53:06 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:11:07.625 11:53:06 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:11:07.625 -x option must be non-negative. 
00:11:07.625 [2024-07-21 11:53:06.413987] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:11:07.625 accel_perf options: 00:11:07.625 [-h help message] 00:11:07.625 [-q queue depth per core] 00:11:07.625 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:11:07.625 [-T number of threads per core 00:11:07.625 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:11:07.625 [-t time in seconds] 00:11:07.625 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:11:07.625 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:11:07.625 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:11:07.625 [-l for compress/decompress workloads, name of uncompressed input file 00:11:07.625 [-S for crc32c workload, use this seed value (default 0) 00:11:07.625 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:11:07.625 [-f for fill workload, use this BYTE value (default 255) 00:11:07.625 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:11:07.625 [-y verify result if this switch is on] 00:11:07.625 [-a tasks to allocate per core (default: same value as -q)] 00:11:07.625 Can be used to spread operations across a wider range of memory. 00:11:07.625 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:11:07.625 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:07.625 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:07.625 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:07.625 00:11:07.625 real 0m0.059s 00:11:07.625 user 0m0.079s 00:11:07.625 sys 0m0.033s 00:11:07.625 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:07.625 11:53:06 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:11:07.625 ************************************ 00:11:07.625 END TEST accel_negative_buffers 00:11:07.625 ************************************ 00:11:07.625 11:53:06 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:11:07.625 11:53:06 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:11:07.625 11:53:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:07.625 11:53:06 accel -- common/autotest_common.sh@10 -- # set +x 00:11:07.883 ************************************ 00:11:07.883 START TEST accel_crc32c 00:11:07.883 ************************************ 00:11:07.883 11:53:06 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:11:07.883 11:53:06 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:11:07.883 11:53:06 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:11:07.883 [2024-07-21 11:53:06.527543] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:07.883 [2024-07-21 11:53:06.527816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126564 ] 00:11:07.883 [2024-07-21 11:53:06.695493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.140 [2024-07-21 11:53:06.798493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:08.140 11:53:06 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:09.513 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:11:09.514 11:53:08 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:09.514 00:11:09.514 real 0m1.643s 00:11:09.514 user 0m1.390s 00:11:09.514 sys 0m0.188s 00:11:09.514 11:53:08 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:09.514 11:53:08 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:11:09.514 ************************************ 00:11:09.514 END TEST accel_crc32c 00:11:09.514 ************************************ 00:11:09.514 11:53:08 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:11:09.514 11:53:08 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:11:09.514 11:53:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:09.514 11:53:08 accel -- common/autotest_common.sh@10 -- # set +x 00:11:09.514 ************************************ 00:11:09.514 START TEST accel_crc32c_C2 00:11:09.514 ************************************ 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:11:09.514 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:11:09.514 [2024-07-21 11:53:08.228912] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:09.514 [2024-07-21 11:53:08.229165] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126615 ] 00:11:09.772 [2024-07-21 11:53:08.390885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.772 [2024-07-21 11:53:08.491684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.772 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:09.772 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.772 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.772 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.772 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:09.772 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.772 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.772 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:09.773 11:53:08 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:11.145 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:11.146 00:11:11.146 real 0m1.637s 00:11:11.146 user 0m1.390s 00:11:11.146 sys 0m0.177s 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:11.146 11:53:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 ************************************ 00:11:11.146 END TEST accel_crc32c_C2 00:11:11.146 ************************************ 00:11:11.146 11:53:09 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:11:11.146 11:53:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:11:11.146 11:53:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:11.146 11:53:09 accel -- common/autotest_common.sh@10 -- # set +x 00:11:11.146 ************************************ 00:11:11.146 START TEST accel_copy 00:11:11.146 ************************************ 00:11:11.146 11:53:09 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:11:11.146 11:53:09 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:11:11.146 [2024-07-21 11:53:09.914096] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:11.146 [2024-07-21 11:53:09.914665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126654 ] 00:11:11.403 [2024-07-21 11:53:10.081866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.403 [2024-07-21 11:53:10.190661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:11.660 11:53:10 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:13.030 11:53:11 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:11:13.030 11:53:11 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:13.030 00:11:13.030 real 0m1.654s 00:11:13.030 user 0m1.383s 00:11:13.030 sys 0m0.199s 00:11:13.030 11:53:11 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:13.030 11:53:11 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:11:13.030 ************************************ 00:11:13.030 END TEST accel_copy 00:11:13.030 ************************************ 00:11:13.030 11:53:11 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:13.030 11:53:11 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:13.030 11:53:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:13.030 11:53:11 accel -- common/autotest_common.sh@10 -- # set +x 00:11:13.030 ************************************ 00:11:13.030 START TEST accel_fill 00:11:13.030 ************************************ 00:11:13.030 11:53:11 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:11:13.030 11:53:11 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
00:11:13.030 [2024-07-21 11:53:11.619721] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:13.030 [2024-07-21 11:53:11.620020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126706 ] 00:11:13.030 [2024-07-21 11:53:11.783778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.030 [2024-07-21 11:53:11.883103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:11:13.287 11:53:11 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:13.287 11:53:11 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:11:14.660 11:53:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:14.660 00:11:14.660 real 0m1.636s 00:11:14.660 user 0m1.375s 00:11:14.660 sys 0m0.199s 00:11:14.660 11:53:13 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.660 ************************************ 00:11:14.660 END TEST accel_fill 00:11:14.660 ************************************ 00:11:14.660 11:53:13 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:11:14.660 11:53:13 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:11:14.660 11:53:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:11:14.660 11:53:13 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:14.660 11:53:13 accel -- common/autotest_common.sh@10 -- # set +x 00:11:14.660 ************************************ 00:11:14.660 START TEST accel_copy_crc32c 00:11:14.660 ************************************ 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:11:14.660 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:11:14.660 [2024-07-21 11:53:13.314677] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:11:14.660 [2024-07-21 11:53:13.315116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126754 ] 00:11:14.660 [2024-07-21 11:53:13.486555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.919 [2024-07-21 11:53:13.598509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:14.919 11:53:13 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:16.291 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:16.292 00:11:16.292 real 0m1.667s 00:11:16.292 user 0m1.394s 00:11:16.292 sys 0m0.197s 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:16.292 11:53:14 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:11:16.292 ************************************ 00:11:16.292 END TEST accel_copy_crc32c 00:11:16.292 ************************************ 00:11:16.292 11:53:14 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:11:16.292 11:53:14 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:11:16.292 11:53:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:16.292 11:53:14 accel -- common/autotest_common.sh@10 -- # set +x 00:11:16.292 ************************************ 00:11:16.292 START TEST accel_copy_crc32c_C2 00:11:16.292 ************************************ 00:11:16.292 11:53:14 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:11:16.292 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:11:16.292 [2024-07-21 11:53:15.035168] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:16.292 [2024-07-21 11:53:15.035410] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126798 ] 00:11:16.550 [2024-07-21 11:53:15.201542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.550 [2024-07-21 11:53:15.299353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:11:16.550 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
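The long runs of "val=", "case \"$var\" in", "IFS=:" and "read -r var val" entries around this point are bash xtrace of a loop that walks colon-separated name/value pairs and records the module and opcode in use. A minimal sketch of that loop shape, assuming a hypothetical capture file (illustrative only, not the actual accel.sh implementation; the key names matched here are placeholders):

  # Parse "name: value" lines and keep the fields the later assertions need.
  while IFS=: read -r var val; do
      case "$var" in
          *opcode*)  accel_opc=${val# } ;;      # e.g. copy_crc32c
          *odule*)   accel_module=${val# } ;;   # e.g. software
      esac
  done < accel_perf_output.txt                  # hypothetical capture file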
00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:16.551 11:53:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:17.926 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:11:17.927 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:11:17.927 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:11:17.927 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:11:17.927 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:17.927 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:11:17.927 11:53:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:17.927 00:11:17.927 real 0m1.634s 00:11:17.927 user 0m1.385s 00:11:17.927 sys 0m0.190s 00:11:17.927 11:53:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:17.927 11:53:16 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:11:17.927 ************************************ 00:11:17.927 END TEST accel_copy_crc32c_C2 00:11:17.927 ************************************ 00:11:17.927 11:53:16 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:11:17.927 11:53:16 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:11:17.927 11:53:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:17.927 11:53:16 accel -- common/autotest_common.sh@10 -- # set +x 00:11:17.927 ************************************ 00:11:17.927 START TEST accel_dualcast 00:11:17.927 ************************************ 00:11:17.927 11:53:16 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:11:17.927 11:53:16 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:11:17.927 [2024-07-21 11:53:16.723422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
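The command traced just above starts accel_perf for the dualcast workload, passing a configuration via /dev/fd/62. A hypothetical direct invocation using the same binary path and flags shown in the log; the "-c /dev/fd/62" plumbing is omitted here, so this sketch relies on whatever defaults the tool applies without it, and it assumes the example binary has been built at that path:

  # Binary path, run time (-t 1), workload (-w dualcast) and -y copied from the trace.
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y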
00:11:17.927 [2024-07-21 11:53:16.723598] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126844 ] 00:11:18.185 [2024-07-21 11:53:16.873592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.185 [2024-07-21 11:53:16.977621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.442 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:18.443 11:53:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:19.822 
11:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:11:19.822 11:53:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:19.822 00:11:19.822 real 0m1.625s 00:11:19.822 user 0m1.389s 00:11:19.822 sys 0m0.193s 00:11:19.822 11:53:18 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:19.822 11:53:18 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:11:19.822 ************************************ 00:11:19.822 END TEST accel_dualcast 00:11:19.822 ************************************ 00:11:19.822 11:53:18 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:11:19.822 11:53:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:11:19.822 11:53:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:19.822 11:53:18 accel -- common/autotest_common.sh@10 -- # set +x 00:11:19.822 ************************************ 00:11:19.822 START TEST accel_compare 00:11:19.822 ************************************ 00:11:19.822 11:53:18 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:11:19.822 11:53:18 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:11:19.822 [2024-07-21 11:53:18.403260] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
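Every case in this log is bracketed by "START TEST …" / "END TEST …" banners and followed by real/user/sys timings, which come from a timing wrapper around the test function. A minimal approximation of that pattern (illustrative; the real run_test in autotest_common.sh does more than this sketch):

  run_test_sketch() {
      # Print the banners seen in the log and time the wrapped command.
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
  }
  # Mirrors the invocation traced above for the compare case.
  run_test_sketch accel_compare accel_test -t 1 -w compare -y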
00:11:19.822 [2024-07-21 11:53:18.403643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126895 ] 00:11:19.822 [2024-07-21 11:53:18.559059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.822 [2024-07-21 11:53:18.641139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:20.080 11:53:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:21.455 11:53:19 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:11:21.455 11:53:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:21.455 00:11:21.455 real 0m1.591s 00:11:21.455 user 0m1.351s 00:11:21.455 sys 0m0.184s 00:11:21.455 11:53:19 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:21.455 11:53:19 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:11:21.455 ************************************ 00:11:21.455 END TEST accel_compare 00:11:21.455 ************************************ 00:11:21.455 11:53:20 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:11:21.455 11:53:20 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:11:21.455 11:53:20 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:21.455 11:53:20 accel -- common/autotest_common.sh@10 -- # set +x 00:11:21.455 ************************************ 00:11:21.455 START TEST accel_xor 00:11:21.455 ************************************ 00:11:21.455 11:53:20 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:11:21.455 11:53:20 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:11:21.455 [2024-07-21 11:53:20.049481] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
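Each traced command above carries an elapsed-time counter, a wall-clock stamp and the test path as its prefix. One way to get time-stamped xtrace prefixes of roughly this shape is to put expansions into PS4 before enabling tracing; this is only an illustration, not the PS4 that SPDK's autotest scripts actually set:

  # PS4 is re-expanded for every traced command, so the timestamp and function
  # name are evaluated per line.
  export PS4='+ $(date "+%T") ${FUNCNAME[0]:-main} -- '
  set -x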
00:11:21.455 [2024-07-21 11:53:20.049882] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126934 ] 00:11:21.455 [2024-07-21 11:53:20.204970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.455 [2024-07-21 11:53:20.314714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:11:21.713 11:53:20 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.713 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:21.714 11:53:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:23.089 00:11:23.089 real 0m1.641s 00:11:23.089 user 0m1.399s 00:11:23.089 sys 0m0.187s 00:11:23.089 11:53:21 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:23.089 11:53:21 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:23.089 ************************************ 00:11:23.089 END TEST accel_xor 00:11:23.089 ************************************ 00:11:23.089 11:53:21 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:11:23.089 11:53:21 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:11:23.089 11:53:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:23.089 11:53:21 accel -- common/autotest_common.sh@10 -- # set +x 00:11:23.089 ************************************ 00:11:23.089 START TEST accel_xor 00:11:23.089 ************************************ 00:11:23.089 11:53:21 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:11:23.089 11:53:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:11:23.089 [2024-07-21 11:53:21.743484] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
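After the workload finishes, every case in this log runs the same three checks: a module name was captured, an opcode was captured, and the module that ran was the software one. Written out with the variable names the trace shows being assigned (the literal values software/xor are what this particular run produced):

  [[ -n $accel_module ]]            # traced above as: [[ -n software ]]
  [[ -n $accel_opc ]]               # traced above as: [[ -n xor ]]
  [[ $accel_module == software ]]   # this run used the software module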
00:11:23.089 [2024-07-21 11:53:21.744028] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126985 ] 00:11:23.089 [2024-07-21 11:53:21.909416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.347 [2024-07-21 11:53:22.003904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:11:23.347 11:53:22 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:23.347 11:53:22 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:24.719 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:11:24.720 11:53:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:24.720 00:11:24.720 real 0m1.635s 00:11:24.720 user 0m1.370s 00:11:24.720 sys 0m0.199s 00:11:24.720 11:53:23 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:24.720 11:53:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:11:24.720 ************************************ 00:11:24.720 END TEST accel_xor 00:11:24.720 ************************************ 00:11:24.720 11:53:23 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:11:24.720 11:53:23 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:24.720 11:53:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:24.720 11:53:23 accel -- common/autotest_common.sh@10 -- # set +x 00:11:24.720 ************************************ 00:11:24.720 START TEST accel_dif_verify 00:11:24.720 ************************************ 00:11:24.720 11:53:23 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:11:24.720 11:53:23 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:11:24.720 [2024-07-21 11:53:23.428377] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
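The per-case wall-clock cost shows up in the "real 0mX.XXXs" lines (roughly 1.6 s per workload in this run). A hypothetical one-liner for pulling those timings back out of a saved copy of this console output; it is not part of the SPDK scripts, and the log filename is a placeholder:

  grep -oE 'real[[:space:]]+[0-9]+m[0-9.]+s' console-output.log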
00:11:24.720 [2024-07-21 11:53:23.428706] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127025 ] 00:11:24.720 [2024-07-21 11:53:23.581864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.977 [2024-07-21 11:53:23.674793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.977 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:24.978 11:53:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:26.350 11:53:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:26.350 11:53:25 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:11:26.350 11:53:25 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:11:26.350 11:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:11:26.350 11:53:25 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:11:26.350 11:53:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:26.350 11:53:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:11:26.350 11:53:25 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:26.350 00:11:26.350 real 0m1.613s 00:11:26.350 user 0m1.335s 00:11:26.350 sys 0m0.214s 00:11:26.350 11:53:25 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:26.350 11:53:25 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:11:26.350 ************************************ 00:11:26.350 END TEST accel_dif_verify 00:11:26.350 ************************************ 00:11:26.350 11:53:25 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:11:26.350 11:53:25 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:26.350 11:53:25 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:26.350 11:53:25 accel -- common/autotest_common.sh@10 -- # set +x 00:11:26.350 ************************************ 00:11:26.350 START TEST accel_dif_generate 00:11:26.350 ************************************ 00:11:26.350 11:53:25 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:11:26.350 11:53:25 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:11:26.350 11:53:25 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:11:26.350 [2024-07-21 11:53:25.104109] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:26.350 [2024-07-21 11:53:25.104441] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127076 ] 00:11:26.608 [2024-07-21 11:53:25.273007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.608 [2024-07-21 11:53:25.360059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 
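Most of the repeated trace lines here (IFS=:, read -r var val, case "$var" in) come from accel.sh reading accel_perf's colon-separated key/value output one field pair at a time. The loop below is a simplified sketch of that idiom, not the literal accel.sh code; the matched key names 'workload' and 'module' are illustrative placeholders.

# simplified sketch of the loop behind the repeated IFS=: / read -r var val / case lines
while IFS=: read -r var val; do
    case "$var" in
        workload) accel_opc=$val ;;   # placeholder key name
        module)   accel_module=$val ;; # placeholder key name
    esac
done < <(./build/examples/accel_perf -t 1 -w dif_generate)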
00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:26.608 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:26.609 11:53:25 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:11:28.000 11:53:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:28.000 00:11:28.000 real 0m1.637s 00:11:28.000 user 0m1.376s 00:11:28.000 sys 0m0.190s 00:11:28.000 11:53:26 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:28.000 
11:53:26 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:11:28.000 ************************************ 00:11:28.000 END TEST accel_dif_generate 00:11:28.000 ************************************ 00:11:28.000 11:53:26 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:11:28.000 11:53:26 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:11:28.000 11:53:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:28.000 11:53:26 accel -- common/autotest_common.sh@10 -- # set +x 00:11:28.000 ************************************ 00:11:28.000 START TEST accel_dif_generate_copy 00:11:28.000 ************************************ 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:11:28.000 11:53:26 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:11:28.000 [2024-07-21 11:53:26.792177] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
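The START TEST / END TEST banners and the real/user/sys triplets that bracket each case are produced by the run_test helper from autotest_common.sh. Conceptually it behaves like the simplified stand-in below; the real helper also handles xtrace toggling and timing bookkeeping, so this is only a sketch, and the direct accel_perf call stands in for the harness's accel_test wrapper.

# simplified stand-in for run_test: banner, timed execution, banner
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}
run_test_sketch accel_dif_generate_copy ./build/examples/accel_perf -t 1 -w dif_generate_copy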
00:11:28.000 [2024-07-21 11:53:26.793101] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127128 ] 00:11:28.277 [2024-07-21 11:53:26.961860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:28.277 [2024-07-21 11:53:27.061912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val= 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.536 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:28.537 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:28.537 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:28.537 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:28.537 11:53:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:29.913 11:53:28 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:29.913 00:11:29.913 real 0m1.664s 00:11:29.913 user 0m1.377s 00:11:29.913 sys 0m0.224s 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:29.913 ************************************ 00:11:29.913 END TEST accel_dif_generate_copy 00:11:29.913 ************************************ 00:11:29.913 11:53:28 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:11:29.913 11:53:28 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:11:29.913 11:53:28 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:29.913 11:53:28 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:11:29.913 11:53:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:29.913 11:53:28 accel -- common/autotest_common.sh@10 -- # set +x 00:11:29.913 ************************************ 00:11:29.913 START TEST accel_comp 00:11:29.913 ************************************ 00:11:29.913 11:53:28 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:11:29.913 11:53:28 accel.accel_comp -- 
accel/accel.sh@19 -- # IFS=: 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:29.913 11:53:28 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:11:29.914 11:53:28 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:11:29.914 [2024-07-21 11:53:28.513966] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:29.914 [2024-07-21 11:53:28.514222] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127166 ] 00:11:29.914 [2024-07-21 11:53:28.682236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.914 [2024-07-21 11:53:28.774333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.172 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp 
-- accel/accel.sh@20 -- # val=compress 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 
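For the compress case the harness additionally points accel_perf at an input file with -l, as seen in the command line above. Reproducing just this step by hand looks roughly like the sketch below, with the path taken from the trace and the generated JSON config again omitted.

# sketch: compress the bundled test input for 1 second on the software engine
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib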
00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:30.173 11:53:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:11:31.549 11:53:30 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:31.549 00:11:31.549 real 0m1.662s 00:11:31.549 user 0m1.373s 00:11:31.549 sys 0m0.210s 00:11:31.549 11:53:30 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:31.549 11:53:30 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:11:31.549 ************************************ 00:11:31.549 END TEST accel_comp 00:11:31.549 ************************************ 00:11:31.549 11:53:30 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:31.549 11:53:30 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:11:31.549 11:53:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:31.549 11:53:30 accel -- common/autotest_common.sh@10 -- # set +x 00:11:31.549 ************************************ 00:11:31.549 START TEST accel_decomp 00:11:31.549 ************************************ 00:11:31.549 11:53:30 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@16 -- # local 
accel_opc 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:11:31.549 11:53:30 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:11:31.549 [2024-07-21 11:53:30.227119] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:31.549 [2024-07-21 11:53:30.227547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127217 ] 00:11:31.549 [2024-07-21 11:53:30.394382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.807 [2024-07-21 11:53:30.498458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.807 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:31.808 11:53:30 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r 
var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:31.808 11:53:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:11:33.179 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:11:33.180 11:53:31 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:11:33.180 11:53:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:33.180 11:53:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:33.180 11:53:31 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:33.180 00:11:33.180 real 0m1.685s 00:11:33.180 user 0m1.414s 00:11:33.180 sys 0m0.220s 00:11:33.180 11:53:31 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:33.180 11:53:31 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:11:33.180 ************************************ 00:11:33.180 END TEST accel_decomp 00:11:33.180 ************************************ 00:11:33.180 11:53:31 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:33.180 11:53:31 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:11:33.180 11:53:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:33.180 11:53:31 accel -- 
common/autotest_common.sh@10 -- # set +x 00:11:33.180 ************************************ 00:11:33.180 START TEST accel_decmop_full 00:11:33.180 ************************************ 00:11:33.180 11:53:31 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:11:33.180 11:53:31 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:11:33.180 [2024-07-21 11:53:31.968656] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
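The decompress variants add -y and, in this "full" case, -o 0 to the command line shown above. A hand-run equivalent is sketched below; the flag semantics are an assumption here (-y is taken to request verification of the decompressed output and -o 0 to let the transfer size follow the input), only the flags themselves are copied from the trace.

# sketch: decompress the same input with verification (flag meanings assumed as noted above)
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w decompress -l ./test/accel/bib -y -o 0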
00:11:33.180 [2024-07-21 11:53:31.969097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127263 ] 00:11:33.437 [2024-07-21 11:53:32.134685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.437 [2024-07-21 11:53:32.226259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:33.696 11:53:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:35.070 11:53:33 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:35.070 11:53:33 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:35.070 00:11:35.070 real 0m1.654s 00:11:35.070 user 0m1.405s 00:11:35.070 sys 0m0.188s 00:11:35.070 11:53:33 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:35.070 11:53:33 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:11:35.070 ************************************ 00:11:35.070 END TEST accel_decmop_full 00:11:35.071 ************************************ 00:11:35.071 11:53:33 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:35.071 11:53:33 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:11:35.071 11:53:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:35.071 11:53:33 accel -- common/autotest_common.sh@10 -- # set +x 00:11:35.071 ************************************ 00:11:35.071 START TEST accel_decomp_mcore 00:11:35.071 ************************************ 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
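A note on the '-m 0xf' argument in the accel_decomp_mcore header above: it is the SPDK application core mask; 0xf (binary 1111) selects cores 0-3, which is why the EAL parameters below carry '-c 0xf', four separate 'Reactor started on core N' notices appear, and the test's user time ends up several times larger than its real time. For reference, a standalone reproduction of this step, assuming the repository layout used in this run and leaving out the harness-specific '-c /dev/fd/62' JSON accel configuration, would look roughly like:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf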
00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:11:35.071 11:53:33 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:11:35.071 [2024-07-21 11:53:33.679352] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:35.071 [2024-07-21 11:53:33.679775] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127384 ] 00:11:35.071 [2024-07-21 11:53:33.867486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.329 [2024-07-21 11:53:33.984768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.329 [2024-07-21 11:53:33.984911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.329 [2024-07-21 11:53:33.985067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.329 [2024-07-21 11:53:33.985065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:35.329 11:53:34 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:36.702 00:11:36.702 real 0m1.700s 00:11:36.702 user 0m5.132s 00:11:36.702 sys 0m0.214s 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:36.702 11:53:35 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:36.702 ************************************ 00:11:36.702 END TEST accel_decomp_mcore 00:11:36.702 ************************************ 00:11:36.703 11:53:35 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:36.703 11:53:35 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:36.703 11:53:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:36.703 11:53:35 accel -- common/autotest_common.sh@10 -- # set +x 00:11:36.703 ************************************ 00:11:36.703 START TEST accel_decomp_full_mcore 00:11:36.703 ************************************ 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:11:36.703 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:11:36.703 [2024-07-21 11:53:35.432583] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:36.703 [2024-07-21 11:53:35.433027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127433 ] 00:11:36.961 [2024-07-21 11:53:35.623021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:36.961 [2024-07-21 11:53:35.736501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:36.961 [2024-07-21 11:53:35.736601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:36.961 [2024-07-21 11:53:35.736742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:36.961 [2024-07-21 11:53:35.737015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.219 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.220 11:53:35 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:37.220 11:53:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:11:38.593 11:53:37 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:38.593 00:11:38.593 real 0m1.744s 00:11:38.593 user 0m5.218s 00:11:38.593 sys 0m0.227s 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:38.593 11:53:37 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:11:38.593 ************************************ 00:11:38.593 END TEST accel_decomp_full_mcore 00:11:38.593 ************************************ 00:11:38.593 11:53:37 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:38.593 11:53:37 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:11:38.593 11:53:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:38.593 11:53:37 accel -- common/autotest_common.sh@10 -- # set +x 00:11:38.593 ************************************ 00:11:38.593 START TEST accel_decomp_mthread 00:11:38.593 ************************************ 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:38.593 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:11:38.593 [2024-07-21 11:53:37.231446] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
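A note on the accel_decomp_mthread run being set up above: the '-T 2' option appears to request two worker threads for the decompress workload while the core mask stays at the default 0x1 (a single core, hence the single 'Reactor started on core 0' notice below); the configuration trace accordingly records a value of 2 where the single-threaded variants record 1, everything else matching the plain accel_decomp test.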
00:11:38.593 [2024-07-21 11:53:37.231711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127487 ] 00:11:38.593 [2024-07-21 11:53:37.399820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.853 [2024-07-21 11:53:37.507522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.853 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:38.854 11:53:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:40.270 00:11:40.270 real 0m1.676s 00:11:40.270 user 0m1.414s 00:11:40.270 sys 0m0.198s 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:40.270 11:53:38 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:40.270 ************************************ 00:11:40.270 END TEST accel_decomp_mthread 00:11:40.270 ************************************ 00:11:40.270 11:53:38 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:40.270 11:53:38 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:11:40.270 11:53:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:40.270 11:53:38 accel -- common/autotest_common.sh@10 -- # set +x 00:11:40.270 ************************************ 00:11:40.270 START TEST accel_decomp_full_mthread 00:11:40.270 ************************************ 00:11:40.270 11:53:38 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:11:40.270 11:53:38 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:11:40.270 [2024-07-21 11:53:38.954648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
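A note on accel_decomp_full_mthread, started above: it combines the two previous variations, full-buffer decompression ('-o 0', recorded as '111250 bytes' in the trace below) and two worker threads ('-T 2'), still on a single core. It is the last of the decompress flavours in this run before the harness moves on to the accel DIF functional tests.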
00:11:40.270 [2024-07-21 11:53:38.954871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127539 ] 00:11:40.270 [2024-07-21 11:53:39.120653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.529 [2024-07-21 11:53:39.257390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:40.529 11:53:39 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.901 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:41.902 00:11:41.902 real 0m1.716s 00:11:41.902 user 0m1.441s 00:11:41.902 sys 0m0.199s 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:41.902 11:53:40 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:41.902 ************************************ 00:11:41.902 END TEST accel_decomp_full_mthread 00:11:41.902 ************************************ 00:11:41.902 11:53:40 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:11:41.902 11:53:40 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:41.902 11:53:40 accel -- accel/accel.sh@137 -- # build_accel_config 00:11:41.902 11:53:40 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:41.902 11:53:40 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:41.902 11:53:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:41.902 11:53:40 accel -- common/autotest_common.sh@10 -- # set +x 00:11:41.902 11:53:40 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:11:41.902 11:53:40 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:41.902 11:53:40 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:41.902 11:53:40 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:41.902 11:53:40 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:41.902 11:53:40 accel -- accel/accel.sh@41 -- # jq -r . 00:11:41.902 ************************************ 00:11:41.902 START TEST accel_dif_functional_tests 00:11:41.902 ************************************ 00:11:41.902 11:53:40 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:41.902 [2024-07-21 11:53:40.752366] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:41.902 [2024-07-21 11:53:40.752739] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127577 ] 00:11:42.160 [2024-07-21 11:53:40.921467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.418 [2024-07-21 11:53:41.029910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.418 [2024-07-21 11:53:41.030044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.418 [2024-07-21 11:53:41.030050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.418 00:11:42.418 00:11:42.418 CUnit - A unit testing framework for C - Version 2.1-3 00:11:42.418 http://cunit.sourceforge.net/ 00:11:42.418 00:11:42.418 00:11:42.418 Suite: accel_dif 00:11:42.418 Test: verify: DIF generated, GUARD check ...passed 00:11:42.418 Test: verify: DIF generated, APPTAG check ...passed 00:11:42.418 Test: verify: DIF generated, REFTAG check ...passed 00:11:42.418 Test: verify: DIF not generated, GUARD check ...[2024-07-21 11:53:41.163796] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:42.418 passed 00:11:42.418 Test: verify: DIF not generated, APPTAG check ...[2024-07-21 11:53:41.164619] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:42.418 passed 00:11:42.418 Test: verify: DIF not generated, REFTAG check ...[2024-07-21 11:53:41.165150] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:42.418 passed 00:11:42.418 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:42.418 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-21 11:53:41.165897] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:42.418 passed 00:11:42.418 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:42.418 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:42.419 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:11:42.419 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-21 11:53:41.167172] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:42.419 passed 00:11:42.419 Test: verify copy: DIF generated, GUARD check ...passed 00:11:42.419 Test: verify copy: DIF generated, APPTAG check ...passed 00:11:42.419 Test: verify copy: DIF generated, REFTAG check ...passed 00:11:42.419 Test: verify copy: DIF not generated, GUARD check ...[2024-07-21 11:53:41.168568] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:42.419 passed 00:11:42.419 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-21 11:53:41.169172] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:42.419 passed 00:11:42.419 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-21 11:53:41.169708] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:42.419 passed 00:11:42.419 Test: generate copy: DIF generated, GUARD check ...passed 00:11:42.419 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:42.419 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:42.419 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:42.419 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:42.419 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:42.419 Test: generate copy: iovecs-len validate ...[2024-07-21 11:53:41.171464] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:11:42.419 passed 00:11:42.419 Test: generate copy: buffer alignment validate ...passed 00:11:42.419 00:11:42.419 Run Summary: Type Total Ran Passed Failed Inactive 00:11:42.419 suites 1 1 n/a 0 0 00:11:42.419 tests 26 26 26 0 0 00:11:42.419 asserts 115 115 115 0 n/a 00:11:42.419 00:11:42.419 Elapsed time = 0.018 seconds 00:11:42.677 00:11:42.677 real 0m0.819s 00:11:42.677 user 0m1.121s 00:11:42.677 sys 0m0.287s 00:11:42.677 11:53:41 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:42.677 11:53:41 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:11:42.677 ************************************ 00:11:42.677 END TEST accel_dif_functional_tests 00:11:42.677 ************************************ 00:11:42.934 ************************************ 00:11:42.934 END TEST accel 00:11:42.934 ************************************ 00:11:42.934 00:11:42.934 real 0m38.703s 00:11:42.934 user 0m39.940s 00:11:42.934 sys 0m5.988s 00:11:42.934 11:53:41 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:42.934 11:53:41 accel -- common/autotest_common.sh@10 -- # set +x 00:11:42.934 11:53:41 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:42.934 11:53:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:42.934 11:53:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:42.934 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:11:42.934 ************************************ 00:11:42.934 START TEST accel_rpc 00:11:42.934 ************************************ 00:11:42.935 11:53:41 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:42.935 * Looking for test 
storage... 00:11:42.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:42.935 11:53:41 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:42.935 11:53:41 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=127657 00:11:42.935 11:53:41 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:42.935 11:53:41 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 127657 00:11:42.935 11:53:41 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 127657 ']' 00:11:42.935 11:53:41 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.935 11:53:41 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:42.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.935 11:53:41 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.935 11:53:41 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:42.935 11:53:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.935 [2024-07-21 11:53:41.733111] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:42.935 [2024-07-21 11:53:41.733304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127657 ] 00:11:43.193 [2024-07-21 11:53:41.882532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.193 [2024-07-21 11:53:41.983922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.128 11:53:42 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:44.128 11:53:42 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:44.128 11:53:42 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:44.128 11:53:42 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:44.128 11:53:42 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:44.128 11:53:42 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:44.128 11:53:42 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:44.128 11:53:42 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:44.128 11:53:42 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:44.128 11:53:42 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.128 ************************************ 00:11:44.128 START TEST accel_assign_opcode 00:11:44.128 ************************************ 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:44.128 [2024-07-21 11:53:42.745027] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:44.128 [2024-07-21 11:53:42.752976] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.128 11:53:42 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:44.386 11:53:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.386 11:53:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:44.386 11:53:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:44.386 11:53:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:44.386 11:53:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:44.386 11:53:43 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:11:44.386 11:53:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:44.386 software 00:11:44.386 00:11:44.386 real 0m0.394s 00:11:44.387 user 0m0.047s 00:11:44.387 sys 0m0.011s 00:11:44.387 11:53:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:44.387 ************************************ 00:11:44.387 END TEST accel_assign_opcode 00:11:44.387 ************************************ 00:11:44.387 11:53:43 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:44.387 11:53:43 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 127657 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 127657 ']' 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 127657 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 127657 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:44.387 killing process with pid 127657 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 127657' 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@965 -- # kill 127657 00:11:44.387 11:53:43 accel_rpc -- common/autotest_common.sh@970 -- # wait 127657 00:11:45.322 ************************************ 00:11:45.322 END TEST accel_rpc 00:11:45.322 ************************************ 00:11:45.322 00:11:45.322 real 0m2.235s 00:11:45.322 user 0m2.267s 00:11:45.322 sys 0m0.534s 00:11:45.322 11:53:43 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:45.322 11:53:43 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.322 11:53:43 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:45.322 
11:53:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:45.322 11:53:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:45.322 11:53:43 -- common/autotest_common.sh@10 -- # set +x 00:11:45.322 ************************************ 00:11:45.322 START TEST app_cmdline 00:11:45.322 ************************************ 00:11:45.322 11:53:43 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:45.322 * Looking for test storage... 00:11:45.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:45.322 11:53:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:45.322 11:53:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=127778 00:11:45.322 11:53:43 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:45.322 11:53:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 127778 00:11:45.322 11:53:43 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 127778 ']' 00:11:45.322 11:53:43 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.322 11:53:43 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:45.322 11:53:43 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.322 11:53:43 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:45.322 11:53:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:45.322 [2024-07-21 11:53:44.027888] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
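The accel_rpc suite that finished just above exercises opcode assignment entirely over JSON-RPC: it pins the copy opcode to the software module before framework init and then confirms the assignment. A rough by-hand reproduction, assuming a locally built spdk_tgt and the repo layout shown in this log, might look like:
    ./build/bin/spdk_tgt --wait-for-rpc &                     # start the target with subsystem init deferred
    ./scripts/rpc.py accel_assign_opc -o copy -m software     # assign the copy opcode to the software module
    ./scripts/rpc.py framework_start_init                     # finish initialization so the assignment takes effect
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # expected output: software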
00:11:45.322 [2024-07-21 11:53:44.028114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127778 ] 00:11:45.322 [2024-07-21 11:53:44.182990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.580 [2024-07-21 11:53:44.290074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:46.514 { 00:11:46.514 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:11:46.514 "fields": { 00:11:46.514 "major": 24, 00:11:46.514 "minor": 5, 00:11:46.514 "patch": 1, 00:11:46.514 "suffix": "-pre", 00:11:46.514 "commit": "5fa2f5086" 00:11:46.514 } 00:11:46.514 } 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:46.514 11:53:45 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:46.514 11:53:45 app_cmdline -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:46.772 request: 00:11:46.772 { 00:11:46.772 "method": "env_dpdk_get_mem_stats", 00:11:46.772 "req_id": 1 00:11:46.772 } 00:11:46.772 Got JSON-RPC error response 00:11:46.772 response: 00:11:46.772 { 00:11:46.772 "code": -32601, 00:11:46.772 "message": "Method not found" 00:11:46.772 } 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:46.772 11:53:45 app_cmdline -- app/cmdline.sh@1 -- # killprocess 127778 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 127778 ']' 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 127778 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 127778 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:46.772 killing process with pid 127778 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 127778' 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@965 -- # kill 127778 00:11:46.772 11:53:45 app_cmdline -- common/autotest_common.sh@970 -- # wait 127778 00:11:47.704 ************************************ 00:11:47.704 END TEST app_cmdline 00:11:47.704 ************************************ 00:11:47.704 00:11:47.704 real 0m2.322s 00:11:47.704 user 0m2.725s 00:11:47.704 sys 0m0.599s 00:11:47.704 11:53:46 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:47.704 11:53:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:47.704 11:53:46 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:47.704 11:53:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:47.704 11:53:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:47.704 11:53:46 -- common/autotest_common.sh@10 -- # set +x 00:11:47.704 ************************************ 00:11:47.704 START TEST version 00:11:47.704 ************************************ 00:11:47.704 11:53:46 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:47.704 * Looking for test storage... 
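The app_cmdline output above shows the effect of starting the target with --rpcs-allowed: the allowed methods answer normally, while anything outside the list is rejected with JSON-RPC error -32601 ("Method not found"). A minimal manual equivalent, assuming the same build paths as in this log:
    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py spdk_get_version        # returns the version JSON shown above
    ./scripts/rpc.py rpc_get_methods         # lists only the two allowed methods
    ./scripts/rpc.py env_dpdk_get_mem_stats  # expected to fail with "Method not found" (-32601)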
00:11:47.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:47.704 11:53:46 version -- app/version.sh@17 -- # get_header_version major 00:11:47.704 11:53:46 version -- app/version.sh@14 -- # cut -f2 00:11:47.704 11:53:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:47.704 11:53:46 version -- app/version.sh@14 -- # tr -d '"' 00:11:47.705 11:53:46 version -- app/version.sh@17 -- # major=24 00:11:47.705 11:53:46 version -- app/version.sh@18 -- # get_header_version minor 00:11:47.705 11:53:46 version -- app/version.sh@14 -- # cut -f2 00:11:47.705 11:53:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:47.705 11:53:46 version -- app/version.sh@14 -- # tr -d '"' 00:11:47.705 11:53:46 version -- app/version.sh@18 -- # minor=5 00:11:47.705 11:53:46 version -- app/version.sh@19 -- # get_header_version patch 00:11:47.705 11:53:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:47.705 11:53:46 version -- app/version.sh@14 -- # cut -f2 00:11:47.705 11:53:46 version -- app/version.sh@14 -- # tr -d '"' 00:11:47.705 11:53:46 version -- app/version.sh@19 -- # patch=1 00:11:47.705 11:53:46 version -- app/version.sh@20 -- # get_header_version suffix 00:11:47.705 11:53:46 version -- app/version.sh@14 -- # cut -f2 00:11:47.705 11:53:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:47.705 11:53:46 version -- app/version.sh@14 -- # tr -d '"' 00:11:47.705 11:53:46 version -- app/version.sh@20 -- # suffix=-pre 00:11:47.705 11:53:46 version -- app/version.sh@22 -- # version=24.5 00:11:47.705 11:53:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:47.705 11:53:46 version -- app/version.sh@25 -- # version=24.5.1 00:11:47.705 11:53:46 version -- app/version.sh@28 -- # version=24.5.1rc0 00:11:47.705 11:53:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:47.705 11:53:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:47.705 11:53:46 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:11:47.705 11:53:46 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:11:47.705 ************************************ 00:11:47.705 END TEST version 00:11:47.705 ************************************ 00:11:47.705 00:11:47.705 real 0m0.152s 00:11:47.705 user 0m0.111s 00:11:47.705 sys 0m0.076s 00:11:47.705 11:53:46 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:47.705 11:53:46 version -- common/autotest_common.sh@10 -- # set +x 00:11:47.705 11:53:46 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:11:47.705 11:53:46 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:47.705 11:53:46 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:47.705 11:53:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:47.705 11:53:46 -- common/autotest_common.sh@10 -- # set +x 00:11:47.705 ************************************ 00:11:47.705 START TEST blockdev_general 00:11:47.705 
************************************ 00:11:47.705 11:53:46 blockdev_general -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:47.705 * Looking for test storage... 00:11:47.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:47.705 11:53:46 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:11:47.705 11:53:46 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=127933 00:11:47.963 11:53:46 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:47.963 11:53:46 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:47.963 11:53:46 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 127933 00:11:47.963 11:53:46 blockdev_general -- common/autotest_common.sh@827 -- # '[' -z 127933 ']' 00:11:47.963 11:53:46 blockdev_general -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.963 11:53:46 blockdev_general -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:47.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.963 11:53:46 blockdev_general -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
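As the setup further below shows, blockdev_general seeds its AIO target by writing a 10 MB backing file with dd and registering it over RPC. Taken in isolation, and assuming the same repo paths as in this log, that step amounts to roughly:
    dd if=/dev/zero of=./test/bdev/aiofile bs=2048 count=5000       # create the 10 MB backing file
    ./scripts/rpc.py bdev_aio_create ./test/bdev/aiofile AIO0 2048  # expose it as bdev AIO0 with 2048-byte blocks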
00:11:47.963 11:53:46 blockdev_general -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:47.963 11:53:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:47.963 [2024-07-21 11:53:46.639032] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:11:47.963 [2024-07-21 11:53:46.639769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127933 ] 00:11:47.963 [2024-07-21 11:53:46.803250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.221 [2024-07-21 11:53:46.877913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.787 11:53:47 blockdev_general -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:48.787 11:53:47 blockdev_general -- common/autotest_common.sh@860 -- # return 0 00:11:48.787 11:53:47 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:11:48.787 11:53:47 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:11:48.787 11:53:47 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:11:48.787 11:53:47 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.787 11:53:47 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:49.353 [2024-07-21 11:53:47.944765] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:49.353 [2024-07-21 11:53:47.944877] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:49.353 00:11:49.353 [2024-07-21 11:53:47.952676] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:49.353 [2024-07-21 11:53:47.952749] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:49.353 00:11:49.353 Malloc0 00:11:49.353 Malloc1 00:11:49.353 Malloc2 00:11:49.353 Malloc3 00:11:49.353 Malloc4 00:11:49.353 Malloc5 00:11:49.353 Malloc6 00:11:49.353 Malloc7 00:11:49.353 Malloc8 00:11:49.353 Malloc9 00:11:49.353 [2024-07-21 11:53:48.169463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:49.353 [2024-07-21 11:53:48.169575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.353 [2024-07-21 11:53:48.169631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:11:49.353 [2024-07-21 11:53:48.169655] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.353 [2024-07-21 11:53:48.172290] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.353 [2024-07-21 11:53:48.172381] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:49.353 TestPT 00:11:49.353 11:53:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.353 11:53:48 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:49.612 5000+0 records in 00:11:49.612 5000+0 records out 00:11:49.612 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0216819 s, 472 MB/s 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.612 11:53:48 blockdev_general -- 
common/autotest_common.sh@10 -- # set +x 00:11:49.612 AIO0 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:49.612 11:53:48 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:11:49.612 11:53:48 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:11:49.613 11:53:48 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "57604208-ec50-4e95-b027-42b1e5b8cc95"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "57604208-ec50-4e95-b027-42b1e5b8cc95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5fbbd973-6c68-58ff-bcb0-f4d74b17dc1a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"5fbbd973-6c68-58ff-bcb0-f4d74b17dc1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "ffee020e-c9bd-56ab-be9f-32685e3c5d8c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ffee020e-c9bd-56ab-be9f-32685e3c5d8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "66d13f1c-b62e-5885-b0f4-473c6a5097fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "66d13f1c-b62e-5885-b0f4-473c6a5097fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "3d4cff75-ba0d-5e76-903c-3fd4a8b623b8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3d4cff75-ba0d-5e76-903c-3fd4a8b623b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "133ff6dd-0f0c-5f3d-8fa1-b3058481f83e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "133ff6dd-0f0c-5f3d-8fa1-b3058481f83e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' 
"split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ba4a44f0-0599-54b6-94c3-22758c2839de"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ba4a44f0-0599-54b6-94c3-22758c2839de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "b4845a15-2e4e-5dd3-a38f-971d43bc682f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b4845a15-2e4e-5dd3-a38f-971d43bc682f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "5722021f-d286-51a3-9df5-f79953a7e262"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5722021f-d286-51a3-9df5-f79953a7e262",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "324b904d-99c1-589f-b514-090b3a2513ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "324b904d-99c1-589f-b514-090b3a2513ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d67b4f9d-407f-5b45-b751-1205a5eee490"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d67b4f9d-407f-5b45-b751-1205a5eee490",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "c6ec2d5c-d72d-5a33-818e-edeac579c00a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c6ec2d5c-d72d-5a33-818e-edeac579c00a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "02944355-d0ad-474a-a774-e3e95ea43665"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "02944355-d0ad-474a-a774-e3e95ea43665",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "02944355-d0ad-474a-a774-e3e95ea43665",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c60f576d-08b3-40b4-86b7-cd812316eaf3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "83ef698e-5720-49e1-8a2c-5ec45a539984",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "bcefb08f-2d86-4673-aef5-b7e9417ece8f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bcefb08f-2d86-4673-aef5-b7e9417ece8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": 
false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bcefb08f-2d86-4673-aef5-b7e9417ece8f",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "98d18dd4-f5bd-4962-baf9-71e444eebf6f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "72ecb791-bcaf-449a-871c-aa20bf252e5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "8b153573-42de-49bf-a29d-a8506eab0c67"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8b153573-42de-49bf-a29d-a8506eab0c67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8b153573-42de-49bf-a29d-a8506eab0c67",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b518bc90-1777-4e30-b5ea-4c07f99a66cb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "77a8a482-2f14-4907-b944-1c0dbd13f1bc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "184a5368-019c-4651-9d54-30b5bd99faf2"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "184a5368-019c-4651-9d54-30b5bd99faf2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:49.872 11:53:48 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:11:49.872 11:53:48 blockdev_general -- 
bdev/blockdev.sh@752 -- # hello_world_bdev=Malloc0 00:11:49.872 11:53:48 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:11:49.872 11:53:48 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 127933 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@946 -- # '[' -z 127933 ']' 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@950 -- # kill -0 127933 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@951 -- # uname 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 127933 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@964 -- # echo 'killing process with pid 127933' 00:11:49.872 killing process with pid 127933 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@965 -- # kill 127933 00:11:49.872 11:53:48 blockdev_general -- common/autotest_common.sh@970 -- # wait 127933 00:11:50.808 11:53:49 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:50.808 11:53:49 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:50.808 11:53:49 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:11:50.808 11:53:49 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:50.808 11:53:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:50.808 ************************************ 00:11:50.808 START TEST bdev_hello_world 00:11:50.808 ************************************ 00:11:50.808 11:53:49 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:50.808 [2024-07-21 11:53:49.415104] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:11:50.808 [2024-07-21 11:53:49.416005] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127999 ] 00:11:50.808 [2024-07-21 11:53:49.581757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.808 [2024-07-21 11:53:49.656831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.065 [2024-07-21 11:53:49.841566] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.065 [2024-07-21 11:53:49.841716] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:51.065 [2024-07-21 11:53:49.849465] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.065 [2024-07-21 11:53:49.849515] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:51.065 [2024-07-21 11:53:49.857520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.065 [2024-07-21 11:53:49.857589] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:51.066 [2024-07-21 11:53:49.857634] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:51.323 [2024-07-21 11:53:49.974702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:51.323 [2024-07-21 11:53:49.974844] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.323 [2024-07-21 11:53:49.974886] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:51.323 [2024-07-21 11:53:49.974927] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.323 [2024-07-21 11:53:49.977637] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.323 [2024-07-21 11:53:49.977699] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:51.323 [2024-07-21 11:53:50.163435] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:51.323 [2024-07-21 11:53:50.163555] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:51.323 [2024-07-21 11:53:50.163686] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:51.323 [2024-07-21 11:53:50.163788] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:51.323 [2024-07-21 11:53:50.163958] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:51.323 [2024-07-21 11:53:50.164042] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:51.323 [2024-07-21 11:53:50.164174] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
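The hello_world run above is the stock SPDK example driven against the generated bdev configuration; assuming the same tree, it can be launched directly as:
    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Malloc0  # open Malloc0, write "Hello World!", read it back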
00:11:51.323 00:11:51.323 [2024-07-21 11:53:50.164247] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:51.891 00:11:51.891 real 0m1.303s 00:11:51.891 user 0m0.739s 00:11:51.891 sys 0m0.412s 00:11:51.891 11:53:50 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:51.891 11:53:50 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:11:51.891 ************************************ 00:11:51.891 END TEST bdev_hello_world 00:11:51.891 ************************************ 00:11:51.891 11:53:50 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:11:51.891 11:53:50 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:51.891 11:53:50 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:51.891 11:53:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:51.891 ************************************ 00:11:51.891 START TEST bdev_bounds 00:11:51.891 ************************************ 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=128037 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 128037' 00:11:51.891 Process bdevio pid: 128037 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 128037 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 128037 ']' 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:51.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:51.891 11:53:50 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:52.149 [2024-07-21 11:53:50.767405] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:11:52.149 [2024-07-21 11:53:50.767971] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128037 ] 00:11:52.149 [2024-07-21 11:53:50.946933] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:52.407 [2024-07-21 11:53:51.028957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.407 [2024-07-21 11:53:51.029121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.407 [2024-07-21 11:53:51.029127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.407 [2024-07-21 11:53:51.218897] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:52.407 [2024-07-21 11:53:51.219061] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:52.407 [2024-07-21 11:53:51.226800] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:52.407 [2024-07-21 11:53:51.226885] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:52.407 [2024-07-21 11:53:51.234899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:52.407 [2024-07-21 11:53:51.235021] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:52.407 [2024-07-21 11:53:51.235065] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:52.679 [2024-07-21 11:53:51.348328] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:52.679 [2024-07-21 11:53:51.348487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.679 [2024-07-21 11:53:51.348566] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:52.679 [2024-07-21 11:53:51.348601] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.679 [2024-07-21 11:53:51.351309] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.679 [2024-07-21 11:53:51.351398] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:52.977 11:53:51 blockdev_general.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:52.977 11:53:51 blockdev_general.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:11:52.977 11:53:51 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:53.236 I/O targets: 00:11:53.236 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:53.236 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:53.236 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:53.236 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:53.236 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:53.236 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:53.236 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:53.236 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:53.236 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:53.236 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:53.236 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:53.236 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:53.236 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:53.236 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:53.236 raid1: 65536 
blocks of 512 bytes (32 MiB) 00:11:53.236 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:11:53.236 00:11:53.236 00:11:53.236 CUnit - A unit testing framework for C - Version 2.1-3 00:11:53.236 http://cunit.sourceforge.net/ 00:11:53.236 00:11:53.236 00:11:53.236 Suite: bdevio tests on: AIO0 00:11:53.236 Test: blockdev write read block ...passed 00:11:53.236 Test: blockdev write zeroes read block ...passed 00:11:53.236 Test: blockdev write zeroes read no split ...passed 00:11:53.236 Test: blockdev write zeroes read split ...passed 00:11:53.236 Test: blockdev write zeroes read split partial ...passed 00:11:53.236 Test: blockdev reset ...passed 00:11:53.236 Test: blockdev write read 8 blocks ...passed 00:11:53.236 Test: blockdev write read size > 128k ...passed 00:11:53.236 Test: blockdev write read invalid size ...passed 00:11:53.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.236 Test: blockdev write read max offset ...passed 00:11:53.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.236 Test: blockdev writev readv 8 blocks ...passed 00:11:53.236 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.236 Test: blockdev writev readv block ...passed 00:11:53.236 Test: blockdev writev readv size > 128k ...passed 00:11:53.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.236 Test: blockdev comparev and writev ...passed 00:11:53.236 Test: blockdev nvme passthru rw ...passed 00:11:53.236 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.236 Test: blockdev nvme admin passthru ...passed 00:11:53.236 Test: blockdev copy ...passed 00:11:53.236 Suite: bdevio tests on: raid1 00:11:53.236 Test: blockdev write read block ...passed 00:11:53.236 Test: blockdev write zeroes read block ...passed 00:11:53.236 Test: blockdev write zeroes read no split ...passed 00:11:53.236 Test: blockdev write zeroes read split ...passed 00:11:53.236 Test: blockdev write zeroes read split partial ...passed 00:11:53.236 Test: blockdev reset ...passed 00:11:53.236 Test: blockdev write read 8 blocks ...passed 00:11:53.236 Test: blockdev write read size > 128k ...passed 00:11:53.236 Test: blockdev write read invalid size ...passed 00:11:53.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.236 Test: blockdev write read max offset ...passed 00:11:53.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.236 Test: blockdev writev readv 8 blocks ...passed 00:11:53.236 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.236 Test: blockdev writev readv block ...passed 00:11:53.236 Test: blockdev writev readv size > 128k ...passed 00:11:53.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.236 Test: blockdev comparev and writev ...passed 00:11:53.236 Test: blockdev nvme passthru rw ...passed 00:11:53.236 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.236 Test: blockdev nvme admin passthru ...passed 00:11:53.236 Test: blockdev copy ...passed 00:11:53.236 Suite: bdevio tests on: concat0 00:11:53.236 Test: blockdev write read block ...passed 00:11:53.236 Test: blockdev write zeroes read block ...passed 00:11:53.236 Test: blockdev write zeroes read no split ...passed 00:11:53.236 Test: blockdev write zeroes read split ...passed 00:11:53.236 Test: 
blockdev write zeroes read split partial ...passed 00:11:53.236 Test: blockdev reset ...passed 00:11:53.236 Test: blockdev write read 8 blocks ...passed 00:11:53.236 Test: blockdev write read size > 128k ...passed 00:11:53.236 Test: blockdev write read invalid size ...passed 00:11:53.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.236 Test: blockdev write read max offset ...passed 00:11:53.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.236 Test: blockdev writev readv 8 blocks ...passed 00:11:53.236 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.236 Test: blockdev writev readv block ...passed 00:11:53.236 Test: blockdev writev readv size > 128k ...passed 00:11:53.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.236 Test: blockdev comparev and writev ...passed 00:11:53.236 Test: blockdev nvme passthru rw ...passed 00:11:53.236 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.236 Test: blockdev nvme admin passthru ...passed 00:11:53.236 Test: blockdev copy ...passed 00:11:53.236 Suite: bdevio tests on: raid0 00:11:53.236 Test: blockdev write read block ...passed 00:11:53.236 Test: blockdev write zeroes read block ...passed 00:11:53.236 Test: blockdev write zeroes read no split ...passed 00:11:53.236 Test: blockdev write zeroes read split ...passed 00:11:53.236 Test: blockdev write zeroes read split partial ...passed 00:11:53.236 Test: blockdev reset ...passed 00:11:53.236 Test: blockdev write read 8 blocks ...passed 00:11:53.236 Test: blockdev write read size > 128k ...passed 00:11:53.236 Test: blockdev write read invalid size ...passed 00:11:53.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.236 Test: blockdev write read max offset ...passed 00:11:53.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.236 Test: blockdev writev readv 8 blocks ...passed 00:11:53.236 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.236 Test: blockdev writev readv block ...passed 00:11:53.236 Test: blockdev writev readv size > 128k ...passed 00:11:53.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.236 Test: blockdev comparev and writev ...passed 00:11:53.236 Test: blockdev nvme passthru rw ...passed 00:11:53.236 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.236 Test: blockdev nvme admin passthru ...passed 00:11:53.236 Test: blockdev copy ...passed 00:11:53.237 Suite: bdevio tests on: TestPT 00:11:53.237 Test: blockdev write read block ...passed 00:11:53.237 Test: blockdev write zeroes read block ...passed 00:11:53.237 Test: blockdev write zeroes read no split ...passed 00:11:53.237 Test: blockdev write zeroes read split ...passed 00:11:53.237 Test: blockdev write zeroes read split partial ...passed 00:11:53.237 Test: blockdev reset ...passed 00:11:53.237 Test: blockdev write read 8 blocks ...passed 00:11:53.237 Test: blockdev write read size > 128k ...passed 00:11:53.237 Test: blockdev write read invalid size ...passed 00:11:53.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.237 Test: blockdev write read max offset ...passed 00:11:53.237 Test: blockdev write read 2 blocks on 
overlapped address offset ...passed 00:11:53.237 Test: blockdev writev readv 8 blocks ...passed 00:11:53.237 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.237 Test: blockdev writev readv block ...passed 00:11:53.237 Test: blockdev writev readv size > 128k ...passed 00:11:53.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.237 Test: blockdev comparev and writev ...passed 00:11:53.237 Test: blockdev nvme passthru rw ...passed 00:11:53.237 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.237 Test: blockdev nvme admin passthru ...passed 00:11:53.237 Test: blockdev copy ...passed 00:11:53.237 Suite: bdevio tests on: Malloc2p7 00:11:53.237 Test: blockdev write read block ...passed 00:11:53.237 Test: blockdev write zeroes read block ...passed 00:11:53.237 Test: blockdev write zeroes read no split ...passed 00:11:53.237 Test: blockdev write zeroes read split ...passed 00:11:53.237 Test: blockdev write zeroes read split partial ...passed 00:11:53.237 Test: blockdev reset ...passed 00:11:53.237 Test: blockdev write read 8 blocks ...passed 00:11:53.237 Test: blockdev write read size > 128k ...passed 00:11:53.237 Test: blockdev write read invalid size ...passed 00:11:53.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.237 Test: blockdev write read max offset ...passed 00:11:53.237 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.237 Test: blockdev writev readv 8 blocks ...passed 00:11:53.237 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.237 Test: blockdev writev readv block ...passed 00:11:53.237 Test: blockdev writev readv size > 128k ...passed 00:11:53.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.237 Test: blockdev comparev and writev ...passed 00:11:53.237 Test: blockdev nvme passthru rw ...passed 00:11:53.237 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.237 Test: blockdev nvme admin passthru ...passed 00:11:53.237 Test: blockdev copy ...passed 00:11:53.237 Suite: bdevio tests on: Malloc2p6 00:11:53.237 Test: blockdev write read block ...passed 00:11:53.237 Test: blockdev write zeroes read block ...passed 00:11:53.237 Test: blockdev write zeroes read no split ...passed 00:11:53.237 Test: blockdev write zeroes read split ...passed 00:11:53.237 Test: blockdev write zeroes read split partial ...passed 00:11:53.237 Test: blockdev reset ...passed 00:11:53.237 Test: blockdev write read 8 blocks ...passed 00:11:53.237 Test: blockdev write read size > 128k ...passed 00:11:53.237 Test: blockdev write read invalid size ...passed 00:11:53.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.237 Test: blockdev write read max offset ...passed 00:11:53.237 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.237 Test: blockdev writev readv 8 blocks ...passed 00:11:53.237 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.237 Test: blockdev writev readv block ...passed 00:11:53.237 Test: blockdev writev readv size > 128k ...passed 00:11:53.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.237 Test: blockdev comparev and writev ...passed 00:11:53.237 Test: blockdev nvme passthru rw ...passed 00:11:53.237 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.237 
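The per-bdev suites above and below all run the same 23-test CUnit list, so the only thing that changes from suite to suite is the I/O target (the run summary at the end counts 16 suites x 23 = 368 tests). The trace higher up shows how the bounds test drives this in two steps: bdevio is launched with -w, which as I read it makes it sit idle until triggered over RPC, and tests.py then issues the perform_tests call against the default application socket. A rough sketch of that flow under the same paths:

  cd /home/vagrant/spdk_repo/spdk
  # start bdevio against the test config; -w = wait for an RPC trigger before running (my reading of the flag)
  ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
  # once /var/tmp/spdk.sock is listening, kick off all suites
  ./test/bdev/bdevio/tests.py perform_tests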
Test: blockdev nvme admin passthru ...passed 00:11:53.237 Test: blockdev copy ...passed 00:11:53.237 Suite: bdevio tests on: Malloc2p5 00:11:53.237 Test: blockdev write read block ...passed 00:11:53.237 Test: blockdev write zeroes read block ...passed 00:11:53.237 Test: blockdev write zeroes read no split ...passed 00:11:53.237 Test: blockdev write zeroes read split ...passed 00:11:53.237 Test: blockdev write zeroes read split partial ...passed 00:11:53.237 Test: blockdev reset ...passed 00:11:53.237 Test: blockdev write read 8 blocks ...passed 00:11:53.237 Test: blockdev write read size > 128k ...passed 00:11:53.237 Test: blockdev write read invalid size ...passed 00:11:53.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.237 Test: blockdev write read max offset ...passed 00:11:53.237 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.237 Test: blockdev writev readv 8 blocks ...passed 00:11:53.237 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.237 Test: blockdev writev readv block ...passed 00:11:53.237 Test: blockdev writev readv size > 128k ...passed 00:11:53.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.237 Test: blockdev comparev and writev ...passed 00:11:53.237 Test: blockdev nvme passthru rw ...passed 00:11:53.237 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.237 Test: blockdev nvme admin passthru ...passed 00:11:53.237 Test: blockdev copy ...passed 00:11:53.237 Suite: bdevio tests on: Malloc2p4 00:11:53.237 Test: blockdev write read block ...passed 00:11:53.237 Test: blockdev write zeroes read block ...passed 00:11:53.237 Test: blockdev write zeroes read no split ...passed 00:11:53.237 Test: blockdev write zeroes read split ...passed 00:11:53.237 Test: blockdev write zeroes read split partial ...passed 00:11:53.237 Test: blockdev reset ...passed 00:11:53.237 Test: blockdev write read 8 blocks ...passed 00:11:53.237 Test: blockdev write read size > 128k ...passed 00:11:53.237 Test: blockdev write read invalid size ...passed 00:11:53.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.237 Test: blockdev write read max offset ...passed 00:11:53.237 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.237 Test: blockdev writev readv 8 blocks ...passed 00:11:53.237 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.237 Test: blockdev writev readv block ...passed 00:11:53.237 Test: blockdev writev readv size > 128k ...passed 00:11:53.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.237 Test: blockdev comparev and writev ...passed 00:11:53.237 Test: blockdev nvme passthru rw ...passed 00:11:53.237 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.237 Test: blockdev nvme admin passthru ...passed 00:11:53.237 Test: blockdev copy ...passed 00:11:53.237 Suite: bdevio tests on: Malloc2p3 00:11:53.237 Test: blockdev write read block ...passed 00:11:53.237 Test: blockdev write zeroes read block ...passed 00:11:53.237 Test: blockdev write zeroes read no split ...passed 00:11:53.237 Test: blockdev write zeroes read split ...passed 00:11:53.237 Test: blockdev write zeroes read split partial ...passed 00:11:53.237 Test: blockdev reset ...passed 00:11:53.237 Test: blockdev write read 8 blocks ...passed 
00:11:53.237 Test: blockdev write read size > 128k ...passed 00:11:53.237 Test: blockdev write read invalid size ...passed 00:11:53.237 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.237 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.237 Test: blockdev write read max offset ...passed 00:11:53.237 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.237 Test: blockdev writev readv 8 blocks ...passed 00:11:53.237 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.237 Test: blockdev writev readv block ...passed 00:11:53.237 Test: blockdev writev readv size > 128k ...passed 00:11:53.237 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.237 Test: blockdev comparev and writev ...passed 00:11:53.237 Test: blockdev nvme passthru rw ...passed 00:11:53.237 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.237 Test: blockdev nvme admin passthru ...passed 00:11:53.237 Test: blockdev copy ...passed 00:11:53.237 Suite: bdevio tests on: Malloc2p2 00:11:53.237 Test: blockdev write read block ...passed 00:11:53.237 Test: blockdev write zeroes read block ...passed 00:11:53.237 Test: blockdev write zeroes read no split ...passed 00:11:53.496 Test: blockdev write zeroes read split ...passed 00:11:53.496 Test: blockdev write zeroes read split partial ...passed 00:11:53.496 Test: blockdev reset ...passed 00:11:53.496 Test: blockdev write read 8 blocks ...passed 00:11:53.496 Test: blockdev write read size > 128k ...passed 00:11:53.496 Test: blockdev write read invalid size ...passed 00:11:53.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.496 Test: blockdev write read max offset ...passed 00:11:53.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.496 Test: blockdev writev readv 8 blocks ...passed 00:11:53.496 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.496 Test: blockdev writev readv block ...passed 00:11:53.496 Test: blockdev writev readv size > 128k ...passed 00:11:53.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.496 Test: blockdev comparev and writev ...passed 00:11:53.496 Test: blockdev nvme passthru rw ...passed 00:11:53.496 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.496 Test: blockdev nvme admin passthru ...passed 00:11:53.496 Test: blockdev copy ...passed 00:11:53.496 Suite: bdevio tests on: Malloc2p1 00:11:53.496 Test: blockdev write read block ...passed 00:11:53.496 Test: blockdev write zeroes read block ...passed 00:11:53.496 Test: blockdev write zeroes read no split ...passed 00:11:53.496 Test: blockdev write zeroes read split ...passed 00:11:53.496 Test: blockdev write zeroes read split partial ...passed 00:11:53.496 Test: blockdev reset ...passed 00:11:53.496 Test: blockdev write read 8 blocks ...passed 00:11:53.496 Test: blockdev write read size > 128k ...passed 00:11:53.496 Test: blockdev write read invalid size ...passed 00:11:53.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.496 Test: blockdev write read max offset ...passed 00:11:53.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.496 Test: blockdev writev readv 8 blocks ...passed 00:11:53.496 Test: blockdev writev readv 30 x 
1block ...passed 00:11:53.496 Test: blockdev writev readv block ...passed 00:11:53.496 Test: blockdev writev readv size > 128k ...passed 00:11:53.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.496 Test: blockdev comparev and writev ...passed 00:11:53.496 Test: blockdev nvme passthru rw ...passed 00:11:53.496 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.496 Test: blockdev nvme admin passthru ...passed 00:11:53.496 Test: blockdev copy ...passed 00:11:53.496 Suite: bdevio tests on: Malloc2p0 00:11:53.496 Test: blockdev write read block ...passed 00:11:53.497 Test: blockdev write zeroes read block ...passed 00:11:53.497 Test: blockdev write zeroes read no split ...passed 00:11:53.497 Test: blockdev write zeroes read split ...passed 00:11:53.497 Test: blockdev write zeroes read split partial ...passed 00:11:53.497 Test: blockdev reset ...passed 00:11:53.497 Test: blockdev write read 8 blocks ...passed 00:11:53.497 Test: blockdev write read size > 128k ...passed 00:11:53.497 Test: blockdev write read invalid size ...passed 00:11:53.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.497 Test: blockdev write read max offset ...passed 00:11:53.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.497 Test: blockdev writev readv 8 blocks ...passed 00:11:53.497 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.497 Test: blockdev writev readv block ...passed 00:11:53.497 Test: blockdev writev readv size > 128k ...passed 00:11:53.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.497 Test: blockdev comparev and writev ...passed 00:11:53.497 Test: blockdev nvme passthru rw ...passed 00:11:53.497 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.497 Test: blockdev nvme admin passthru ...passed 00:11:53.497 Test: blockdev copy ...passed 00:11:53.497 Suite: bdevio tests on: Malloc1p1 00:11:53.497 Test: blockdev write read block ...passed 00:11:53.497 Test: blockdev write zeroes read block ...passed 00:11:53.497 Test: blockdev write zeroes read no split ...passed 00:11:53.497 Test: blockdev write zeroes read split ...passed 00:11:53.497 Test: blockdev write zeroes read split partial ...passed 00:11:53.497 Test: blockdev reset ...passed 00:11:53.497 Test: blockdev write read 8 blocks ...passed 00:11:53.497 Test: blockdev write read size > 128k ...passed 00:11:53.497 Test: blockdev write read invalid size ...passed 00:11:53.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.497 Test: blockdev write read max offset ...passed 00:11:53.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.497 Test: blockdev writev readv 8 blocks ...passed 00:11:53.497 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.497 Test: blockdev writev readv block ...passed 00:11:53.497 Test: blockdev writev readv size > 128k ...passed 00:11:53.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.497 Test: blockdev comparev and writev ...passed 00:11:53.497 Test: blockdev nvme passthru rw ...passed 00:11:53.497 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.497 Test: blockdev nvme admin passthru ...passed 00:11:53.497 Test: blockdev copy ...passed 00:11:53.497 Suite: bdevio tests on: Malloc1p0 
00:11:53.497 Test: blockdev write read block ...passed 00:11:53.497 Test: blockdev write zeroes read block ...passed 00:11:53.497 Test: blockdev write zeroes read no split ...passed 00:11:53.497 Test: blockdev write zeroes read split ...passed 00:11:53.497 Test: blockdev write zeroes read split partial ...passed 00:11:53.497 Test: blockdev reset ...passed 00:11:53.497 Test: blockdev write read 8 blocks ...passed 00:11:53.497 Test: blockdev write read size > 128k ...passed 00:11:53.497 Test: blockdev write read invalid size ...passed 00:11:53.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.497 Test: blockdev write read max offset ...passed 00:11:53.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.497 Test: blockdev writev readv 8 blocks ...passed 00:11:53.497 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.497 Test: blockdev writev readv block ...passed 00:11:53.497 Test: blockdev writev readv size > 128k ...passed 00:11:53.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.497 Test: blockdev comparev and writev ...passed 00:11:53.497 Test: blockdev nvme passthru rw ...passed 00:11:53.497 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.497 Test: blockdev nvme admin passthru ...passed 00:11:53.497 Test: blockdev copy ...passed 00:11:53.497 Suite: bdevio tests on: Malloc0 00:11:53.497 Test: blockdev write read block ...passed 00:11:53.497 Test: blockdev write zeroes read block ...passed 00:11:53.497 Test: blockdev write zeroes read no split ...passed 00:11:53.497 Test: blockdev write zeroes read split ...passed 00:11:53.497 Test: blockdev write zeroes read split partial ...passed 00:11:53.497 Test: blockdev reset ...passed 00:11:53.497 Test: blockdev write read 8 blocks ...passed 00:11:53.497 Test: blockdev write read size > 128k ...passed 00:11:53.497 Test: blockdev write read invalid size ...passed 00:11:53.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:53.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:53.497 Test: blockdev write read max offset ...passed 00:11:53.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:53.497 Test: blockdev writev readv 8 blocks ...passed 00:11:53.497 Test: blockdev writev readv 30 x 1block ...passed 00:11:53.497 Test: blockdev writev readv block ...passed 00:11:53.497 Test: blockdev writev readv size > 128k ...passed 00:11:53.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:53.497 Test: blockdev comparev and writev ...passed 00:11:53.497 Test: blockdev nvme passthru rw ...passed 00:11:53.497 Test: blockdev nvme passthru vendor specific ...passed 00:11:53.497 Test: blockdev nvme admin passthru ...passed 00:11:53.497 Test: blockdev copy ...passed 00:11:53.497 00:11:53.497 Run Summary: Type Total Ran Passed Failed Inactive 00:11:53.497 suites 16 16 n/a 0 0 00:11:53.497 tests 368 368 368 0 0 00:11:53.497 asserts 2224 2224 2224 0 n/a 00:11:53.497 00:11:53.497 Elapsed time = 0.717 seconds 00:11:53.497 0 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 128037 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 128037 ']' 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 128037 00:11:53.497 11:53:52 
blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 128037 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:53.497 killing process with pid 128037 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 128037' 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@965 -- # kill 128037 00:11:53.497 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@970 -- # wait 128037 00:11:54.064 11:53:52 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:11:54.064 00:11:54.064 real 0m2.023s 00:11:54.064 user 0m4.737s 00:11:54.064 sys 0m0.588s 00:11:54.064 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:54.064 11:53:52 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:11:54.064 ************************************ 00:11:54.064 END TEST bdev_bounds 00:11:54.064 ************************************ 00:11:54.064 11:53:52 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:54.064 11:53:52 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:11:54.064 11:53:52 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:54.064 11:53:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:11:54.064 ************************************ 00:11:54.064 START TEST bdev_nbd 00:11:54.064 ************************************ 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=16 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=16 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=128100 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 128100 /var/tmp/spdk-nbd.sock 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 128100 ']' 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:54.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:54.064 11:53:52 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:11:54.064 [2024-07-21 11:53:52.857252] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:11:54.064 [2024-07-21 11:53:52.857502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.322 [2024-07-21 11:53:53.016318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.322 [2024-07-21 11:53:53.124348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.580 [2024-07-21 11:53:53.310021] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:54.580 [2024-07-21 11:53:53.310168] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:54.580 [2024-07-21 11:53:53.317909] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:54.580 [2024-07-21 11:53:53.317980] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:54.580 [2024-07-21 11:53:53.325994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:54.580 [2024-07-21 11:53:53.326075] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:54.580 [2024-07-21 11:53:53.326133] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:54.580 [2024-07-21 11:53:53.435272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:54.580 [2024-07-21 11:53:53.435419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.580 [2024-07-21 11:53:53.435478] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:54.580 [2024-07-21 11:53:53.435515] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.580 [2024-07-21 11:53:53.438054] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.580 [2024-07-21 11:53:53.438112] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 
'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.145 11:53:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.405 1+0 records in 00:11:55.405 1+0 records out 00:11:55.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330861 s, 12.4 MB/s 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.405 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:11:55.663 11:53:54 
blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.663 1+0 records in 00:11:55.663 1+0 records out 00:11:55.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324489 s, 12.6 MB/s 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.663 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:55.664 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.664 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:55.664 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:55.664 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.664 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.664 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.922 1+0 records in 00:11:55.922 1+0 records out 00:11:55.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381603 s, 10.7 MB/s 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@882 -- # size=4096 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:55.922 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.180 1+0 records in 00:11:56.180 1+0 records out 00:11:56.180 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302967 s, 13.5 MB/s 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.180 11:53:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@865 -- # local i 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.438 1+0 records in 00:11:56.438 1+0 records out 00:11:56.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458364 s, 8.9 MB/s 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.438 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:56.695 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:56.695 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.696 1+0 records in 00:11:56.696 1+0 records out 00:11:56.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000593942 s, 6.9 MB/s 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 
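Every nbd hook-up in this stretch follows the same recipe, and it repeats below for the remaining bdevs: nbd_start_disk is called on the bdev_svc app's RPC socket with only the bdev name, rpc.py prints the /dev/nbdN node it was assigned, the helper greps /proc/partitions (retrying up to 20 times) until the kernel has registered that node, and a single 4 KiB O_DIRECT dd proves the export actually serves reads. A condensed sketch of one iteration, using the socket path and scratch file from the trace (the real helper wraps the grep in its retry loop):

  cd /home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-nbd.sock
  # export the bdev through the kernel nbd driver; the RPC prints the allocated node
  dev=$(./scripts/rpc.py -s "$sock" nbd_start_disk Malloc2p2)
  # wait until the kernel lists the node (the helper retries this check up to 20 times)
  grep -q -w "${dev#/dev/}" /proc/partitions
  # one 4 KiB direct read to confirm the device answers I/O
  dd if="$dev" of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
  rm -f test/bdev/nbdtest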
00:11:56.696 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.953 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:56.953 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:56.953 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:56.953 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:56.953 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.211 1+0 records in 00:11:57.211 1+0 records out 00:11:57.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000653171 s, 6.3 MB/s 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.211 11:53:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd7 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:57.469 11:53:56 
blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd7 /proc/partitions 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.469 1+0 records in 00:11:57.469 1+0 records out 00:11:57.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526942 s, 7.8 MB/s 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.469 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd8 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd8 /proc/partitions 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.727 1+0 records in 00:11:57.727 1+0 records out 00:11:57.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000750302 s, 5.5 MB/s 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.727 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd9 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd9 /proc/partitions 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.986 1+0 records in 00:11:57.986 1+0 records out 00:11:57.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666268 s, 6.1 MB/s 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:57.986 11:53:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.553 1+0 records in 00:11:58.553 1+0 records out 00:11:58.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000998 s, 4.1 MB/s 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:58.553 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.811 1+0 records in 00:11:58.811 1+0 records out 00:11:58.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00117255 s, 3.5 MB/s 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.811 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:58.812 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.812 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:58.812 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:58.812 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:58.812 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:58.812 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.070 1+0 records in 00:11:59.070 1+0 records out 00:11:59.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000901979 s, 4.5 MB/s 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:59.070 11:53:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:59.328 11:53:58 
blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.328 1+0 records in 00:11:59.328 1+0 records out 00:11:59.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0020265 s, 2.0 MB/s 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:59.328 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.586 1+0 records in 00:11:59.586 1+0 records out 00:11:59.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121835 s, 3.4 MB/s 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.586 11:53:58 
blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:59.586 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:59.844 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:59.844 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:59.844 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd15 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd15 /proc/partitions 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:00.101 1+0 records in 00:12:00.101 1+0 records out 00:12:00.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141156 s, 2.9 MB/s 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:12:00.101 11:53:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd0", 00:12:00.359 "bdev_name": "Malloc0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd1", 00:12:00.359 "bdev_name": "Malloc1p0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd2", 00:12:00.359 "bdev_name": "Malloc1p1" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd3", 00:12:00.359 "bdev_name": "Malloc2p0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd4", 00:12:00.359 "bdev_name": "Malloc2p1" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": 
"/dev/nbd5", 00:12:00.359 "bdev_name": "Malloc2p2" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd6", 00:12:00.359 "bdev_name": "Malloc2p3" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd7", 00:12:00.359 "bdev_name": "Malloc2p4" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd8", 00:12:00.359 "bdev_name": "Malloc2p5" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd9", 00:12:00.359 "bdev_name": "Malloc2p6" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd10", 00:12:00.359 "bdev_name": "Malloc2p7" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd11", 00:12:00.359 "bdev_name": "TestPT" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd12", 00:12:00.359 "bdev_name": "raid0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd13", 00:12:00.359 "bdev_name": "concat0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd14", 00:12:00.359 "bdev_name": "raid1" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd15", 00:12:00.359 "bdev_name": "AIO0" 00:12:00.359 } 00:12:00.359 ]' 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd0", 00:12:00.359 "bdev_name": "Malloc0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd1", 00:12:00.359 "bdev_name": "Malloc1p0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd2", 00:12:00.359 "bdev_name": "Malloc1p1" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd3", 00:12:00.359 "bdev_name": "Malloc2p0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd4", 00:12:00.359 "bdev_name": "Malloc2p1" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd5", 00:12:00.359 "bdev_name": "Malloc2p2" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd6", 00:12:00.359 "bdev_name": "Malloc2p3" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd7", 00:12:00.359 "bdev_name": "Malloc2p4" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd8", 00:12:00.359 "bdev_name": "Malloc2p5" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd9", 00:12:00.359 "bdev_name": "Malloc2p6" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd10", 00:12:00.359 "bdev_name": "Malloc2p7" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd11", 00:12:00.359 "bdev_name": "TestPT" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd12", 00:12:00.359 "bdev_name": "raid0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd13", 00:12:00.359 "bdev_name": "concat0" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd14", 00:12:00.359 "bdev_name": "raid1" 00:12:00.359 }, 00:12:00.359 { 00:12:00.359 "nbd_device": "/dev/nbd15", 00:12:00.359 "bdev_name": "AIO0" 00:12:00.359 } 00:12:00.359 ]' 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:12:00.359 
11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.359 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.617 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:00.875 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.133 11:53:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.392 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:01.650 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:01.651 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:01.651 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:01.651 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.651 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.651 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:01.651 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:01.651 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.651 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.651 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.908 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.166 11:54:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.424 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.682 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:02.940 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:02.940 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:02.940 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:02.940 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.940 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.940 11:54:01 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:02.940 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:02.940 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.940 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.940 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.197 11:54:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.455 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.713 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.970 11:54:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.236 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:04.493 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:04.493 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:04.493 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:04.493 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.494 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.494 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:04.494 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:04.494 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.494 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:04.494 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:04.494 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:04.751 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:04.751 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:04.751 11:54:03 
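Note: the teardown loop above mirrors the start-up check: for every device in nbd_list the test calls nbd_stop_disk, and waitfornbd_exit then polls /proc/partitions until the entry disappears, which is why each block ends with a break followed by return 0. A condensed sketch, with the poll delay again assumed rather than taken from the trace:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Finished as soon as the device is no longer listed in /proc/partitions.
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed delay between polls; the stops here complete immediately
        done
        return 0
    }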
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:12:05.009 11:54:03 
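Note: after the stop loop the test re-queries nbd_get_disks, receives the empty '[]' shown above, and counts /dev/nbd matches to confirm that zero exports survive before nbd_rpc_data_verify re-exports the bdevs against a deliberately shuffled device list. The zero-count check amounts to something like the following; the || true handling is an assumption to keep a strict (set -e) script alive when grep -c prints 0 and exits non-zero:

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    if [ "$count" -ne 0 ]; then
        echo "stale nbd exports remain: $count" >&2
        exit 1
    fi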
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.009 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:12:05.268 /dev/nbd0 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.268 1+0 records in 00:12:05.268 1+0 records out 00:12:05.268 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373419 s, 11.0 MB/s 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.268 11:54:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:12:05.525 /dev/nbd1 00:12:05.525 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:05.525 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:05.525 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:12:05.525 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:05.525 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:05.525 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:05.526 
11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.526 1+0 records in 00:12:05.526 1+0 records out 00:12:05.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525493 s, 7.8 MB/s 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.526 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:12:05.783 /dev/nbd10 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.783 1+0 records in 00:12:05.783 1+0 records out 00:12:05.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431847 s, 9.5 MB/s 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:05.783 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:12:06.041 /dev/nbd11 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.041 1+0 records in 00:12:06.041 1+0 records out 00:12:06.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598402 s, 6.8 MB/s 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.041 11:54:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:12:06.299 /dev/nbd12 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.299 1+0 records in 00:12:06.299 1+0 records out 00:12:06.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000730092 s, 5.6 MB/s 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.299 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:12:06.557 /dev/nbd13 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:06.557 1+0 records in 00:12:06.557 1+0 records out 00:12:06.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000709182 s, 5.8 MB/s 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:06.557 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:12:07.124 /dev/nbd14 00:12:07.124 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd14 00:12:07.124 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:12:07.124 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.125 1+0 records in 00:12:07.125 1+0 records out 00:12:07.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735275 s, 5.6 MB/s 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:12:07.125 /dev/nbd15 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd15 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd15 /proc/partitions 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.125 1+0 records in 00:12:07.125 1+0 records out 00:12:07.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668302 s, 6.1 MB/s 00:12:07.125 11:54:05 
blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:07.125 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.382 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.382 11:54:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:12:07.382 /dev/nbd2 00:12:07.382 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:12:07.382 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:12:07.382 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:12:07.382 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:07.382 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:07.382 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:07.382 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.640 1+0 records in 00:12:07.640 1+0 records out 00:12:07.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747792 s, 5.5 MB/s 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.640 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:12:07.640 /dev/nbd3 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:12:07.898 11:54:06 
blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.898 1+0 records in 00:12:07.898 1+0 records out 00:12:07.898 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00079734 s, 5.1 MB/s 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:07.898 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:12:08.156 /dev/nbd4 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.156 1+0 records in 00:12:08.156 1+0 records out 00:12:08.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00075227 s, 5.4 MB/s 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- 
common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:08.156 11:54:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:12:08.414 /dev/nbd5 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.414 1+0 records in 00:12:08.414 1+0 records out 00:12:08.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000968983 s, 4.2 MB/s 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:08.414 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:12:08.672 /dev/nbd6 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:08.672 11:54:07 
blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.672 1+0 records in 00:12:08.672 1+0 records out 00:12:08.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00085199 s, 4.8 MB/s 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:08.672 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.930 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:08.930 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:08.930 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.930 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:08.930 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:12:08.930 /dev/nbd7 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd7 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd7 /proc/partitions 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.187 1+0 records in 00:12:09.187 1+0 records out 00:12:09.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923834 s, 4.4 MB/s 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 
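The trace repeats the same attach-and-probe pattern for every bdev it exports (Malloc2p0 through AIO0): start the NBD device over the RPC socket, poll /proc/partitions until the kernel lists it, then issue one 4 KiB direct read to prove the export actually serves I/O. The following is a minimal sketch of that pattern, not part of the captured log; it assumes an SPDK target is already listening on /var/tmp/spdk-nbd.sock, and wait_for_nbd is an illustrative helper rather than the suite's own function.

    # --- hedged sketch, not part of the captured log ---
    set -euo pipefail

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk-nbd.sock

    wait_for_nbd() {
        local name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # nbd devices show up in /proc/partitions once the kernel has connected them
            grep -q -w "$name" /proc/partitions && break
            sleep 0.1
        done
        # a single 4 KiB direct read confirms the device answers I/O
        dd if="/dev/$name" of=/tmp/nbdprobe bs=4096 count=1 iflag=direct
        test "$(stat -c %s /tmp/nbdprobe)" -ne 0
        rm -f /tmp/nbdprobe
    }

    "$RPC" -s "$SOCK" nbd_start_disk Malloc2p0 /dev/nbd11
    wait_for_nbd nbd11
    # --- end sketch ---
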
00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:09.187 11:54:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:12:09.446 /dev/nbd8 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd8 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd8 /proc/partitions 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.446 1+0 records in 00:12:09.446 1+0 records out 00:12:09.446 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000911267 s, 4.5 MB/s 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:09.446 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:12:09.704 /dev/nbd9 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd9 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd9 /proc/partitions 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 
00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:09.704 1+0 records in 00:12:09.704 1+0 records out 00:12:09.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116333 s, 3.5 MB/s 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:09.704 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:09.963 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd0", 00:12:09.963 "bdev_name": "Malloc0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd1", 00:12:09.963 "bdev_name": "Malloc1p0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd10", 00:12:09.963 "bdev_name": "Malloc1p1" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd11", 00:12:09.963 "bdev_name": "Malloc2p0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd12", 00:12:09.963 "bdev_name": "Malloc2p1" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd13", 00:12:09.963 "bdev_name": "Malloc2p2" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd14", 00:12:09.963 "bdev_name": "Malloc2p3" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd15", 00:12:09.963 "bdev_name": "Malloc2p4" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd2", 00:12:09.963 "bdev_name": "Malloc2p5" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd3", 00:12:09.963 "bdev_name": "Malloc2p6" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd4", 00:12:09.963 "bdev_name": "Malloc2p7" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd5", 00:12:09.963 "bdev_name": "TestPT" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd6", 00:12:09.963 "bdev_name": "raid0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd7", 00:12:09.963 "bdev_name": "concat0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd8", 00:12:09.963 "bdev_name": "raid1" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd9", 00:12:09.963 "bdev_name": "AIO0" 00:12:09.963 } 00:12:09.963 ]' 00:12:09.963 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd0", 
00:12:09.963 "bdev_name": "Malloc0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd1", 00:12:09.963 "bdev_name": "Malloc1p0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd10", 00:12:09.963 "bdev_name": "Malloc1p1" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd11", 00:12:09.963 "bdev_name": "Malloc2p0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd12", 00:12:09.963 "bdev_name": "Malloc2p1" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd13", 00:12:09.963 "bdev_name": "Malloc2p2" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd14", 00:12:09.963 "bdev_name": "Malloc2p3" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd15", 00:12:09.963 "bdev_name": "Malloc2p4" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd2", 00:12:09.963 "bdev_name": "Malloc2p5" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd3", 00:12:09.963 "bdev_name": "Malloc2p6" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd4", 00:12:09.963 "bdev_name": "Malloc2p7" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd5", 00:12:09.963 "bdev_name": "TestPT" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd6", 00:12:09.963 "bdev_name": "raid0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd7", 00:12:09.963 "bdev_name": "concat0" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd8", 00:12:09.963 "bdev_name": "raid1" 00:12:09.963 }, 00:12:09.963 { 00:12:09.963 "nbd_device": "/dev/nbd9", 00:12:09.963 "bdev_name": "AIO0" 00:12:09.963 } 00:12:09.963 ]' 00:12:09.963 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:09.963 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:09.963 /dev/nbd1 00:12:09.963 /dev/nbd10 00:12:09.963 /dev/nbd11 00:12:09.963 /dev/nbd12 00:12:09.963 /dev/nbd13 00:12:09.963 /dev/nbd14 00:12:09.963 /dev/nbd15 00:12:09.963 /dev/nbd2 00:12:09.963 /dev/nbd3 00:12:09.963 /dev/nbd4 00:12:09.963 /dev/nbd5 00:12:09.963 /dev/nbd6 00:12:09.963 /dev/nbd7 00:12:09.963 /dev/nbd8 00:12:09.963 /dev/nbd9' 00:12:09.963 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:09.963 /dev/nbd1 00:12:09.963 /dev/nbd10 00:12:09.963 /dev/nbd11 00:12:09.963 /dev/nbd12 00:12:09.963 /dev/nbd13 00:12:09.963 /dev/nbd14 00:12:09.963 /dev/nbd15 00:12:09.963 /dev/nbd2 00:12:09.963 /dev/nbd3 00:12:09.963 /dev/nbd4 00:12:09.964 /dev/nbd5 00:12:09.964 /dev/nbd6 00:12:09.964 /dev/nbd7 00:12:09.964 /dev/nbd8 00:12:09.964 /dev/nbd9' 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:09.964 256+0 records in 00:12:09.964 256+0 records out 00:12:09.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01116 s, 94.0 MB/s 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:09.964 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:10.222 256+0 records in 00:12:10.222 256+0 records out 00:12:10.222 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.175671 s, 6.0 MB/s 00:12:10.222 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.222 11:54:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:10.480 256+0 records in 00:12:10.480 256+0 records out 00:12:10.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147996 s, 7.1 MB/s 00:12:10.480 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.480 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:10.480 256+0 records in 00:12:10.480 256+0 records out 00:12:10.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154572 s, 6.8 MB/s 00:12:10.480 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.480 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:10.738 256+0 records in 00:12:10.738 256+0 records out 00:12:10.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143048 s, 7.3 MB/s 00:12:10.738 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.738 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:10.738 256+0 records in 00:12:10.738 256+0 records out 00:12:10.738 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.147901 s, 7.1 MB/s 00:12:10.738 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.738 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:10.997 256+0 records in 00:12:10.997 256+0 records out 00:12:10.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135797 s, 7.7 MB/s 00:12:10.997 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.997 11:54:09 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:12:10.997 256+0 records in 00:12:10.997 256+0 records out 00:12:10.997 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130863 s, 8.0 MB/s 00:12:10.997 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:10.997 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:12:11.255 256+0 records in 00:12:11.255 256+0 records out 00:12:11.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129399 s, 8.1 MB/s 00:12:11.255 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.255 11:54:09 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:12:11.255 256+0 records in 00:12:11.255 256+0 records out 00:12:11.255 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135655 s, 7.7 MB/s 00:12:11.526 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.526 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:12:11.526 256+0 records in 00:12:11.526 256+0 records out 00:12:11.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128122 s, 8.2 MB/s 00:12:11.526 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.526 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:12:11.821 256+0 records in 00:12:11.821 256+0 records out 00:12:11.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136539 s, 7.7 MB/s 00:12:11.821 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.821 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:12:11.821 256+0 records in 00:12:11.821 256+0 records out 00:12:11.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139647 s, 7.5 MB/s 00:12:11.821 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:11.821 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:12.094 256+0 records in 00:12:12.094 256+0 records out 00:12:12.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145755 s, 7.2 MB/s 00:12:12.094 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:12.094 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:12.094 256+0 records in 00:12:12.094 256+0 records out 00:12:12.094 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133368 s, 7.9 MB/s 00:12:12.094 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:12.094 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:12.353 256+0 records in 00:12:12.353 256+0 
records out 00:12:12.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135359 s, 7.7 MB/s 00:12:12.353 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:12.353 11:54:10 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:12.353 256+0 records in 00:12:12.353 256+0 records out 00:12:12.353 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.187191 s, 5.6 MB/s 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.353 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.611 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:12.869 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.869 11:54:11 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.869 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.869 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.869 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.869 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.869 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:12.869 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.869 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.869 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.127 11:54:11 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.385 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q 
-w nbd11 /proc/partitions 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:13.643 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.209 11:54:12 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.467 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.727 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:14.985 11:54:13 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.549 
11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.549 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:15.806 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.063 11:54:14 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:16.628 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:16.886 11:54:15 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/blockdev.sh@324 -- # 
nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:17.452 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:17.452 malloc_lvol_verify 00:12:17.710 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:17.968 12fae7d8-f11d-458c-bad5-e4e371aad8fa 00:12:17.968 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:18.226 47f1cf27-859d-4276-afaa-01f61f824ebf 00:12:18.226 11:54:16 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:18.483 /dev/nbd0 00:12:18.483 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:18.483 mke2fs 1.46.5 (30-Dec-2021) 00:12:18.483 00:12:18.483 Filesystem too small for a journal 00:12:18.483 Discarding device blocks: 0/1024 done 00:12:18.483 Creating filesystem with 1024 4k blocks and 1024 inodes 00:12:18.483 00:12:18.483 Allocating group tables: 0/1 done 00:12:18.483 Writing inode tables: 0/1 done 00:12:18.483 Writing superblocks and filesystem accounting information: 0/1 done 00:12:18.483 00:12:18.483 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:18.483 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:18.483 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:18.483 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:18.483 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.483 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:12:18.483 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.483 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.742 11:54:17 
blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 128100 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 128100 ']' 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 128100 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 128100 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:18.742 killing process with pid 128100 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 128100' 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@965 -- # kill 128100 00:12:18.742 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@970 -- # wait 128100 00:12:19.000 11:54:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:12:19.000 00:12:19.000 real 0m25.068s 00:12:19.000 user 0m35.823s 00:12:19.000 sys 0m9.173s 00:12:19.000 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.000 ************************************ 00:12:19.000 END TEST bdev_nbd 00:12:19.000 ************************************ 00:12:19.000 11:54:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:12:19.259 11:54:17 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:12:19.259 11:54:17 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:12:19.259 11:54:17 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:12:19.259 11:54:17 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:12:19.259 11:54:17 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:19.259 11:54:17 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.259 11:54:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:19.259 ************************************ 00:12:19.259 START TEST bdev_fio 00:12:19.259 ************************************ 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:19.259 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT 
SIGTERM EXIT 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:12:19.259 11:54:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@343 -- # echo filename=AIO0 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.259 11:54:18 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:19.259 ************************************ 00:12:19.259 START TEST bdev_fio_rw_verify 00:12:19.259 ************************************ 00:12:19.259 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:19.259 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:19.259 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:12:19.259 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:19.259 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:12:19.259 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:19.259 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:12:19.259 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:12:19.260 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:12:19.260 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:19.260 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:12:19.260 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:12:19.260 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:19.260 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n 
/lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:19.260 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:12:19.260 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:19.260 11:54:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:19.518 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:19.518 fio-3.35 00:12:19.518 Starting 16 threads 00:12:31.713 00:12:31.713 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=129265: Sun Jul 21 11:54:29 2024 00:12:31.713 read: IOPS=61.8k, BW=242MiB/s (253MB/s)(2417MiB/10004msec) 00:12:31.713 slat (usec): min=2, max=60786, avg=47.72, stdev=526.86 00:12:31.713 clat (usec): min=10, max=61133, avg=380.01, stdev=1500.35 00:12:31.713 lat (usec): min=32, max=61165, avg=427.74, stdev=1589.66 00:12:31.713 clat percentiles (usec): 00:12:31.713 | 50.000th=[ 231], 99.000th=[ 1565], 99.900th=[16909], 99.990th=[30016], 00:12:31.713 | 99.999th=[47973] 00:12:31.713 write: IOPS=97.9k, BW=383MiB/s (401MB/s)(3789MiB/9905msec); 0 zone resets 00:12:31.713 slat (usec): min=4, 
max=60083, avg=82.59, stdev=773.89 00:12:31.713 clat (usec): min=12, max=60700, avg=474.59, stdev=1732.60 00:12:31.713 lat (usec): min=42, max=60754, avg=557.18, stdev=1897.11 00:12:31.713 clat percentiles (usec): 00:12:31.713 | 50.000th=[ 281], 99.000th=[10159], 99.900th=[21890], 99.990th=[38536], 00:12:31.713 | 99.999th=[55313] 00:12:31.713 bw ( KiB/s): min=227151, max=619992, per=98.96%, avg=387668.63, stdev=6498.18, samples=304 00:12:31.713 iops : min=56787, max=154998, avg=96917.21, stdev=1624.54, samples=304 00:12:31.713 lat (usec) : 20=0.01%, 50=0.18%, 100=4.92%, 250=42.48%, 500=47.39% 00:12:31.713 lat (usec) : 750=3.16%, 1000=0.27% 00:12:31.713 lat (msec) : 2=0.36%, 4=0.10%, 10=0.21%, 20=0.81%, 50=0.12% 00:12:31.713 lat (msec) : 100=0.01% 00:12:31.713 cpu : usr=55.99%, sys=2.17%, ctx=224261, majf=2, minf=77683 00:12:31.713 IO depths : 1=11.0%, 2=23.4%, 4=52.3%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:31.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.713 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:31.713 issued rwts: total=618625,970055,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:31.713 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:31.713 00:12:31.713 Run status group 0 (all jobs): 00:12:31.713 READ: bw=242MiB/s (253MB/s), 242MiB/s-242MiB/s (253MB/s-253MB/s), io=2417MiB (2534MB), run=10004-10004msec 00:12:31.713 WRITE: bw=383MiB/s (401MB/s), 383MiB/s-383MiB/s (401MB/s-401MB/s), io=3789MiB (3973MB), run=9905-9905msec 00:12:31.713 ----------------------------------------------------- 00:12:31.713 Suppressions used: 00:12:31.713 count bytes template 00:12:31.713 16 140 /usr/src/fio/parse.c 00:12:31.713 12002 1152192 /usr/src/fio/iolog.c 00:12:31.713 1 904 libcrypto.so 00:12:31.713 ----------------------------------------------------- 00:12:31.713 00:12:31.713 00:12:31.713 real 0m12.148s 00:12:31.713 user 1m32.691s 00:12:31.714 sys 0m4.433s 00:12:31.714 11:54:30 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:31.714 ************************************ 00:12:31.714 END TEST bdev_fio_rw_verify 00:12:31.714 ************************************ 00:12:31.714 11:54:30 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:12:31.714 
11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:12:31.714 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:31.715 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "57604208-ec50-4e95-b027-42b1e5b8cc95"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "57604208-ec50-4e95-b027-42b1e5b8cc95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5fbbd973-6c68-58ff-bcb0-f4d74b17dc1a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5fbbd973-6c68-58ff-bcb0-f4d74b17dc1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "ffee020e-c9bd-56ab-be9f-32685e3c5d8c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ffee020e-c9bd-56ab-be9f-32685e3c5d8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "66d13f1c-b62e-5885-b0f4-473c6a5097fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "66d13f1c-b62e-5885-b0f4-473c6a5097fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "3d4cff75-ba0d-5e76-903c-3fd4a8b623b8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3d4cff75-ba0d-5e76-903c-3fd4a8b623b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "133ff6dd-0f0c-5f3d-8fa1-b3058481f83e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "133ff6dd-0f0c-5f3d-8fa1-b3058481f83e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "ba4a44f0-0599-54b6-94c3-22758c2839de"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ba4a44f0-0599-54b6-94c3-22758c2839de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "b4845a15-2e4e-5dd3-a38f-971d43bc682f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b4845a15-2e4e-5dd3-a38f-971d43bc682f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": 
"Malloc2p5",' ' "aliases": [' ' "5722021f-d286-51a3-9df5-f79953a7e262"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5722021f-d286-51a3-9df5-f79953a7e262",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "324b904d-99c1-589f-b514-090b3a2513ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "324b904d-99c1-589f-b514-090b3a2513ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d67b4f9d-407f-5b45-b751-1205a5eee490"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d67b4f9d-407f-5b45-b751-1205a5eee490",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "c6ec2d5c-d72d-5a33-818e-edeac579c00a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c6ec2d5c-d72d-5a33-818e-edeac579c00a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "02944355-d0ad-474a-a774-e3e95ea43665"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "02944355-d0ad-474a-a774-e3e95ea43665",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "02944355-d0ad-474a-a774-e3e95ea43665",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c60f576d-08b3-40b4-86b7-cd812316eaf3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "83ef698e-5720-49e1-8a2c-5ec45a539984",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "bcefb08f-2d86-4673-aef5-b7e9417ece8f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bcefb08f-2d86-4673-aef5-b7e9417ece8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bcefb08f-2d86-4673-aef5-b7e9417ece8f",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "98d18dd4-f5bd-4962-baf9-71e444eebf6f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "72ecb791-bcaf-449a-871c-aa20bf252e5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "8b153573-42de-49bf-a29d-a8506eab0c67"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8b153573-42de-49bf-a29d-a8506eab0c67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8b153573-42de-49bf-a29d-a8506eab0c67",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b518bc90-1777-4e30-b5ea-4c07f99a66cb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "77a8a482-2f14-4907-b944-1c0dbd13f1bc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "184a5368-019c-4651-9d54-30b5bd99faf2"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "184a5368-019c-4651-9d54-30b5bd99faf2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:31.715 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:12:31.715 Malloc1p0 00:12:31.715 Malloc1p1 00:12:31.715 Malloc2p0 00:12:31.715 Malloc2p1 00:12:31.715 Malloc2p2 00:12:31.715 Malloc2p3 00:12:31.715 Malloc2p4 00:12:31.715 Malloc2p5 00:12:31.715 Malloc2p6 00:12:31.715 Malloc2p7 00:12:31.715 TestPT 00:12:31.715 raid0 00:12:31.715 concat0 ]] 00:12:31.715 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "57604208-ec50-4e95-b027-42b1e5b8cc95"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "57604208-ec50-4e95-b027-42b1e5b8cc95",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "5fbbd973-6c68-58ff-bcb0-f4d74b17dc1a"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "5fbbd973-6c68-58ff-bcb0-f4d74b17dc1a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "ffee020e-c9bd-56ab-be9f-32685e3c5d8c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "ffee020e-c9bd-56ab-be9f-32685e3c5d8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "66d13f1c-b62e-5885-b0f4-473c6a5097fe"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "66d13f1c-b62e-5885-b0f4-473c6a5097fe",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "3d4cff75-ba0d-5e76-903c-3fd4a8b623b8"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3d4cff75-ba0d-5e76-903c-3fd4a8b623b8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "133ff6dd-0f0c-5f3d-8fa1-b3058481f83e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "133ff6dd-0f0c-5f3d-8fa1-b3058481f83e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": 
"Malloc2p3",' ' "aliases": [' ' "ba4a44f0-0599-54b6-94c3-22758c2839de"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ba4a44f0-0599-54b6-94c3-22758c2839de",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "b4845a15-2e4e-5dd3-a38f-971d43bc682f"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b4845a15-2e4e-5dd3-a38f-971d43bc682f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "5722021f-d286-51a3-9df5-f79953a7e262"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "5722021f-d286-51a3-9df5-f79953a7e262",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "324b904d-99c1-589f-b514-090b3a2513ef"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "324b904d-99c1-589f-b514-090b3a2513ef",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "d67b4f9d-407f-5b45-b751-1205a5eee490"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d67b4f9d-407f-5b45-b751-1205a5eee490",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' 
"flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "c6ec2d5c-d72d-5a33-818e-edeac579c00a"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c6ec2d5c-d72d-5a33-818e-edeac579c00a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "02944355-d0ad-474a-a774-e3e95ea43665"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "02944355-d0ad-474a-a774-e3e95ea43665",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "02944355-d0ad-474a-a774-e3e95ea43665",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "c60f576d-08b3-40b4-86b7-cd812316eaf3",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "83ef698e-5720-49e1-8a2c-5ec45a539984",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "bcefb08f-2d86-4673-aef5-b7e9417ece8f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bcefb08f-2d86-4673-aef5-b7e9417ece8f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "bcefb08f-2d86-4673-aef5-b7e9417ece8f",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "98d18dd4-f5bd-4962-baf9-71e444eebf6f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "72ecb791-bcaf-449a-871c-aa20bf252e5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "8b153573-42de-49bf-a29d-a8506eab0c67"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "8b153573-42de-49bf-a29d-a8506eab0c67",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "8b153573-42de-49bf-a29d-a8506eab0c67",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b518bc90-1777-4e30-b5ea-4c07f99a66cb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "77a8a482-2f14-4907-b944-1c0dbd13f1bc",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "184a5368-019c-4651-9d54-30b5bd99faf2"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "184a5368-019c-4651-9d54-30b5bd99faf2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # 
echo '[job_Malloc0]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:12:31.716 11:54:30 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:31.716 11:54:30 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:31.716 ************************************ 00:12:31.716 START TEST bdev_fio_trim 00:12:31.716 ************************************ 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1335 -- # local sanitizers 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- 
common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # shift 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local asan_lib= 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # grep libasan 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1342 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # break 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:31.716 11:54:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:31.716 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.716 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.716 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 
job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:31.717 fio-3.35 00:12:31.717 Starting 14 threads 00:12:43.917 00:12:43.917 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=129463: Sun Jul 21 11:54:41 2024 00:12:43.917 write: IOPS=112k, BW=438MiB/s (459MB/s)(4384MiB/10006msec); 0 zone resets 00:12:43.917 slat (usec): min=3, max=28109, avg=45.94, stdev=432.14 00:12:43.917 clat (usec): min=24, max=36518, avg=303.15, stdev=1127.53 00:12:43.917 lat (usec): min=42, max=36561, avg=349.10, stdev=1206.93 00:12:43.917 clat percentiles (usec): 00:12:43.917 | 50.000th=[ 208], 99.000th=[ 453], 99.900th=[16319], 99.990th=[20317], 00:12:43.917 | 99.999th=[28181] 00:12:43.917 bw ( KiB/s): min=311849, max=633008, per=100.00%, avg=449044.89, stdev=7951.72, samples=266 00:12:43.917 iops : min=77962, max=158252, avg=112261.00, stdev=1987.93, samples=266 00:12:43.917 trim: IOPS=112k, BW=438MiB/s (459MB/s)(4384MiB/10006msec); 0 zone resets 00:12:43.917 slat (usec): min=4, max=24090, avg=30.49, stdev=359.48 00:12:43.917 clat (usec): min=21, max=36561, avg=344.52, stdev=1201.63 00:12:43.917 lat (usec): min=33, max=36575, avg=375.01, stdev=1253.86 00:12:43.917 clat percentiles (usec): 00:12:43.917 | 50.000th=[ 239], 99.000th=[ 510], 99.900th=[16319], 99.990th=[20317], 00:12:43.917 | 99.999th=[28181] 00:12:43.917 bw ( KiB/s): min=311857, max=633000, per=100.00%, avg=449044.89, stdev=7951.62, samples=266 00:12:43.917 iops : min=77964, max=158250, avg=112261.11, stdev=1987.90, samples=266 00:12:43.917 lat (usec) : 50=0.12%, 100=3.79%, 250=56.81%, 500=38.37%, 750=0.19% 00:12:43.917 lat (usec) : 1000=0.02% 00:12:43.917 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.65%, 50=0.01% 00:12:43.917 cpu : usr=68.77%, sys=0.58%, ctx=165916, majf=0, minf=9073 00:12:43.917 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:43.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.917 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.917 issued rwts: total=0,1122305,1122308,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.917 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:43.917 00:12:43.917 Run status group 0 (all jobs): 00:12:43.917 WRITE: bw=438MiB/s (459MB/s), 438MiB/s-438MiB/s (459MB/s-459MB/s), io=4384MiB (4597MB), run=10006-10006msec 00:12:43.917 TRIM: bw=438MiB/s (459MB/s), 438MiB/s-438MiB/s (459MB/s-459MB/s), io=4384MiB (4597MB), run=10006-10006msec 00:12:43.917 ----------------------------------------------------- 00:12:43.917 Suppressions used: 00:12:43.917 count bytes template 00:12:43.917 14 129 /usr/src/fio/parse.c 00:12:43.917 1 904 libcrypto.so 00:12:43.917 ----------------------------------------------------- 00:12:43.917 00:12:43.917 00:12:43.917 real 0m11.724s 00:12:43.917 user 1m38.997s 00:12:43.917 sys 0m1.709s 00:12:43.917 11:54:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.917 11:54:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 ************************************ 00:12:43.917 END TEST bdev_fio_trim 00:12:43.917 ************************************ 00:12:43.917 11:54:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:12:43.917 11:54:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:43.917 /home/vagrant/spdk_repo/spdk 00:12:43.917 11:54:42 
blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:12:43.917 11:54:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:12:43.917 00:12:43.917 real 0m24.224s 00:12:43.917 user 3m11.891s 00:12:43.917 sys 0m6.250s 00:12:43.917 11:54:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.917 ************************************ 00:12:43.917 END TEST bdev_fio 00:12:43.917 ************************************ 00:12:43.917 11:54:42 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 11:54:42 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:43.917 11:54:42 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:43.917 11:54:42 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:12:43.917 11:54:42 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.917 11:54:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:43.917 ************************************ 00:12:43.917 START TEST bdev_verify 00:12:43.917 ************************************ 00:12:43.917 11:54:42 blockdev_general.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:43.917 [2024-07-21 11:54:42.244918] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:12:43.917 [2024-07-21 11:54:42.245153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129631 ] 00:12:43.917 [2024-07-21 11:54:42.406742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:43.917 [2024-07-21 11:54:42.495319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.917 [2024-07-21 11:54:42.495338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.917 [2024-07-21 11:54:42.669366] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:43.917 [2024-07-21 11:54:42.669552] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:43.917 [2024-07-21 11:54:42.677277] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:43.917 [2024-07-21 11:54:42.677357] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:43.917 [2024-07-21 11:54:42.685365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:43.917 [2024-07-21 11:54:42.685471] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:43.917 [2024-07-21 11:54:42.685547] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:44.176 [2024-07-21 11:54:42.794406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:44.176 [2024-07-21 11:54:42.794606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.176 [2024-07-21 11:54:42.794699] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000009980 00:12:44.176 [2024-07-21 11:54:42.794737] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.176 [2024-07-21 11:54:42.797947] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.176 [2024-07-21 11:54:42.798016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:44.433 Running I/O for 5 seconds... 00:12:49.696 00:12:49.696 Latency(us) 00:12:49.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.696 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.696 Verification LBA range: start 0x0 length 0x1000 00:12:49.696 Malloc0 : 5.15 1267.57 4.95 0.00 0.00 100869.52 476.63 316479.30 00:12:49.696 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.696 Verification LBA range: start 0x1000 length 0x1000 00:12:49.697 Malloc0 : 5.16 1066.34 4.17 0.00 0.00 119873.05 763.35 335544.32 00:12:49.697 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x800 00:12:49.697 Malloc1p0 : 5.22 662.67 2.59 0.00 0.00 192592.78 3276.80 168725.41 00:12:49.697 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x800 length 0x800 00:12:49.697 Malloc1p0 : 5.23 562.39 2.20 0.00 0.00 226826.50 3515.11 176351.42 00:12:49.697 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x800 00:12:49.697 Malloc1p1 : 5.22 662.41 2.59 0.00 0.00 192298.09 2800.17 168725.41 00:12:49.697 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x800 length 0x800 00:12:49.697 Malloc1p1 : 5.24 562.02 2.20 0.00 0.00 226477.61 2964.01 177304.67 00:12:49.697 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x200 00:12:49.697 Malloc2p0 : 5.22 662.14 2.59 0.00 0.00 192044.57 2621.44 169678.66 00:12:49.697 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x200 length 0x200 00:12:49.697 Malloc2p0 : 5.24 561.36 2.19 0.00 0.00 226311.96 3112.96 176351.42 00:12:49.697 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x200 00:12:49.697 Malloc2p1 : 5.22 661.89 2.59 0.00 0.00 191787.04 2651.23 169678.66 00:12:49.697 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x200 length 0x200 00:12:49.697 Malloc2p1 : 5.25 560.97 2.19 0.00 0.00 226031.38 4319.42 174444.92 00:12:49.697 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x200 00:12:49.697 Malloc2p2 : 5.22 661.65 2.58 0.00 0.00 191526.12 4021.53 168725.41 00:12:49.697 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x200 length 0x200 00:12:49.697 Malloc2p2 : 5.25 560.68 2.19 0.00 0.00 225621.88 5510.98 170631.91 00:12:49.697 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x200 00:12:49.697 Malloc2p3 : 5.23 661.39 2.58 0.00 0.00 191211.85 5004.57 164912.41 00:12:49.697 Job: 
Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x200 length 0x200 00:12:49.697 Malloc2p3 : 5.26 560.07 2.19 0.00 0.00 225277.75 4021.53 168725.41 00:12:49.697 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x200 00:12:49.697 Malloc2p4 : 5.23 661.14 2.58 0.00 0.00 190855.14 3589.59 163959.16 00:12:49.697 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x200 length 0x200 00:12:49.697 Malloc2p4 : 5.26 559.89 2.19 0.00 0.00 224835.15 2993.80 167772.16 00:12:49.697 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x200 00:12:49.697 Malloc2p5 : 5.23 660.88 2.58 0.00 0.00 190559.01 2666.12 163005.91 00:12:49.697 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x200 length 0x200 00:12:49.697 Malloc2p5 : 5.26 559.71 2.19 0.00 0.00 224468.02 3083.17 166818.91 00:12:49.697 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x200 00:12:49.697 Malloc2p6 : 5.23 660.64 2.58 0.00 0.00 190305.35 2740.60 162052.65 00:12:49.697 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x200 length 0x200 00:12:49.697 Malloc2p6 : 5.26 559.52 2.19 0.00 0.00 224093.89 2442.71 165865.66 00:12:49.697 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x200 00:12:49.697 Malloc2p7 : 5.23 660.38 2.58 0.00 0.00 190042.33 3157.64 161099.40 00:12:49.697 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x200 length 0x200 00:12:49.697 Malloc2p7 : 5.26 559.34 2.18 0.00 0.00 223775.85 3023.59 166818.91 00:12:49.697 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x1000 00:12:49.697 TestPT : 5.25 658.27 2.57 0.00 0.00 190098.67 6881.28 159192.90 00:12:49.697 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x1000 length 0x1000 00:12:49.697 TestPT : 5.27 558.30 2.18 0.00 0.00 223789.76 6970.65 167772.16 00:12:49.697 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x2000 00:12:49.697 raid0 : 5.24 659.78 2.58 0.00 0.00 189465.80 2383.13 155379.90 00:12:49.697 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x2000 length 0x2000 00:12:49.697 raid0 : 5.27 558.93 2.18 0.00 0.00 223118.76 3783.21 160146.15 00:12:49.697 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x2000 00:12:49.697 concat0 : 5.24 659.04 2.57 0.00 0.00 189336.42 3321.48 152520.15 00:12:49.697 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x2000 length 0x2000 00:12:49.697 concat0 : 5.27 558.75 2.18 0.00 0.00 222691.44 3813.00 155379.90 00:12:49.697 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x1000 
00:12:49.697 raid1 : 5.25 658.63 2.57 0.00 0.00 189037.15 4885.41 146800.64 00:12:49.697 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x1000 length 0x1000 00:12:49.697 raid1 : 5.27 558.56 2.18 0.00 0.00 222285.19 3217.22 152520.15 00:12:49.697 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x0 length 0x4e2 00:12:49.697 AIO0 : 5.25 658.12 2.57 0.00 0.00 188303.28 3038.49 143940.89 00:12:49.697 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:49.697 Verification LBA range: start 0x4e2 length 0x4e2 00:12:49.697 AIO0 : 5.27 558.21 2.18 0.00 0.00 221484.23 2293.76 170631.91 00:12:49.697 =================================================================================================================== 00:12:49.697 Total : 20641.62 80.63 0.00 0.00 195435.38 476.63 335544.32 00:12:50.263 00:12:50.263 real 0m6.721s 00:12:50.263 user 0m11.574s 00:12:50.263 sys 0m0.652s 00:12:50.263 11:54:48 blockdev_general.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:50.263 ************************************ 00:12:50.263 END TEST bdev_verify 00:12:50.263 ************************************ 00:12:50.264 11:54:48 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:12:50.264 11:54:48 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:50.264 11:54:48 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:12:50.264 11:54:48 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:50.264 11:54:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:50.264 ************************************ 00:12:50.264 START TEST bdev_verify_big_io 00:12:50.264 ************************************ 00:12:50.264 11:54:48 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:50.264 [2024-07-21 11:54:49.021760] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:12:50.264 [2024-07-21 11:54:49.022027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129741 ] 00:12:50.521 [2024-07-21 11:54:49.193094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:50.521 [2024-07-21 11:54:49.280569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.521 [2024-07-21 11:54:49.280572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.780 [2024-07-21 11:54:49.458485] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:50.780 [2024-07-21 11:54:49.458711] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:50.780 [2024-07-21 11:54:49.466428] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:50.780 [2024-07-21 11:54:49.466509] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:50.780 [2024-07-21 11:54:49.474496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:50.780 [2024-07-21 11:54:49.474614] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:50.780 [2024-07-21 11:54:49.474686] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:50.780 [2024-07-21 11:54:49.586380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:50.780 [2024-07-21 11:54:49.586529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.780 [2024-07-21 11:54:49.586657] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:50.780 [2024-07-21 11:54:49.586714] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.780 [2024-07-21 11:54:49.589844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.780 [2024-07-21 11:54:49.589905] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:51.039 [2024-07-21 11:54:49.793758] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.795187] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.797173] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.799199] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.800504] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.802445] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.803816] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.805764] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.807142] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.809178] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.810457] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.812504] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.813803] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.815880] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.817864] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.819211] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:51.039 [2024-07-21 11:54:49.854103] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:51.039 [2024-07-21 11:54:49.857168] bdevperf.c:1818:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:51.039 Running I/O for 5 seconds... 00:12:57.598 00:12:57.598 Latency(us) 00:12:57.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.598 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:57.598 Verification LBA range: start 0x0 length 0x100 00:12:57.598 Malloc0 : 5.57 344.93 21.56 0.00 0.00 367142.82 696.32 1334551.27 00:12:57.599 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x100 length 0x100 00:12:57.599 Malloc0 : 5.62 273.52 17.10 0.00 0.00 460601.71 770.79 1532827.46 00:12:57.599 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x80 00:12:57.599 Malloc1p0 : 5.68 191.49 11.97 0.00 0.00 641388.11 2889.54 1258291.20 00:12:57.599 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x80 length 0x80 00:12:57.599 Malloc1p0 : 6.18 51.75 3.23 0.00 0.00 2293207.13 1921.40 3629979.46 00:12:57.599 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x80 00:12:57.599 Malloc1p1 : 5.84 65.79 4.11 0.00 0.00 1825891.11 1712.87 2714858.59 00:12:57.599 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x80 length 0x80 00:12:57.599 Malloc1p1 : 6.18 51.74 3.23 0.00 0.00 2237454.02 1496.90 3507963.35 00:12:57.599 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x20 00:12:57.599 Malloc2p0 : 5.63 51.19 3.20 0.00 0.00 588118.41 685.15 907494.87 00:12:57.599 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x20 length 0x20 00:12:57.599 Malloc2p0 : 5.77 38.83 2.43 0.00 0.00 748720.90 692.60 1265917.21 00:12:57.599 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x20 00:12:57.599 Malloc2p1 : 5.63 51.18 3.20 0.00 0.00 585393.53 808.03 892242.85 00:12:57.599 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x20 length 0x20 00:12:57.599 Malloc2p1 : 5.85 41.05 2.57 0.00 0.00 709355.98 688.87 1243039.19 00:12:57.599 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x20 00:12:57.599 Malloc2p2 : 5.63 51.16 3.20 0.00 0.00 582848.51 625.57 880803.84 00:12:57.599 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x20 length 0x20 00:12:57.599 Malloc2p2 : 5.85 41.04 2.57 0.00 0.00 704512.94 651.64 1227787.17 00:12:57.599 Job: Malloc2p3 (Core Mask 0x1, workload: verify, 
depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x20 00:12:57.599 Malloc2p3 : 5.63 51.15 3.20 0.00 0.00 580366.19 603.23 865551.83 00:12:57.599 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x20 length 0x20 00:12:57.599 Malloc2p3 : 5.85 41.03 2.56 0.00 0.00 700453.32 714.94 1204909.15 00:12:57.599 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x20 00:12:57.599 Malloc2p4 : 5.63 51.14 3.20 0.00 0.00 577817.28 610.68 850299.81 00:12:57.599 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x20 length 0x20 00:12:57.599 Malloc2p4 : 5.85 41.02 2.56 0.00 0.00 696161.85 752.17 1189657.13 00:12:57.599 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x20 00:12:57.599 Malloc2p5 : 5.63 51.13 3.20 0.00 0.00 575357.43 618.12 838860.80 00:12:57.599 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x20 length 0x20 00:12:57.599 Malloc2p5 : 5.85 41.01 2.56 0.00 0.00 691222.47 711.21 1159153.11 00:12:57.599 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x20 00:12:57.599 Malloc2p6 : 5.63 51.11 3.19 0.00 0.00 573203.23 633.02 827421.79 00:12:57.599 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x20 length 0x20 00:12:57.599 Malloc2p6 : 5.85 41.00 2.56 0.00 0.00 686119.42 830.37 1143901.09 00:12:57.599 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x20 00:12:57.599 Malloc2p7 : 5.64 51.10 3.19 0.00 0.00 570417.90 625.57 812169.77 00:12:57.599 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x20 length 0x20 00:12:57.599 Malloc2p7 : 5.86 40.99 2.56 0.00 0.00 681827.86 789.41 1121023.07 00:12:57.599 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x100 00:12:57.599 TestPT : 5.84 63.72 3.98 0.00 0.00 1787387.74 71017.19 2303054.20 00:12:57.599 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x100 length 0x100 00:12:57.599 TestPT : 6.03 55.75 3.48 0.00 0.00 1970717.17 64344.44 2974142.84 00:12:57.599 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x200 00:12:57.599 raid0 : 5.87 68.15 4.26 0.00 0.00 1643293.85 1504.35 2455574.34 00:12:57.599 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x200 length 0x200 00:12:57.599 raid0 : 6.03 64.67 4.04 0.00 0.00 1668968.56 1660.74 3126662.98 00:12:57.599 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x200 00:12:57.599 concat0 : 5.94 72.76 4.55 0.00 0.00 1511392.70 1333.06 2364062.25 00:12:57.599 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x200 length 0x200 00:12:57.599 concat0 : 6.16 83.10 5.19 0.00 0.00 1275558.64 1839.48 
3004646.87 00:12:57.599 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x100 00:12:57.599 raid1 : 5.87 81.74 5.11 0.00 0.00 1338398.74 2085.24 2272550.17 00:12:57.599 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x100 length 0x100 00:12:57.599 raid1 : 6.19 90.49 5.66 0.00 0.00 1144941.82 2144.81 2897882.76 00:12:57.599 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x0 length 0x4e 00:12:57.599 AIO0 : 5.94 98.62 6.16 0.00 0.00 669142.38 1050.07 1311673.25 00:12:57.599 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:57.599 Verification LBA range: start 0x4e length 0x4e 00:12:57.599 AIO0 : 6.25 98.20 6.14 0.00 0.00 631822.07 1593.72 1708225.63 00:12:57.599 =================================================================================================================== 00:12:57.599 Total : 2491.55 155.72 0.00 0.00 894799.73 603.23 3629979.46 00:12:58.166 00:12:58.166 real 0m7.765s 00:12:58.166 user 0m14.066s 00:12:58.166 sys 0m0.614s 00:12:58.166 ************************************ 00:12:58.166 END TEST bdev_verify_big_io 00:12:58.166 ************************************ 00:12:58.166 11:54:56 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:58.166 11:54:56 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.166 11:54:56 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:58.166 11:54:56 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:12:58.166 11:54:56 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:58.166 11:54:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:12:58.166 ************************************ 00:12:58.166 START TEST bdev_write_zeroes 00:12:58.166 ************************************ 00:12:58.166 11:54:56 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:58.166 [2024-07-21 11:54:56.839299] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:12:58.166 [2024-07-21 11:54:56.839544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129861 ] 00:12:58.166 [2024-07-21 11:54:57.002616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.425 [2024-07-21 11:54:57.087025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.425 [2024-07-21 11:54:57.258945] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:58.425 [2024-07-21 11:54:57.259106] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:58.425 [2024-07-21 11:54:57.266876] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:58.425 [2024-07-21 11:54:57.266931] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:58.425 [2024-07-21 11:54:57.274922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:58.425 [2024-07-21 11:54:57.275011] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:58.425 [2024-07-21 11:54:57.275064] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:58.683 [2024-07-21 11:54:57.380090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:58.683 [2024-07-21 11:54:57.380252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.683 [2024-07-21 11:54:57.380301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:12:58.683 [2024-07-21 11:54:57.380334] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.683 [2024-07-21 11:54:57.382850] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.683 [2024-07-21 11:54:57.382916] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:58.940 Running I/O for 1 seconds... 
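The write_zeroes results below come from the same bdev.json device set as the earlier runs, exercised for one second per job. To see up front which bdevs would accept that workload, the same jq pattern used earlier to pick unmap-capable bdevs for the trim job file applies; a minimal sketch, assuming an SPDK target is up on the default RPC socket:

# Mirror of the supported_io_types.unmap filter used above, keyed on write_zeroes instead.
./scripts/rpc.py bdev_get_bdevs \
  | jq -r '.[] | select(.supported_io_types.write_zeroes == true) | .name'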
00:12:59.876 00:12:59.876 Latency(us) 00:12:59.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:59.876 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc0 : 1.04 5786.01 22.60 0.00 0.00 22106.68 748.45 40274.85 00:12:59.876 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc1p0 : 1.04 5779.26 22.58 0.00 0.00 22093.11 934.63 39321.60 00:12:59.876 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc1p1 : 1.04 5772.48 22.55 0.00 0.00 22076.60 938.36 38368.35 00:12:59.876 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc2p0 : 1.04 5766.57 22.53 0.00 0.00 22052.32 975.59 37415.10 00:12:59.876 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc2p1 : 1.04 5760.54 22.50 0.00 0.00 22025.97 938.36 36461.85 00:12:59.876 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc2p2 : 1.05 5754.39 22.48 0.00 0.00 22013.28 927.19 35508.60 00:12:59.876 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc2p3 : 1.05 5748.24 22.45 0.00 0.00 21985.05 968.15 34555.35 00:12:59.876 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc2p4 : 1.05 5742.47 22.43 0.00 0.00 21963.09 930.91 33602.09 00:12:59.876 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc2p5 : 1.05 5736.08 22.41 0.00 0.00 21946.51 1005.38 32410.53 00:12:59.876 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc2p6 : 1.05 5729.84 22.38 0.00 0.00 21925.19 930.91 31457.28 00:12:59.876 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 Malloc2p7 : 1.05 5724.10 22.36 0.00 0.00 21902.14 975.59 30384.87 00:12:59.876 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 TestPT : 1.05 5718.10 22.34 0.00 0.00 21875.84 942.08 29431.62 00:12:59.876 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 raid0 : 1.05 5711.00 22.31 0.00 0.00 21849.66 1690.53 27644.28 00:12:59.876 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 concat0 : 1.05 5703.94 22.28 0.00 0.00 21795.36 1720.32 25976.09 00:12:59.876 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 raid1 : 1.06 5695.79 22.25 0.00 0.00 21743.09 2487.39 25737.77 00:12:59.876 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:59.876 AIO0 : 1.06 5787.86 22.61 0.00 0.00 21298.21 443.11 26095.24 00:12:59.876 =================================================================================================================== 00:12:59.876 Total : 91916.68 359.05 0.00 0.00 21915.03 443.11 40274.85 00:13:00.457 00:13:00.457 real 0m2.433s 00:13:00.457 user 0m1.795s 00:13:00.457 sys 0m0.444s 00:13:00.457 11:54:59 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.457 11:54:59 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:13:00.457 ************************************ 00:13:00.457 END TEST bdev_write_zeroes 00:13:00.457 ************************************ 00:13:00.457 11:54:59 blockdev_general 
-- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.457 11:54:59 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:00.457 11:54:59 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.457 11:54:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:00.457 ************************************ 00:13:00.457 START TEST bdev_json_nonenclosed 00:13:00.457 ************************************ 00:13:00.457 11:54:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.714 [2024-07-21 11:54:59.335010] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:00.714 [2024-07-21 11:54:59.335262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129911 ] 00:13:00.714 [2024-07-21 11:54:59.503809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.972 [2024-07-21 11:54:59.591658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.972 [2024-07-21 11:54:59.591854] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:00.972 [2024-07-21 11:54:59.591910] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:00.972 [2024-07-21 11:54:59.591943] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:00.972 00:13:00.972 real 0m0.473s 00:13:00.972 user 0m0.243s 00:13:00.972 sys 0m0.130s 00:13:00.972 11:54:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:00.972 11:54:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:13:00.972 ************************************ 00:13:00.972 END TEST bdev_json_nonenclosed 00:13:00.972 ************************************ 00:13:00.972 11:54:59 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.972 11:54:59 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:13:00.972 11:54:59 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:00.972 11:54:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:00.972 ************************************ 00:13:00.972 START TEST bdev_json_nonarray 00:13:00.972 ************************************ 00:13:00.972 11:54:59 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:01.229 [2024-07-21 11:54:59.859866] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:01.229 [2024-07-21 11:54:59.860104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129949 ] 00:13:01.229 [2024-07-21 11:55:00.022655] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.498 [2024-07-21 11:55:00.113905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.498 [2024-07-21 11:55:00.114091] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:01.498 [2024-07-21 11:55:00.114140] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:01.498 [2024-07-21 11:55:00.114182] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.498 00:13:01.498 real 0m0.436s 00:13:01.498 user 0m0.212s 00:13:01.498 sys 0m0.124s 00:13:01.498 11:55:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:01.498 11:55:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:01.498 ************************************ 00:13:01.498 END TEST bdev_json_nonarray 00:13:01.498 ************************************ 00:13:01.498 11:55:00 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:13:01.498 11:55:00 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:13:01.498 11:55:00 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:01.498 11:55:00 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:01.498 11:55:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:01.498 ************************************ 00:13:01.498 START TEST bdev_qos 00:13:01.498 ************************************ 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- common/autotest_common.sh@1121 -- # qos_test_suite '' 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=129971 00:13:01.498 Process qos testing pid: 129971 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 129971' 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 129971 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- common/autotest_common.sh@827 -- # '[' -z 129971 ']' 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
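Unlike the earlier runs, where bdevperf executed its workload immediately from the --json config, the QoS suite starts bdevperf with -z, so the app comes up idle, listens on /var/tmp/spdk.sock, and waits to be driven over RPC; the trace that follows creates the test bdevs and releases the queued randread job through that socket. A condensed sketch of the sequence (the real logic lives in the qos_test_suite function exercised here, and the socket wait is handled by the harness's waitforlisten):

# Start bdevperf in wait mode: no I/O runs until perform_tests is requested over RPC.
./build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 &
# Create the devices under test: 128 MiB of 512-byte blocks each (262144 blocks).
./scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
./scripts/rpc.py bdev_null_create Null_1 128 512
# Release the queued workload.
./examples/bdev/bdevperf/bdevperf.py perform_tests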
00:13:01.498 11:55:00 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:01.498 11:55:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:01.498 [2024-07-21 11:55:00.356111] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:01.498 [2024-07-21 11:55:00.357107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129971 ] 00:13:01.756 [2024-07-21 11:55:00.534164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.013 [2024-07-21 11:55:00.632203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@860 -- # return 0 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:02.578 Malloc_0 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_0 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.578 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:02.578 [ 00:13:02.578 { 00:13:02.578 "name": "Malloc_0", 00:13:02.578 "aliases": [ 00:13:02.578 "bf390cdc-2b7a-4f36-bae8-48d23e9acabf" 00:13:02.578 ], 00:13:02.578 "product_name": "Malloc disk", 00:13:02.578 "block_size": 512, 00:13:02.578 "num_blocks": 262144, 00:13:02.578 "uuid": "bf390cdc-2b7a-4f36-bae8-48d23e9acabf", 00:13:02.578 "assigned_rate_limits": { 00:13:02.578 "rw_ios_per_sec": 0, 00:13:02.578 "rw_mbytes_per_sec": 0, 00:13:02.578 "r_mbytes_per_sec": 0, 00:13:02.578 "w_mbytes_per_sec": 0 00:13:02.578 }, 00:13:02.578 "claimed": false, 00:13:02.578 "zoned": false, 00:13:02.578 "supported_io_types": { 00:13:02.578 "read": true, 00:13:02.578 "write": true, 00:13:02.578 "unmap": true, 00:13:02.579 "write_zeroes": true, 00:13:02.579 "flush": true, 
00:13:02.579 "reset": true, 00:13:02.579 "compare": false, 00:13:02.579 "compare_and_write": false, 00:13:02.579 "abort": true, 00:13:02.579 "nvme_admin": false, 00:13:02.579 "nvme_io": false 00:13:02.579 }, 00:13:02.579 "memory_domains": [ 00:13:02.579 { 00:13:02.579 "dma_device_id": "system", 00:13:02.579 "dma_device_type": 1 00:13:02.579 }, 00:13:02.579 { 00:13:02.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.579 "dma_device_type": 2 00:13:02.579 } 00:13:02.579 ], 00:13:02.579 "driver_specific": {} 00:13:02.579 } 00:13:02.579 ] 00:13:02.579 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.579 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:13:02.579 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:02.579 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.579 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:02.838 Null_1 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@895 -- # local bdev_name=Null_1 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local i 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:02.838 [ 00:13:02.838 { 00:13:02.838 "name": "Null_1", 00:13:02.838 "aliases": [ 00:13:02.838 "e7826f9c-9860-46bd-a038-08885bc1f540" 00:13:02.838 ], 00:13:02.838 "product_name": "Null disk", 00:13:02.838 "block_size": 512, 00:13:02.838 "num_blocks": 262144, 00:13:02.838 "uuid": "e7826f9c-9860-46bd-a038-08885bc1f540", 00:13:02.838 "assigned_rate_limits": { 00:13:02.838 "rw_ios_per_sec": 0, 00:13:02.838 "rw_mbytes_per_sec": 0, 00:13:02.838 "r_mbytes_per_sec": 0, 00:13:02.838 "w_mbytes_per_sec": 0 00:13:02.838 }, 00:13:02.838 "claimed": false, 00:13:02.838 "zoned": false, 00:13:02.838 "supported_io_types": { 00:13:02.838 "read": true, 00:13:02.838 "write": true, 00:13:02.838 "unmap": false, 00:13:02.838 "write_zeroes": true, 00:13:02.838 "flush": false, 00:13:02.838 "reset": true, 00:13:02.838 "compare": false, 00:13:02.838 "compare_and_write": false, 00:13:02.838 "abort": true, 00:13:02.838 "nvme_admin": false, 00:13:02.838 "nvme_io": false 00:13:02.838 }, 00:13:02.838 "driver_specific": {} 00:13:02.838 } 00:13:02.838 ] 
00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # return 0 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:13:02.838 11:55:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:13:02.838 Running I/O for 60 seconds... 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 72455.20 289820.81 0.00 0.00 292864.00 0.00 0.00 ' 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=72455.20 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 72455 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=72455 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=18000 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 18000 -gt 1000 ']' 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 18000 Malloc_0 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 18000 IOPS Malloc_0 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:08.114 11:55:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:08.114 ************************************ 00:13:08.114 START TEST bdev_qos_iops 00:13:08.114 ************************************ 00:13:08.114 11:55:06 
blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1121 -- # run_qos_test 18000 IOPS Malloc_0 00:13:08.114 11:55:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=18000 00:13:08.114 11:55:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:13:08.114 11:55:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:13:08.114 11:55:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:13:08.114 11:55:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:13:08.114 11:55:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:08.114 11:55:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:08.114 11:55:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:13:08.114 11:55:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 18023.25 72093.01 0.00 0.00 73296.00 0.00 0.00 ' 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=18023.25 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 18023 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=18023 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=16200 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=19800 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 18023 -lt 16200 ']' 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 18023 -gt 19800 ']' 00:13:13.388 00:13:13.388 real 0m5.220s 00:13:13.388 user 0m0.127s 00:13:13.388 sys 0m0.021s 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.388 ************************************ 00:13:13.388 END TEST bdev_qos_iops 00:13:13.388 ************************************ 00:13:13.388 11:55:11 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:13:13.388 11:55:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:13:13.388 11:55:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:13:13.388 11:55:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:13:13.388 11:55:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:13.388 11:55:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:13.388 11:55:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:13:13.388 11:55:11 blockdev_general.bdev_qos -- 
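The IOPS check that follows compares the throttled iostat.py reading against the 18000 IOPS cap with a ±10% tolerance. A simplified sketch of that comparison, assuming iostat.py prints one Malloc_0 line per interval with IOPS in the second column (as in the traces above); variable names are illustrative:

  qos_limit=18000
  measured=$(./scripts/iostat.py -d -i 1 -t 5 | awk '/Malloc_0/ {v=int($2)} END {print v}')
  lower=$((qos_limit * 90 / 100))    # 16200
  upper=$((qos_limit * 110 / 100))   # 19800
  [ "$measured" -ge "$lower" ] && [ "$measured" -le "$upper" ] || echo "IOPS $measured outside [$lower,$upper]"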
bdev/blockdev.sh@378 -- # tail -1 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 27331.86 109327.42 0.00 0.00 111616.00 0.00 0.00 ' 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=111616.00 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 111616 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=111616 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=10 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 10 -lt 2 ']' 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:18.654 11:55:17 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:18.654 ************************************ 00:13:18.654 START TEST bdev_qos_bw 00:13:18.654 ************************************ 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1121 -- # run_qos_test 10 BANDWIDTH Null_1 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=10 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:13:18.654 11:55:17 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 2565.23 10260.93 0.00 0.00 10504.00 0.00 0.00 ' 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- 
bdev/blockdev.sh@382 -- # awk '{print $6}' 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=10504.00 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 10504 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=10504 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=10240 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=9216 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=11264 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 10504 -lt 9216 ']' 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 10504 -gt 11264 ']' 00:13:23.921 00:13:23.921 real 0m5.254s 00:13:23.921 user 0m0.099s 00:13:23.921 sys 0m0.050s 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:23.921 ************************************ 00:13:23.921 END TEST bdev_qos_bw 00:13:23.921 ************************************ 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:13:23.921 11:55:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:23.921 11:55:22 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:23.921 11:55:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:23.921 11:55:22 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:23.921 11:55:22 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:23.921 11:55:22 blockdev_general.bdev_qos -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:23.921 11:55:22 blockdev_general.bdev_qos -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:23.921 11:55:22 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:23.921 ************************************ 00:13:23.921 START TEST bdev_qos_ro_bw 00:13:23.921 ************************************ 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1121 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 
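For reference, the three QoS flavours exercised by this suite can also be applied directly with rpc.py; the values below are the ones used in this run, and a previously set limit can be cleared by passing 0 (a sketch, not part of the test itself):

  ./scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 18000 Malloc_0   # aggregate IOPS cap
  ./scripts/rpc.py bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1     # aggregate bandwidth cap
  ./scripts/rpc.py bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0     # read-only bandwidth cap
  ./scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 0 Malloc_0       # 0 removes the IOPS limit again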
00:13:23.921 11:55:22 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:13:29.186 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.77 2047.08 0.00 0.00 2068.00 0.00 0.00 ' 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2068.00 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2068 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2068 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -lt 1843 ']' 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2068 -gt 2252 ']' 00:13:29.187 00:13:29.187 real 0m5.170s 00:13:29.187 user 0m0.112s 00:13:29.187 sys 0m0.030s 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:29.187 11:55:27 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:13:29.187 ************************************ 00:13:29.187 END TEST bdev_qos_ro_bw 00:13:29.187 ************************************ 00:13:29.187 11:55:27 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:29.187 11:55:27 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.187 11:55:27 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:29.753 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:29.753 11:55:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:13:29.753 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:29.754 00:13:29.754 Latency(us) 00:13:29.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.754 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:29.754 Malloc_0 : 26.72 24438.67 95.46 0.00 0.00 10378.20 2278.87 503316.48 00:13:29.754 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:29.754 Null_1 : 26.85 25501.19 99.61 0.00 0.00 10017.54 822.92 126782.37 00:13:29.754 =================================================================================================================== 00:13:29.754 Total : 49939.86 195.08 0.00 0.00 10193.60 822.92 503316.48 00:13:29.754 0 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
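The 1843/2252 bounds above come from the 2 MB/s read-only cap converted to the kilobyte-per-second units iostat.py reports (×1024) and widened by ±10% with integer arithmetic; a worked sketch:

  qos_limit_mb=2
  qos_limit=$((qos_limit_mb * 1024))   # 2048, the value the measurement is compared against
  lower=$((qos_limit * 90 / 100))      # 1843
  upper=$((qos_limit * 110 / 100))     # 2252
  # the measured 2068 falls inside [1843, 2252], so bdev_qos_ro_bw passes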
00:13:29.754 11:55:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 129971 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@946 -- # '[' -z 129971 ']' 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@950 -- # kill -0 129971 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # uname 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 129971 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # echo 'killing process with pid 129971' 00:13:29.754 killing process with pid 129971 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@965 -- # kill 129971 00:13:29.754 Received shutdown signal, test time was about 26.883245 seconds 00:13:29.754 00:13:29.754 Latency(us) 00:13:29.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.754 =================================================================================================================== 00:13:29.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:29.754 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@970 -- # wait 129971 00:13:30.012 11:55:28 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:13:30.012 00:13:30.012 real 0m28.435s 00:13:30.012 user 0m29.286s 00:13:30.012 sys 0m0.623s 00:13:30.012 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:30.012 11:55:28 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:13:30.012 ************************************ 00:13:30.012 END TEST bdev_qos 00:13:30.012 ************************************ 00:13:30.012 11:55:28 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:30.012 11:55:28 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:30.012 11:55:28 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:30.012 11:55:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:30.012 ************************************ 00:13:30.013 START TEST bdev_qd_sampling 00:13:30.013 ************************************ 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1121 -- # qd_sampling_test_suite '' 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=130436 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 130436' 00:13:30.013 Process bdev QD sampling period testing pid: 130436 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:30.013 11:55:28 
blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 130436 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@827 -- # '[' -z 130436 ']' 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:30.013 11:55:28 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:30.013 [2024-07-21 11:55:28.855201] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:30.013 [2024-07-21 11:55:28.855668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130436 ] 00:13:30.282 [2024-07-21 11:55:29.029778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:30.282 [2024-07-21 11:55:29.118728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:30.282 [2024-07-21 11:55:29.118734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@860 -- # return 0 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:31.233 Malloc_QD 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_QD 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local i 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd 
bdev_get_bdevs -b Malloc_QD -t 2000 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:31.233 [ 00:13:31.233 { 00:13:31.233 "name": "Malloc_QD", 00:13:31.233 "aliases": [ 00:13:31.233 "5977362e-93eb-4598-bba3-00e40fb1896e" 00:13:31.233 ], 00:13:31.233 "product_name": "Malloc disk", 00:13:31.233 "block_size": 512, 00:13:31.233 "num_blocks": 262144, 00:13:31.233 "uuid": "5977362e-93eb-4598-bba3-00e40fb1896e", 00:13:31.233 "assigned_rate_limits": { 00:13:31.233 "rw_ios_per_sec": 0, 00:13:31.233 "rw_mbytes_per_sec": 0, 00:13:31.233 "r_mbytes_per_sec": 0, 00:13:31.233 "w_mbytes_per_sec": 0 00:13:31.233 }, 00:13:31.233 "claimed": false, 00:13:31.233 "zoned": false, 00:13:31.233 "supported_io_types": { 00:13:31.233 "read": true, 00:13:31.233 "write": true, 00:13:31.233 "unmap": true, 00:13:31.233 "write_zeroes": true, 00:13:31.233 "flush": true, 00:13:31.233 "reset": true, 00:13:31.233 "compare": false, 00:13:31.233 "compare_and_write": false, 00:13:31.233 "abort": true, 00:13:31.233 "nvme_admin": false, 00:13:31.233 "nvme_io": false 00:13:31.233 }, 00:13:31.233 "memory_domains": [ 00:13:31.233 { 00:13:31.233 "dma_device_id": "system", 00:13:31.233 "dma_device_type": 1 00:13:31.233 }, 00:13:31.233 { 00:13:31.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.233 "dma_device_type": 2 00:13:31.233 } 00:13:31.233 ], 00:13:31.233 "driver_specific": {} 00:13:31.233 } 00:13:31.233 ] 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # return 0 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:13:31.233 11:55:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:31.233 Running I/O for 5 seconds... 
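The queue-depth sampling enabled in the next step can be driven by hand with the same RPCs the test uses; a sketch, assuming the default RPC socket and jq for extracting the field the test asserts on:

  ./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10   # same period value the test passes
  ./scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'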
00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.133 11:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:13:33.133 "tick_rate": 2200000000, 00:13:33.133 "ticks": 1653292227706, 00:13:33.133 "bdevs": [ 00:13:33.133 { 00:13:33.133 "name": "Malloc_QD", 00:13:33.133 "bytes_read": 919638528, 00:13:33.133 "num_read_ops": 224515, 00:13:33.133 "bytes_written": 0, 00:13:33.133 "num_write_ops": 0, 00:13:33.133 "bytes_unmapped": 0, 00:13:33.133 "num_unmap_ops": 0, 00:13:33.133 "bytes_copied": 0, 00:13:33.133 "num_copy_ops": 0, 00:13:33.133 "read_latency_ticks": 2155030511050, 00:13:33.133 "max_read_latency_ticks": 12770554, 00:13:33.133 "min_read_latency_ticks": 410070, 00:13:33.133 "write_latency_ticks": 0, 00:13:33.133 "max_write_latency_ticks": 0, 00:13:33.133 "min_write_latency_ticks": 0, 00:13:33.133 "unmap_latency_ticks": 0, 00:13:33.133 "max_unmap_latency_ticks": 0, 00:13:33.134 "min_unmap_latency_ticks": 0, 00:13:33.134 "copy_latency_ticks": 0, 00:13:33.134 "max_copy_latency_ticks": 0, 00:13:33.134 "min_copy_latency_ticks": 0, 00:13:33.134 "io_error": {}, 00:13:33.134 "queue_depth_polling_period": 10, 00:13:33.134 "queue_depth": 512, 00:13:33.134 "io_time": 30, 00:13:33.134 "weighted_io_time": 15360 00:13:33.134 } 00:13:33.134 ] 00:13:33.134 }' 00:13:33.134 11:55:31 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.391 00:13:33.391 Latency(us) 00:13:33.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.391 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 
256, IO size: 4096) 00:13:33.391 Malloc_QD : 1.99 58363.78 227.98 0.00 0.00 4374.58 1243.69 6374.87 00:13:33.391 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:33.391 Malloc_QD : 1.99 58480.21 228.44 0.00 0.00 4365.81 1109.64 5838.66 00:13:33.391 =================================================================================================================== 00:13:33.391 Total : 116843.99 456.42 0.00 0.00 4370.19 1109.64 6374.87 00:13:33.391 0 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 130436 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@946 -- # '[' -z 130436 ']' 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@950 -- # kill -0 130436 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # uname 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130436 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130436' 00:13:33.391 killing process with pid 130436 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@965 -- # kill 130436 00:13:33.391 Received shutdown signal, test time was about 2.050900 seconds 00:13:33.391 00:13:33.391 Latency(us) 00:13:33.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.391 =================================================================================================================== 00:13:33.391 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.391 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@970 -- # wait 130436 00:13:33.649 11:55:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:13:33.649 00:13:33.649 real 0m3.559s 00:13:33.649 user 0m6.955s 00:13:33.649 sys 0m0.345s 00:13:33.649 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:33.649 11:55:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:13:33.649 ************************************ 00:13:33.649 END TEST bdev_qd_sampling 00:13:33.649 ************************************ 00:13:33.650 11:55:32 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:13:33.650 11:55:32 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:33.650 11:55:32 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:33.650 11:55:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:33.650 ************************************ 00:13:33.650 START TEST bdev_error 00:13:33.650 ************************************ 00:13:33.650 11:55:32 blockdev_general.bdev_error -- common/autotest_common.sh@1121 -- # error_test_suite '' 00:13:33.650 11:55:32 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:13:33.650 
11:55:32 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:13:33.650 11:55:32 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:13:33.650 11:55:32 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=130516 00:13:33.650 11:55:32 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 130516' 00:13:33.650 Process error testing pid: 130516 00:13:33.650 11:55:32 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 130516 00:13:33.650 11:55:32 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 130516 ']' 00:13:33.650 11:55:32 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.650 11:55:32 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:33.650 11:55:32 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.650 11:55:32 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:33.650 11:55:32 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:33.650 11:55:32 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:33.650 [2024-07-21 11:55:32.474254] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:33.650 [2024-07-21 11:55:32.475053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130516 ] 00:13:33.907 [2024-07-21 11:55:32.635115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.908 [2024-07-21 11:55:32.714752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:13:34.841 11:55:33 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:34.841 Dev_1 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.841 11:55:33 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:34.841 11:55:33 
blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:34.841 [ 00:13:34.841 { 00:13:34.841 "name": "Dev_1", 00:13:34.841 "aliases": [ 00:13:34.841 "5c2b7ca3-a046-447f-9afd-7e0ed62eb75c" 00:13:34.841 ], 00:13:34.841 "product_name": "Malloc disk", 00:13:34.841 "block_size": 512, 00:13:34.841 "num_blocks": 262144, 00:13:34.841 "uuid": "5c2b7ca3-a046-447f-9afd-7e0ed62eb75c", 00:13:34.841 "assigned_rate_limits": { 00:13:34.841 "rw_ios_per_sec": 0, 00:13:34.841 "rw_mbytes_per_sec": 0, 00:13:34.841 "r_mbytes_per_sec": 0, 00:13:34.841 "w_mbytes_per_sec": 0 00:13:34.841 }, 00:13:34.841 "claimed": false, 00:13:34.841 "zoned": false, 00:13:34.841 "supported_io_types": { 00:13:34.841 "read": true, 00:13:34.841 "write": true, 00:13:34.841 "unmap": true, 00:13:34.841 "write_zeroes": true, 00:13:34.841 "flush": true, 00:13:34.841 "reset": true, 00:13:34.841 "compare": false, 00:13:34.841 "compare_and_write": false, 00:13:34.841 "abort": true, 00:13:34.841 "nvme_admin": false, 00:13:34.841 "nvme_io": false 00:13:34.841 }, 00:13:34.841 "memory_domains": [ 00:13:34.841 { 00:13:34.841 "dma_device_id": "system", 00:13:34.841 "dma_device_type": 1 00:13:34.841 }, 00:13:34.841 { 00:13:34.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.841 "dma_device_type": 2 00:13:34.841 } 00:13:34.841 ], 00:13:34.841 "driver_specific": {} 00:13:34.841 } 00:13:34.841 ] 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:13:34.841 11:55:33 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:34.841 true 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.841 11:55:33 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:34.841 Dev_2 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.841 11:55:33 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:34.841 11:55:33 blockdev_general.bdev_error -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.841 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:34.841 [ 00:13:34.841 { 00:13:34.841 "name": "Dev_2", 00:13:34.841 "aliases": [ 00:13:34.841 "e6db89ab-cd62-4254-95ee-b22a2f88fc9f" 00:13:34.841 ], 00:13:34.841 "product_name": "Malloc disk", 00:13:34.841 "block_size": 512, 00:13:34.841 "num_blocks": 262144, 00:13:34.841 "uuid": "e6db89ab-cd62-4254-95ee-b22a2f88fc9f", 00:13:34.841 "assigned_rate_limits": { 00:13:34.841 "rw_ios_per_sec": 0, 00:13:34.841 "rw_mbytes_per_sec": 0, 00:13:34.841 "r_mbytes_per_sec": 0, 00:13:34.841 "w_mbytes_per_sec": 0 00:13:34.841 }, 00:13:34.841 "claimed": false, 00:13:34.841 "zoned": false, 00:13:34.841 "supported_io_types": { 00:13:34.841 "read": true, 00:13:34.841 "write": true, 00:13:34.841 "unmap": true, 00:13:34.841 "write_zeroes": true, 00:13:34.841 "flush": true, 00:13:34.841 "reset": true, 00:13:34.841 "compare": false, 00:13:34.841 "compare_and_write": false, 00:13:34.841 "abort": true, 00:13:34.841 "nvme_admin": false, 00:13:34.842 "nvme_io": false 00:13:34.842 }, 00:13:34.842 "memory_domains": [ 00:13:34.842 { 00:13:34.842 "dma_device_id": "system", 00:13:34.842 "dma_device_type": 1 00:13:34.842 }, 00:13:34.842 { 00:13:34.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.842 "dma_device_type": 2 00:13:34.842 } 00:13:34.842 ], 00:13:34.842 "driver_specific": {} 00:13:34.842 } 00:13:34.842 ] 00:13:34.842 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.842 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:13:34.842 11:55:33 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:34.842 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.842 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:34.842 11:55:33 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.842 11:55:33 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:13:34.842 11:55:33 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:34.842 Running I/O for 5 seconds... 00:13:35.774 11:55:34 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 130516 00:13:35.774 11:55:34 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. Pid: 130516' 00:13:35.774 Process is existed as continue on error is set. 
Pid: 130516 00:13:35.774 11:55:34 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:35.774 11:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.774 11:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:35.774 11:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.774 11:55:34 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:35.774 11:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.774 11:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:36.033 11:55:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.033 11:55:34 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:13:36.033 Timeout while waiting for response: 00:13:36.033 00:13:36.033 00:13:40.218 00:13:40.218 Latency(us) 00:13:40.218 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:40.218 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:40.218 EE_Dev_1 : 0.91 41638.77 162.65 5.51 0.00 381.25 175.94 1206.46 00:13:40.218 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:40.218 Dev_2 : 5.00 92418.04 361.01 0.00 0.00 170.23 55.85 24665.37 00:13:40.218 =================================================================================================================== 00:13:40.218 Total : 134056.82 523.66 5.51 0.00 186.17 55.85 24665.37 00:13:40.785 11:55:39 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 130516 00:13:40.786 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@946 -- # '[' -z 130516 ']' 00:13:40.786 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@950 -- # kill -0 130516 00:13:40.786 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # uname 00:13:41.045 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:41.045 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130516 00:13:41.045 killing process with pid 130516 00:13:41.045 Received shutdown signal, test time was about 5.000000 seconds 00:13:41.045 00:13:41.045 Latency(us) 00:13:41.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.045 =================================================================================================================== 00:13:41.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:41.045 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:41.045 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:41.045 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130516' 00:13:41.045 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@965 -- # kill 130516 00:13:41.045 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@970 -- # wait 130516 00:13:41.304 Process error testing pid: 130621 00:13:41.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
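The error path exercised above stacks an error bdev on a malloc bdev and injects failures into its I/O; the same sequence can be issued by hand against a bdevperf instance started with -z (EE_Dev_1 is the error bdev that bdev_error_create derives from Dev_1):

  ./scripts/rpc.py bdev_malloc_create -b Dev_1 128 512
  ./scripts/rpc.py bdev_error_create Dev_1                              # exposes EE_Dev_1 on top of Dev_1
  ./scripts/rpc.py bdev_malloc_create -b Dev_2 128 512
  ./scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5    # fail the next 5 I/Os of any type
  ./examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests               # drive I/O through the error bdev
  ./scripts/rpc.py bdev_error_delete EE_Dev_1                           # tear down, as the test does after the run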
00:13:41.304 11:55:39 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=130621 00:13:41.304 11:55:39 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 130621' 00:13:41.304 11:55:39 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 130621 00:13:41.304 11:55:39 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:41.304 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@827 -- # '[' -z 130621 ']' 00:13:41.304 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.304 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:41.304 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.304 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:41.304 11:55:39 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:41.304 [2024-07-21 11:55:40.028503] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:41.304 [2024-07-21 11:55:40.029075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130621 ] 00:13:41.563 [2024-07-21 11:55:40.195556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.563 [2024-07-21 11:55:40.282595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:42.131 11:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:42.131 11:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@860 -- # return 0 00:13:42.131 11:55:40 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:42.131 11:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.131 11:55:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.391 Dev_1 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_1 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- 
common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.391 [ 00:13:42.391 { 00:13:42.391 "name": "Dev_1", 00:13:42.391 "aliases": [ 00:13:42.391 "50b5d816-14b5-4c3e-b122-a0b4d52217a8" 00:13:42.391 ], 00:13:42.391 "product_name": "Malloc disk", 00:13:42.391 "block_size": 512, 00:13:42.391 "num_blocks": 262144, 00:13:42.391 "uuid": "50b5d816-14b5-4c3e-b122-a0b4d52217a8", 00:13:42.391 "assigned_rate_limits": { 00:13:42.391 "rw_ios_per_sec": 0, 00:13:42.391 "rw_mbytes_per_sec": 0, 00:13:42.391 "r_mbytes_per_sec": 0, 00:13:42.391 "w_mbytes_per_sec": 0 00:13:42.391 }, 00:13:42.391 "claimed": false, 00:13:42.391 "zoned": false, 00:13:42.391 "supported_io_types": { 00:13:42.391 "read": true, 00:13:42.391 "write": true, 00:13:42.391 "unmap": true, 00:13:42.391 "write_zeroes": true, 00:13:42.391 "flush": true, 00:13:42.391 "reset": true, 00:13:42.391 "compare": false, 00:13:42.391 "compare_and_write": false, 00:13:42.391 "abort": true, 00:13:42.391 "nvme_admin": false, 00:13:42.391 "nvme_io": false 00:13:42.391 }, 00:13:42.391 "memory_domains": [ 00:13:42.391 { 00:13:42.391 "dma_device_id": "system", 00:13:42.391 "dma_device_type": 1 00:13:42.391 }, 00:13:42.391 { 00:13:42.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.391 "dma_device_type": 2 00:13:42.391 } 00:13:42.391 ], 00:13:42.391 "driver_specific": {} 00:13:42.391 } 00:13:42.391 ] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:13:42.391 11:55:41 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.391 true 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.391 Dev_2 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@895 -- # local bdev_name=Dev_2 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local i 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.391 11:55:41 blockdev_general.bdev_error -- 
common/autotest_common.sh@10 -- # set +x 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.391 [ 00:13:42.391 { 00:13:42.391 "name": "Dev_2", 00:13:42.391 "aliases": [ 00:13:42.391 "75bc0453-7697-467f-a8f7-0b8107abe556" 00:13:42.391 ], 00:13:42.391 "product_name": "Malloc disk", 00:13:42.391 "block_size": 512, 00:13:42.391 "num_blocks": 262144, 00:13:42.391 "uuid": "75bc0453-7697-467f-a8f7-0b8107abe556", 00:13:42.391 "assigned_rate_limits": { 00:13:42.391 "rw_ios_per_sec": 0, 00:13:42.391 "rw_mbytes_per_sec": 0, 00:13:42.391 "r_mbytes_per_sec": 0, 00:13:42.391 "w_mbytes_per_sec": 0 00:13:42.391 }, 00:13:42.391 "claimed": false, 00:13:42.391 "zoned": false, 00:13:42.391 "supported_io_types": { 00:13:42.391 "read": true, 00:13:42.391 "write": true, 00:13:42.391 "unmap": true, 00:13:42.391 "write_zeroes": true, 00:13:42.391 "flush": true, 00:13:42.391 "reset": true, 00:13:42.391 "compare": false, 00:13:42.391 "compare_and_write": false, 00:13:42.391 "abort": true, 00:13:42.391 "nvme_admin": false, 00:13:42.391 "nvme_io": false 00:13:42.391 }, 00:13:42.391 "memory_domains": [ 00:13:42.391 { 00:13:42.391 "dma_device_id": "system", 00:13:42.391 "dma_device_type": 1 00:13:42.391 }, 00:13:42.391 { 00:13:42.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.391 "dma_device_type": 2 00:13:42.391 } 00:13:42.391 ], 00:13:42.391 "driver_specific": {} 00:13:42.391 } 00:13:42.391 ] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.391 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # return 0 00:13:42.392 11:55:41 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.392 11:55:41 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 130621 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:13:42.392 11:55:41 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 130621 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:42.392 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 130621 00:13:42.392 Running I/O for 5 seconds... 
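This second bdevperf instance is started without -f (continue on error), so perform_tests is expected to fail once EE_Dev_1 starts rejecting I/O; the NOT wrapper above asserts exactly that. A minimal stand-in for that assertion in plain shell (the real helper lives in autotest_common.sh and also records the exit status):

  if ./examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests; then
      echo 'expected perform_tests to fail, but it succeeded' >&2
      exit 1
  fi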
00:13:42.392 task offset: 43896 on job bdev=EE_Dev_1 fails 00:13:42.392 00:13:42.392 Latency(us) 00:13:42.392 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.392 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:42.392 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:42.392 EE_Dev_1 : 0.00 22380.47 87.42 5086.47 0.00 470.75 219.69 860.16 00:13:42.392 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:42.392 Dev_2 : 0.00 16563.15 64.70 0.00 0.00 631.88 161.98 1154.33 00:13:42.392 =================================================================================================================== 00:13:42.392 Total : 38943.61 152.12 5086.47 0.00 558.14 161.98 1154.33 00:13:42.392 [2024-07-21 11:55:41.236719] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:42.392 request: 00:13:42.392 { 00:13:42.392 "method": "perform_tests", 00:13:42.392 "req_id": 1 00:13:42.392 } 00:13:42.392 Got JSON-RPC error response 00:13:42.392 response: 00:13:42.392 { 00:13:42.392 "code": -32603, 00:13:42.392 "message": "bdevperf failed with error Operation not permitted" 00:13:42.392 } 00:13:42.960 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:13:42.960 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:42.960 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:13:42.960 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:13:42.960 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:13:42.960 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:42.960 00:13:42.960 real 0m9.192s 00:13:42.960 user 0m9.476s 00:13:42.960 sys 0m0.733s 00:13:42.960 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:42.960 11:55:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:13:42.960 ************************************ 00:13:42.960 END TEST bdev_error 00:13:42.960 ************************************ 00:13:42.960 11:55:41 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:13:42.960 11:55:41 blockdev_general -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:42.960 11:55:41 blockdev_general -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:42.960 11:55:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:42.960 ************************************ 00:13:42.960 START TEST bdev_stat 00:13:42.960 ************************************ 00:13:42.960 Process Bdev IO statistics testing pid: 130669 00:13:42.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:42.960 11:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@1121 -- # stat_test_suite '' 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=130669 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 130669' 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 130669 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@827 -- # '[' -z 130669 ']' 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:42.960 11:55:41 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:42.960 [2024-07-21 11:55:41.730484] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:42.960 [2024-07-21 11:55:41.731147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130669 ] 00:13:43.219 [2024-07-21 11:55:41.908579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:43.219 [2024-07-21 11:55:42.001444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.219 [2024-07-21 11:55:42.001454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.787 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:43.787 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@860 -- # return 0 00:13:43.787 11:55:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:43.787 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.787 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:44.045 Malloc_STAT 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@895 -- # local bdev_name=Malloc_STAT 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local i 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # 
bdev_timeout=2000 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:44.045 [ 00:13:44.045 { 00:13:44.045 "name": "Malloc_STAT", 00:13:44.045 "aliases": [ 00:13:44.045 "83c145f7-9364-468b-8b79-3bffaa019197" 00:13:44.045 ], 00:13:44.045 "product_name": "Malloc disk", 00:13:44.045 "block_size": 512, 00:13:44.045 "num_blocks": 262144, 00:13:44.045 "uuid": "83c145f7-9364-468b-8b79-3bffaa019197", 00:13:44.045 "assigned_rate_limits": { 00:13:44.045 "rw_ios_per_sec": 0, 00:13:44.045 "rw_mbytes_per_sec": 0, 00:13:44.045 "r_mbytes_per_sec": 0, 00:13:44.045 "w_mbytes_per_sec": 0 00:13:44.045 }, 00:13:44.045 "claimed": false, 00:13:44.045 "zoned": false, 00:13:44.045 "supported_io_types": { 00:13:44.045 "read": true, 00:13:44.045 "write": true, 00:13:44.045 "unmap": true, 00:13:44.045 "write_zeroes": true, 00:13:44.045 "flush": true, 00:13:44.045 "reset": true, 00:13:44.045 "compare": false, 00:13:44.045 "compare_and_write": false, 00:13:44.045 "abort": true, 00:13:44.045 "nvme_admin": false, 00:13:44.045 "nvme_io": false 00:13:44.045 }, 00:13:44.045 "memory_domains": [ 00:13:44.045 { 00:13:44.045 "dma_device_id": "system", 00:13:44.045 "dma_device_type": 1 00:13:44.045 }, 00:13:44.045 { 00:13:44.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.045 "dma_device_type": 2 00:13:44.045 } 00:13:44.045 ], 00:13:44.045 "driver_specific": {} 00:13:44.045 } 00:13:44.045 ] 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # return 0 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:13:44.045 11:55:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:44.045 Running I/O for 10 seconds... 
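The waitforbdev helper traced above reduces to two RPCs: wait for bdev examination to finish, then query the named bdev with a timeout until it appears. A sketch using the same values visible in the trace (2000 ms timeout), assuming the default RPC socket:

  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py bdev_get_bdevs -b Malloc_STAT -t 2000   # returns the bdev JSON once Malloc_STAT exists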
00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:13:45.957 "tick_rate": 2200000000, 00:13:45.957 "ticks": 1681396140068, 00:13:45.957 "bdevs": [ 00:13:45.957 { 00:13:45.957 "name": "Malloc_STAT", 00:13:45.957 "bytes_read": 914395648, 00:13:45.957 "num_read_ops": 223235, 00:13:45.957 "bytes_written": 0, 00:13:45.957 "num_write_ops": 0, 00:13:45.957 "bytes_unmapped": 0, 00:13:45.957 "num_unmap_ops": 0, 00:13:45.957 "bytes_copied": 0, 00:13:45.957 "num_copy_ops": 0, 00:13:45.957 "read_latency_ticks": 2143007055955, 00:13:45.957 "max_read_latency_ticks": 14129926, 00:13:45.957 "min_read_latency_ticks": 519498, 00:13:45.957 "write_latency_ticks": 0, 00:13:45.957 "max_write_latency_ticks": 0, 00:13:45.957 "min_write_latency_ticks": 0, 00:13:45.957 "unmap_latency_ticks": 0, 00:13:45.957 "max_unmap_latency_ticks": 0, 00:13:45.957 "min_unmap_latency_ticks": 0, 00:13:45.957 "copy_latency_ticks": 0, 00:13:45.957 "max_copy_latency_ticks": 0, 00:13:45.957 "min_copy_latency_ticks": 0, 00:13:45.957 "io_error": {} 00:13:45.957 } 00:13:45.957 ] 00:13:45.957 }' 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=223235 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.957 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:13:45.957 "tick_rate": 2200000000, 00:13:45.957 "ticks": 1681539626278, 00:13:45.957 "name": "Malloc_STAT", 00:13:45.957 "channels": [ 00:13:45.957 { 00:13:45.957 "thread_id": 2, 00:13:45.957 "bytes_read": 466616320, 00:13:45.957 "num_read_ops": 113920, 00:13:45.957 "bytes_written": 0, 00:13:45.957 "num_write_ops": 0, 00:13:45.957 "bytes_unmapped": 0, 00:13:45.957 "num_unmap_ops": 0, 
00:13:45.958 "bytes_copied": 0, 00:13:45.958 "num_copy_ops": 0, 00:13:45.958 "read_latency_ticks": 1107412430368, 00:13:45.958 "max_read_latency_ticks": 14181542, 00:13:45.958 "min_read_latency_ticks": 8411574, 00:13:45.958 "write_latency_ticks": 0, 00:13:45.958 "max_write_latency_ticks": 0, 00:13:45.958 "min_write_latency_ticks": 0, 00:13:45.958 "unmap_latency_ticks": 0, 00:13:45.958 "max_unmap_latency_ticks": 0, 00:13:45.958 "min_unmap_latency_ticks": 0, 00:13:45.958 "copy_latency_ticks": 0, 00:13:45.958 "max_copy_latency_ticks": 0, 00:13:45.958 "min_copy_latency_ticks": 0 00:13:45.958 }, 00:13:45.958 { 00:13:45.958 "thread_id": 3, 00:13:45.958 "bytes_read": 476053504, 00:13:45.958 "num_read_ops": 116224, 00:13:45.958 "bytes_written": 0, 00:13:45.958 "num_write_ops": 0, 00:13:45.958 "bytes_unmapped": 0, 00:13:45.958 "num_unmap_ops": 0, 00:13:45.958 "bytes_copied": 0, 00:13:45.958 "num_copy_ops": 0, 00:13:45.958 "read_latency_ticks": 1108616223287, 00:13:45.958 "max_read_latency_ticks": 13481326, 00:13:45.958 "min_read_latency_ticks": 7874010, 00:13:45.958 "write_latency_ticks": 0, 00:13:45.958 "max_write_latency_ticks": 0, 00:13:45.958 "min_write_latency_ticks": 0, 00:13:45.958 "unmap_latency_ticks": 0, 00:13:45.958 "max_unmap_latency_ticks": 0, 00:13:45.958 "min_unmap_latency_ticks": 0, 00:13:45.958 "copy_latency_ticks": 0, 00:13:45.958 "max_copy_latency_ticks": 0, 00:13:45.958 "min_copy_latency_ticks": 0 00:13:45.958 } 00:13:45.958 ] 00:13:45.958 }' 00:13:45.958 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=113920 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=113920 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=116224 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=230144 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.216 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:13:46.216 "tick_rate": 2200000000, 00:13:46.216 "ticks": 1681792344578, 00:13:46.216 "bdevs": [ 00:13:46.216 { 00:13:46.216 "name": "Malloc_STAT", 00:13:46.216 "bytes_read": 997233152, 00:13:46.216 "num_read_ops": 243459, 00:13:46.216 "bytes_written": 0, 00:13:46.216 "num_write_ops": 0, 00:13:46.216 "bytes_unmapped": 0, 00:13:46.216 "num_unmap_ops": 0, 00:13:46.216 "bytes_copied": 0, 00:13:46.216 "num_copy_ops": 0, 00:13:46.216 "read_latency_ticks": 2346094666863, 00:13:46.216 "max_read_latency_ticks": 14181542, 00:13:46.216 "min_read_latency_ticks": 519498, 00:13:46.216 "write_latency_ticks": 0, 00:13:46.216 "max_write_latency_ticks": 0, 00:13:46.216 "min_write_latency_ticks": 0, 00:13:46.216 "unmap_latency_ticks": 0, 00:13:46.216 "max_unmap_latency_ticks": 0, 00:13:46.216 "min_unmap_latency_ticks": 0, 00:13:46.216 "copy_latency_ticks": 0, 00:13:46.216 "max_copy_latency_ticks": 0, 00:13:46.216 
"min_copy_latency_ticks": 0, 00:13:46.216 "io_error": {} 00:13:46.216 } 00:13:46.217 ] 00:13:46.217 }' 00:13:46.217 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:13:46.217 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=243459 00:13:46.217 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 230144 -lt 223235 ']' 00:13:46.217 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 230144 -gt 243459 ']' 00:13:46.217 11:55:44 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:46.217 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.217 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:46.217 00:13:46.217 Latency(us) 00:13:46.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.217 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:46.217 Malloc_STAT : 2.16 57582.94 224.93 0.00 0.00 4434.31 1422.43 6702.55 00:13:46.217 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:46.217 Malloc_STAT : 2.16 59000.37 230.47 0.00 0.00 4328.15 1206.46 6136.55 00:13:46.217 =================================================================================================================== 00:13:46.217 Total : 116583.32 455.40 0.00 0.00 4380.58 1206.46 6702.55 00:13:46.217 0 00:13:46.217 11:55:44 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 130669 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@946 -- # '[' -z 130669 ']' 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@950 -- # kill -0 130669 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # uname 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130669 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:46.217 killing process with pid 130669 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130669' 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@965 -- # kill 130669 00:13:46.217 Received shutdown signal, test time was about 2.212018 seconds 00:13:46.217 00:13:46.217 Latency(us) 00:13:46.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.217 =================================================================================================================== 00:13:46.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:46.217 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@970 -- # wait 130669 00:13:46.476 ************************************ 00:13:46.476 END TEST bdev_stat 00:13:46.476 ************************************ 00:13:46.476 11:55:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:13:46.476 00:13:46.476 real 0m3.655s 00:13:46.476 user 0m7.152s 00:13:46.476 sys 0m0.374s 
00:13:46.476 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:46.476 11:55:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:13:46.734 11:55:45 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:13:46.734 ************************************ 00:13:46.734 END TEST blockdev_general 00:13:46.734 ************************************ 00:13:46.734 00:13:46.734 real 1m58.907s 00:13:46.734 user 5m17.062s 00:13:46.734 sys 0m21.561s 00:13:46.734 11:55:45 blockdev_general -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:46.734 11:55:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:13:46.734 11:55:45 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:46.734 11:55:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:46.734 11:55:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:46.734 11:55:45 -- common/autotest_common.sh@10 -- # set +x 00:13:46.734 ************************************ 00:13:46.734 START TEST bdev_raid 00:13:46.734 ************************************ 00:13:46.734 11:55:45 bdev_raid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:46.734 * Looking for test storage... 
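The bdev_raid suites that follow all share one harness: a bdev_svc app listening on /var/tmp/spdk-raid.sock, two malloc base bdevs combined into a raid bdev, and an nbd export of the result at /dev/nbd0. A rough sketch of that construction; the bdev_raid_create flag spellings and the malloc sizes here are assumptions rather than values taken from this log (check `scripts/rpc.py bdev_raid_create -h`):

  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  rpc='scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $rpc bdev_malloc_create -b Base_1 64 512
  $rpc bdev_malloc_create -b Base_2 64 512
  $rpc bdev_raid_create -n raid -r raid0 -z 64 -b "Base_1 Base_2"   # assumed flags and strip size
  $rpc nbd_start_disk raid /dev/nbd0                                # export the array for the I/O checks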
00:13:46.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:46.734 11:55:45 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' Linux = Linux ']' 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@856 -- # modprobe -n nbd 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@857 -- # has_nbd=true 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@858 -- # modprobe nbd 00:13:46.734 11:55:45 bdev_raid -- bdev/bdev_raid.sh@859 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:46.734 11:55:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:46.734 11:55:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:46.734 11:55:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.734 ************************************ 00:13:46.734 START TEST raid_function_test_raid0 00:13:46.734 ************************************ 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1121 -- # raid_function_test raid0 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@80 -- # local raid_level=raid0 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # raid_pid=130807 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 130807' 00:13:46.734 Process raid pid: 130807 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@87 -- # waitforlisten 130807 /var/tmp/spdk-raid.sock 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@827 -- # '[' -z 130807 ']' 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:46.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:46.734 11:55:45 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:46.992 [2024-07-21 11:55:45.607419] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:46.992 [2024-07-21 11:55:45.607752] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:46.992 [2024-07-21 11:55:45.780277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.297 [2024-07-21 11:55:45.865641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.297 [2024-07-21 11:55:45.920299] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.889 11:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:47.889 11:55:46 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # return 0 00:13:47.889 11:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev raid0 00:13:47.889 11:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_level=raid0 00:13:47.889 11:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:47.889 11:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # cat 00:13:47.889 11:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:48.147 [2024-07-21 11:55:46.937219] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:48.147 [2024-07-21 11:55:46.939379] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:48.147 [2024-07-21 11:55:46.939447] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:13:48.147 [2024-07-21 11:55:46.939459] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:48.147 [2024-07-21 11:55:46.939664] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:13:48.147 [2024-07-21 11:55:46.940158] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:13:48.147 [2024-07-21 11:55:46.940179] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:13:48.147 [2024-07-21 11:55:46.940380] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.147 Base_1 00:13:48.147 Base_2 00:13:48.147 11:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:48.147 11:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:48.147 11:55:46 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@96 -- # nbd_start_disks 
/var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.404 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:48.662 [2024-07-21 11:55:47.449305] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:13:48.662 /dev/nbd0 00:13:48.662 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.662 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.662 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:48.662 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@865 -- # local i 00:13:48.662 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:48.662 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:48.662 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:48.662 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # break 00:13:48.662 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.663 1+0 records in 00:13:48.663 1+0 records out 00:13:48.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000586271 s, 7.0 MB/s 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@882 -- # size=4096 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # return 0 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # nbd_get_count 
/var/tmp/spdk-raid.sock 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:48.663 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:48.920 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:48.920 { 00:13:48.920 "nbd_device": "/dev/nbd0", 00:13:48.920 "bdev_name": "raid" 00:13:48.920 } 00:13:48.920 ]' 00:13:48.920 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:48.920 { 00:13:48.920 "nbd_device": "/dev/nbd0", 00:13:48.920 "bdev_name": "raid" 00:13:48.920 } 00:13:48.920 ]' 00:13:48.920 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # count=1 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom 
of=/raidtest/raidrandtest bs=512 count=4096 00:13:49.178 4096+0 records in 00:13:49.178 4096+0 records out 00:13:49.178 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0253604 s, 82.7 MB/s 00:13:49.178 11:55:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:49.437 4096+0 records in 00:13:49.437 4096+0 records out 00:13:49.437 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.260404 s, 8.1 MB/s 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:49.437 128+0 records in 00:13:49.437 128+0 records out 00:13:49.437 65536 bytes (66 kB, 64 KiB) copied, 0.0011308 s, 58.0 MB/s 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:49.437 2035+0 records in 00:13:49.437 2035+0 records out 00:13:49.437 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00427491 s, 244 MB/s 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:49.437 456+0 records in 00:13:49.437 456+0 records out 00:13:49.437 233472 bytes (233 
kB, 228 KiB) copied, 0.00146587 s, 159 MB/s 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@54 -- # return 0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.437 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:49.696 [2024-07-21 11:55:48.459769] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:49.696 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep 
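The discard pass that just completed uses one pattern for all three ranges: seed a 2 MiB random image, copy it onto the exported array, then for each (offset, length) pair zero that range in the reference file, blkdiscard the same byte range on /dev/nbd0, flush, and require the device to still compare equal, i.e. the discarded blocks must read back as zeros. The middle range from the trace, spelled out as a sketch:

  dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
  dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
  dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
  blkdiscard -o $((1028 * 512)) -l $((2035 * 512)) /dev/nbd0   # 526336 / 1041920 bytes, as above
  blockdev --flushbufs /dev/nbd0
  cmp -b -n $((4096 * 512)) /raidtest/raidrandtest /dev/nbd0   # full 2097152-byte comparison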
-c /dev/nbd 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@105 -- # count=0 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@110 -- # killprocess 130807 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@946 -- # '[' -z 130807 ']' 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # kill -0 130807 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # uname 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:49.955 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130807 00:13:50.213 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:50.213 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:50.213 killing process with pid 130807 00:13:50.213 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130807' 00:13:50.213 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@965 -- # kill 130807 00:13:50.213 [2024-07-21 11:55:48.826390] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.213 11:55:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # wait 130807 00:13:50.213 [2024-07-21 11:55:48.826557] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:50.213 [2024-07-21 11:55:48.826651] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:50.213 [2024-07-21 11:55:48.826667] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:13:50.213 [2024-07-21 11:55:48.848500] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:50.472 11:55:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@112 -- # return 0 00:13:50.472 00:13:50.472 real 0m3.561s 00:13:50.472 user 0m5.032s 00:13:50.472 sys 0m0.905s 00:13:50.472 11:55:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:50.472 11:55:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:13:50.472 ************************************ 00:13:50.472 END TEST raid_function_test_raid0 00:13:50.472 ************************************ 00:13:50.472 11:55:49 bdev_raid -- bdev/bdev_raid.sh@860 -- # run_test raid_function_test_concat raid_function_test concat 00:13:50.472 11:55:49 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:50.472 11:55:49 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:50.472 11:55:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:50.472 ************************************ 00:13:50.472 START TEST raid_function_test_concat 00:13:50.472 ************************************ 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@1121 -- # raid_function_test concat 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@80 -- # local raid_level=concat 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@81 -- # local nbd=/dev/nbd0 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@82 -- # local raid_bdev 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # raid_pid=130958 00:13:50.472 Process raid pid: 130958 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@86 -- # echo 'Process raid pid: 130958' 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@87 -- # waitforlisten 130958 /var/tmp/spdk-raid.sock 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@827 -- # '[' -z 130958 ']' 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:50.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:50.472 11:55:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:50.472 [2024-07-21 11:55:49.208672] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:50.472 [2024-07-21 11:55:49.208874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:50.730 [2024-07-21 11:55:49.367694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.730 [2024-07-21 11:55:49.450605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.730 [2024-07-21 11:55:49.505043] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:51.663 11:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:51.663 11:55:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # return 0 00:13:51.664 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # configure_raid_bdev concat 00:13:51.664 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_level=concat 00:13:51.664 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@67 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:51.664 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # cat 00:13:51.664 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:51.664 [2024-07-21 11:55:50.485467] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:51.664 [2024-07-21 11:55:50.488338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:51.664 [2024-07-21 11:55:50.488437] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:13:51.664 [2024-07-21 11:55:50.488451] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:51.664 [2024-07-21 11:55:50.488652] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:13:51.664 [2024-07-21 11:55:50.489056] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:13:51.664 [2024-07-21 11:55:50.489080] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000006f80 00:13:51.664 [2024-07-21 11:55:50.489365] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:51.664 Base_1 00:13:51.664 Base_2 00:13:51.664 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@76 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:51.664 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # jq -r '.[0]["name"] | select(.)' 00:13:51.664 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@90 -- # raid_bdev=raid 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # '[' raid = '' ']' 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@96 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:51.921 11:55:50 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:51.921 11:55:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:52.178 [2024-07-21 11:55:51.021601] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:13:52.435 /dev/nbd0 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@865 -- # local i 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # break 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:52.435 1+0 records in 00:13:52.435 1+0 records out 00:13:52.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752656 s, 5.4 MB/s 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@882 -- # size=4096 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # return 0 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:52.435 { 00:13:52.435 "nbd_device": "/dev/nbd0", 00:13:52.435 "bdev_name": "raid" 00:13:52.435 } 00:13:52.435 ]' 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:52.435 { 00:13:52.435 "nbd_device": "/dev/nbd0", 00:13:52.435 "bdev_name": "raid" 00:13:52.435 } 00:13:52.435 ]' 00:13:52.435 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # count=1 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@98 -- # '[' 1 -ne 1 ']' 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@102 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # hash blkdiscard 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local nbd=/dev/nbd0 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local blksize 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # grep -v LOG-SEC 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # cut -d ' ' -f 5 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # blksize=512 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local rw_blk_num=4096 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local rw_len=2097152 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # unmap_blk_offs=('0' '1028' '321') 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_blk_offs 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # unmap_blk_nums=('128' '2035' '456') 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_blk_nums 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@27 -- # local unmap_off 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@28 -- # local unmap_len 00:13:52.693 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:13:52.693 4096+0 records in 00:13:52.693 4096+0 records out 00:13:52.693 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0291019 s, 72.1 MB/s 00:13:52.693 11:55:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@32 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:52.951 4096+0 records in 00:13:52.951 4096+0 records out 00:13:52.951 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.27437 s, 7.6 MB/s 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@33 -- # blockdev --flushbufs /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i = 0 )) 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=65536 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:52.951 128+0 records in 00:13:52.951 128+0 records out 00:13:52.951 65536 bytes (66 kB, 64 KiB) copied, 0.000594829 s, 110 MB/s 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=526336 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=1041920 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:52.951 2035+0 records in 00:13:52.951 2035+0 records out 00:13:52.951 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00686895 s, 152 MB/s 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@39 -- # unmap_off=164352 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@40 -- # unmap_len=233472 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@43 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:52.951 456+0 records in 00:13:52.951 456+0 records out 00:13:52.951 233472 bytes (233 kB, 228 KiB) copied, 0.0016732 s, 140 MB/s 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@46 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 
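For readability, here is a condensed sketch of the unmap/verify loop the surrounding trace is executing (bdev_raid.sh's raid_unmap_data_verify): the raid bdev is exposed at /dev/nbd0, seeded with 2 MiB of random data that is also kept in a reference file, and each discarded region is then zeroed in the reference copy and compared back against the device. Paths, block counts and offset/length pairs are copied from the trace; the premise that a discarded region reads back as zeros on this raid bdev is the test's assumption, not a general block-device guarantee, and the xtrace bookkeeping is omitted.

nbd=/dev/nbd0
ref=/raidtest/raidrandtest
unmap_blk_offs=(0 1028 321)      # 512-byte block offsets, as in the trace
unmap_blk_nums=(128 2035 456)    # region lengths in blocks
dd if=/dev/urandom of=$ref bs=512 count=4096          # 2 MiB reference pattern
dd if=$ref of=$nbd bs=512 count=4096 oflag=direct     # write it through the nbd export
blockdev --flushbufs $nbd
cmp -b -n 2097152 $ref $nbd                           # baseline: device matches the reference
for i in 0 1 2; do
  off=${unmap_blk_offs[$i]}; num=${unmap_blk_nums[$i]}
  dd if=/dev/zero of=$ref bs=512 seek=$off count=$num conv=notrunc   # zero the region in the reference copy
  blkdiscard -o $((off * 512)) -l $((num * 512)) $nbd                # discard the same byte range on the device
  blockdev --flushbufs $nbd
  cmp -b -n 2097152 $ref $nbd                                        # device must still equal the reference
done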
00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@47 -- # blockdev --flushbufs /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@50 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i++ )) 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # (( i < 3 )) 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@54 -- # return 0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@104 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:52.951 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:53.208 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:53.208 [2024-07-21 11:55:51.971698] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.208 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:53.208 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:53.208 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:53.208 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:53.208 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:53.208 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:13:53.208 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:13:53.209 11:55:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:53.209 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:53.209 11:55:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:13:53.466 11:55:52 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@105 -- # count=0 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@106 -- # '[' 0 -ne 0 ']' 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@110 -- # killprocess 130958 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@946 -- # '[' -z 130958 ']' 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # kill -0 130958 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # uname 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:53.466 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 130958 00:13:53.724 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:53.724 killing process with pid 130958 00:13:53.724 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:53.724 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 130958' 00:13:53.724 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@965 -- # kill 130958 00:13:53.724 [2024-07-21 11:55:52.334568] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:53.724 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # wait 130958 00:13:53.724 [2024-07-21 11:55:52.334701] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.724 [2024-07-21 11:55:52.334786] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.724 [2024-07-21 11:55:52.334807] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name raid, state offline 00:13:53.724 [2024-07-21 11:55:52.356154] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:53.983 11:55:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@112 -- # return 0 00:13:53.983 00:13:53.983 real 0m3.461s 00:13:53.983 user 0m4.842s 00:13:53.983 sys 0m0.858s 00:13:53.983 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:53.983 11:55:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:13:53.983 ************************************ 00:13:53.983 END TEST raid_function_test_concat 00:13:53.983 ************************************ 00:13:53.983 11:55:52 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:13:53.983 11:55:52 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:53.983 11:55:52 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:53.983 11:55:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:53.983 ************************************ 00:13:53.983 START TEST raid0_resize_test 00:13:53.983 ************************************ 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1121 -- # raid0_resize_test 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local 
blksize=512 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=131105 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 131105' 00:13:53.983 Process raid pid: 131105 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 131105 /var/tmp/spdk-raid.sock 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@827 -- # '[' -z 131105 ']' 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:53.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:53.983 11:55:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.983 [2024-07-21 11:55:52.734923] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
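Before the resize RPCs start, the trace above shows the harness pattern every one of these tests follows: launch a dedicated bdev_svc app on its own RPC socket, wait for that socket to accept RPCs, then drive everything through rpc.py. A rough sketch using the paths from the trace (waitforlisten and killprocess are helpers from the suite's autotest_common.sh; the backgrounding and the final cleanup line are implied by the trace rather than shown in it):

rpc_sock=/var/tmp/spdk-raid.sock
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 -L bdev_raid &
raid_pid=$!
waitforlisten "$raid_pid" "$rpc_sock"      # poll until the UNIX-domain socket answers RPCs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" bdev_null_create Base_1 32 512
# ... further rpc.py calls as traced below ...
killprocess "$raid_pid"                    # suite helper: kill the app and wait for it to exit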
00:13:53.983 [2024-07-21 11:55:52.735191] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.241 [2024-07-21 11:55:52.903939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.241 [2024-07-21 11:55:52.992050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.241 [2024-07-21 11:55:53.046283] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.173 11:55:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:55.173 11:55:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # return 0 00:13:55.173 11:55:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:55.173 Base_1 00:13:55.173 11:55:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:55.430 Base_2 00:13:55.430 11:55:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:55.687 [2024-07-21 11:55:54.424264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:55.687 [2024-07-21 11:55:54.426410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:55.687 [2024-07-21 11:55:54.426501] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:13:55.687 [2024-07-21 11:55:54.426526] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:55.687 [2024-07-21 11:55:54.426761] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005450 00:13:55.687 [2024-07-21 11:55:54.427140] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:13:55.687 [2024-07-21 11:55:54.427163] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000006f80 00:13:55.687 [2024-07-21 11:55:54.427400] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.687 11:55:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:55.945 [2024-07-21 11:55:54.668294] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:55.945 [2024-07-21 11:55:54.668334] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:55.945 true 00:13:55.945 11:55:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:13:55.945 11:55:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:56.202 [2024-07-21 11:55:54.892414] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.202 11:55:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:13:56.202 11:55:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:13:56.202 11:55:54 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:13:56.202 11:55:54 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:56.460 [2024-07-21 11:55:55.108355] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:56.460 [2024-07-21 11:55:55.108399] bdev_raid.c:2275:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:56.461 [2024-07-21 11:55:55.108461] bdev_raid.c:2289:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:13:56.461 true 00:13:56.461 11:55:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:56.461 11:55:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:13:56.461 [2024-07-21 11:55:55.324528] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 131105 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@946 -- # '[' -z 131105 ']' 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # kill -0 131105 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # uname 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131105 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:56.719 killing process with pid 131105 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131105' 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@965 -- # kill 131105 00:13:56.719 [2024-07-21 11:55:55.368380] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.719 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # wait 131105 00:13:56.719 [2024-07-21 11:55:55.368519] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.719 [2024-07-21 11:55:55.368589] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.719 [2024-07-21 11:55:55.368603] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Raid, state offline 00:13:56.719 [2024-07-21 11:55:55.369195] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.977 11:55:55 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:13:56.977 00:13:56.977 real 0m2.940s 00:13:56.977 user 0m4.610s 00:13:56.977 sys 0m0.453s 00:13:56.977 ************************************ 00:13:56.977 END TEST raid0_resize_test 00:13:56.977 ************************************ 
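The raid0_resize_test sequence just traced boils down to a handful of RPCs. A condensed sketch follows; the rpc() shorthand is only to keep the lines short, while every subcommand, size and flag is taken from the trace (sizes passed to the create/resize RPCs are MiB, jq reports 512-byte blocks):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
rpc bdev_null_create Base_1 32 512                            # two 32 MiB null bdevs, 512-byte blocks
rpc bdev_null_create Base_2 32 512
rpc bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid    # raid0, 64 KiB strip -> 131072 blocks
rpc bdev_null_resize Base_1 64                                # only one leg grown: Raid stays at 131072 blocks
rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'
rpc bdev_null_resize Base_2 64                                # both legs grown: Raid grows to 262144 blocks
rpc bdev_get_bdevs -b Raid | jq '.[].num_blocks'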
00:13:56.977 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:56.977 11:55:55 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.977 11:55:55 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:13:56.977 11:55:55 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:13:56.978 11:55:55 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:56.978 11:55:55 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:56.978 11:55:55 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:56.978 11:55:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.978 ************************************ 00:13:56.978 START TEST raid_state_function_test 00:13:56.978 ************************************ 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 false 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:13:56.978 
11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=131181 00:13:56.978 Process raid pid: 131181 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 131181' 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 131181 /var/tmp/spdk-raid.sock 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 131181 ']' 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:56.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:56.978 11:55:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.978 [2024-07-21 11:55:55.740347] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:56.978 [2024-07-21 11:55:55.740576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.236 [2024-07-21 11:55:55.908283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.236 [2024-07-21 11:55:55.997740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.236 [2024-07-21 11:55:56.052053] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:58.177 [2024-07-21 11:55:56.954533] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:58.177 [2024-07-21 11:55:56.954650] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:58.177 [2024-07-21 11:55:56.954666] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:58.177 [2024-07-21 11:55:56.954689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:58.177 11:55:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.177 11:55:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:58.434 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.434 "name": "Existed_Raid", 00:13:58.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.435 "strip_size_kb": 64, 00:13:58.435 "state": "configuring", 00:13:58.435 "raid_level": "raid0", 00:13:58.435 "superblock": false, 00:13:58.435 "num_base_bdevs": 2, 00:13:58.435 "num_base_bdevs_discovered": 0, 00:13:58.435 "num_base_bdevs_operational": 2, 00:13:58.435 "base_bdevs_list": [ 00:13:58.435 { 00:13:58.435 "name": "BaseBdev1", 00:13:58.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.435 "is_configured": false, 00:13:58.435 "data_offset": 0, 00:13:58.435 "data_size": 0 00:13:58.435 }, 00:13:58.435 { 00:13:58.435 "name": "BaseBdev2", 00:13:58.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.435 "is_configured": false, 00:13:58.435 "data_offset": 0, 00:13:58.435 "data_size": 0 00:13:58.435 } 00:13:58.435 ] 00:13:58.435 }' 00:13:58.435 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.435 11:55:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.000 11:55:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:59.258 [2024-07-21 11:55:58.102640] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:59.258 [2024-07-21 11:55:58.102719] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:13:59.258 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:59.516 [2024-07-21 11:55:58.370700] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.516 [2024-07-21 11:55:58.370789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.516 [2024-07-21 11:55:58.370802] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.516 [2024-07-21 11:55:58.370835] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.773 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:00.031 [2024-07-21 11:55:58.654032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.031 BaseBdev1 00:14:00.031 11:55:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:00.031 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:00.031 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:00.031 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:00.031 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:00.031 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:00.031 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:00.031 11:55:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:00.289 [ 00:14:00.289 { 00:14:00.289 "name": "BaseBdev1", 00:14:00.289 "aliases": [ 00:14:00.289 "53cdb4a9-af1b-45f8-a1ff-7de34a555e99" 00:14:00.289 ], 00:14:00.289 "product_name": "Malloc disk", 00:14:00.289 "block_size": 512, 00:14:00.289 "num_blocks": 65536, 00:14:00.289 "uuid": "53cdb4a9-af1b-45f8-a1ff-7de34a555e99", 00:14:00.289 "assigned_rate_limits": { 00:14:00.289 "rw_ios_per_sec": 0, 00:14:00.289 "rw_mbytes_per_sec": 0, 00:14:00.289 "r_mbytes_per_sec": 0, 00:14:00.289 "w_mbytes_per_sec": 0 00:14:00.289 }, 00:14:00.289 "claimed": true, 00:14:00.289 "claim_type": "exclusive_write", 00:14:00.289 "zoned": false, 00:14:00.289 "supported_io_types": { 00:14:00.289 "read": true, 00:14:00.289 "write": true, 00:14:00.289 "unmap": true, 00:14:00.289 "write_zeroes": true, 00:14:00.289 "flush": true, 00:14:00.289 "reset": true, 00:14:00.289 "compare": false, 00:14:00.289 "compare_and_write": false, 00:14:00.289 "abort": true, 00:14:00.289 "nvme_admin": false, 00:14:00.289 "nvme_io": false 00:14:00.289 }, 00:14:00.289 "memory_domains": [ 00:14:00.289 { 00:14:00.289 "dma_device_id": "system", 00:14:00.289 "dma_device_type": 1 00:14:00.289 }, 00:14:00.289 { 00:14:00.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:00.289 "dma_device_type": 2 00:14:00.289 } 00:14:00.289 ], 00:14:00.289 "driver_specific": {} 00:14:00.289 } 00:14:00.289 ] 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:00.289 11:55:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:00.289 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.547 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:00.547 "name": "Existed_Raid", 00:14:00.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.547 "strip_size_kb": 64, 00:14:00.547 "state": "configuring", 00:14:00.547 "raid_level": "raid0", 00:14:00.547 "superblock": false, 00:14:00.547 "num_base_bdevs": 2, 00:14:00.547 "num_base_bdevs_discovered": 1, 00:14:00.547 "num_base_bdevs_operational": 2, 00:14:00.547 "base_bdevs_list": [ 00:14:00.547 { 00:14:00.547 "name": "BaseBdev1", 00:14:00.547 "uuid": "53cdb4a9-af1b-45f8-a1ff-7de34a555e99", 00:14:00.547 "is_configured": true, 00:14:00.547 "data_offset": 0, 00:14:00.547 "data_size": 65536 00:14:00.547 }, 00:14:00.547 { 00:14:00.547 "name": "BaseBdev2", 00:14:00.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.547 "is_configured": false, 00:14:00.547 "data_offset": 0, 00:14:00.547 "data_size": 0 00:14:00.547 } 00:14:00.547 ] 00:14:00.547 }' 00:14:00.547 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:00.547 11:55:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.111 11:55:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:01.676 [2024-07-21 11:56:00.234411] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:01.676 [2024-07-21 11:56:00.234521] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:01.676 [2024-07-21 11:56:00.446506] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.676 [2024-07-21 11:56:00.448698] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.676 [2024-07-21 11:56:00.448775] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # 
local expected_state=configuring 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.676 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.934 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:01.934 "name": "Existed_Raid", 00:14:01.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.934 "strip_size_kb": 64, 00:14:01.934 "state": "configuring", 00:14:01.934 "raid_level": "raid0", 00:14:01.934 "superblock": false, 00:14:01.934 "num_base_bdevs": 2, 00:14:01.934 "num_base_bdevs_discovered": 1, 00:14:01.934 "num_base_bdevs_operational": 2, 00:14:01.934 "base_bdevs_list": [ 00:14:01.934 { 00:14:01.934 "name": "BaseBdev1", 00:14:01.934 "uuid": "53cdb4a9-af1b-45f8-a1ff-7de34a555e99", 00:14:01.934 "is_configured": true, 00:14:01.934 "data_offset": 0, 00:14:01.934 "data_size": 65536 00:14:01.934 }, 00:14:01.934 { 00:14:01.934 "name": "BaseBdev2", 00:14:01.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.934 "is_configured": false, 00:14:01.934 "data_offset": 0, 00:14:01.934 "data_size": 0 00:14:01.934 } 00:14:01.934 ] 00:14:01.934 }' 00:14:01.934 11:56:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:01.934 11:56:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.501 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:02.759 [2024-07-21 11:56:01.590771] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.759 [2024-07-21 11:56:01.590834] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:02.759 [2024-07-21 11:56:01.590848] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:02.759 [2024-07-21 11:56:01.591051] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:02.759 [2024-07-21 11:56:01.591559] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:02.759 [2024-07-21 11:56:01.591590] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:14:02.759 [2024-07-21 11:56:01.591962] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.759 BaseBdev2 00:14:02.759 11:56:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:02.759 
11:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:02.759 11:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:02.759 11:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:02.759 11:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:02.759 11:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:02.759 11:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.017 11:56:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:03.275 [ 00:14:03.275 { 00:14:03.275 "name": "BaseBdev2", 00:14:03.275 "aliases": [ 00:14:03.275 "4680c58f-e4a2-45b0-b0db-4662032e9354" 00:14:03.275 ], 00:14:03.275 "product_name": "Malloc disk", 00:14:03.275 "block_size": 512, 00:14:03.275 "num_blocks": 65536, 00:14:03.275 "uuid": "4680c58f-e4a2-45b0-b0db-4662032e9354", 00:14:03.275 "assigned_rate_limits": { 00:14:03.275 "rw_ios_per_sec": 0, 00:14:03.275 "rw_mbytes_per_sec": 0, 00:14:03.275 "r_mbytes_per_sec": 0, 00:14:03.275 "w_mbytes_per_sec": 0 00:14:03.275 }, 00:14:03.275 "claimed": true, 00:14:03.275 "claim_type": "exclusive_write", 00:14:03.275 "zoned": false, 00:14:03.275 "supported_io_types": { 00:14:03.275 "read": true, 00:14:03.275 "write": true, 00:14:03.275 "unmap": true, 00:14:03.275 "write_zeroes": true, 00:14:03.275 "flush": true, 00:14:03.275 "reset": true, 00:14:03.275 "compare": false, 00:14:03.275 "compare_and_write": false, 00:14:03.275 "abort": true, 00:14:03.275 "nvme_admin": false, 00:14:03.275 "nvme_io": false 00:14:03.275 }, 00:14:03.275 "memory_domains": [ 00:14:03.275 { 00:14:03.275 "dma_device_id": "system", 00:14:03.275 "dma_device_type": 1 00:14:03.275 }, 00:14:03.275 { 00:14:03.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.275 "dma_device_type": 2 00:14:03.275 } 00:14:03.275 ], 00:14:03.275 "driver_specific": {} 00:14:03.275 } 00:14:03.275 ] 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.275 
11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.275 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:03.533 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.533 "name": "Existed_Raid", 00:14:03.533 "uuid": "76ef2a5b-7ea3-47b6-9e89-13adc7882e4d", 00:14:03.533 "strip_size_kb": 64, 00:14:03.533 "state": "online", 00:14:03.533 "raid_level": "raid0", 00:14:03.533 "superblock": false, 00:14:03.533 "num_base_bdevs": 2, 00:14:03.533 "num_base_bdevs_discovered": 2, 00:14:03.533 "num_base_bdevs_operational": 2, 00:14:03.533 "base_bdevs_list": [ 00:14:03.533 { 00:14:03.533 "name": "BaseBdev1", 00:14:03.533 "uuid": "53cdb4a9-af1b-45f8-a1ff-7de34a555e99", 00:14:03.533 "is_configured": true, 00:14:03.533 "data_offset": 0, 00:14:03.533 "data_size": 65536 00:14:03.533 }, 00:14:03.533 { 00:14:03.533 "name": "BaseBdev2", 00:14:03.533 "uuid": "4680c58f-e4a2-45b0-b0db-4662032e9354", 00:14:03.533 "is_configured": true, 00:14:03.533 "data_offset": 0, 00:14:03.533 "data_size": 65536 00:14:03.533 } 00:14:03.533 ] 00:14:03.533 }' 00:14:03.791 11:56:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.791 11:56:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.356 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:04.356 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:04.356 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:04.356 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:04.356 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:04.356 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:04.356 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:04.356 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:04.613 [2024-07-21 11:56:03.267493] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.613 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:04.613 "name": "Existed_Raid", 00:14:04.613 "aliases": [ 00:14:04.613 "76ef2a5b-7ea3-47b6-9e89-13adc7882e4d" 00:14:04.613 ], 00:14:04.613 "product_name": "Raid Volume", 00:14:04.613 "block_size": 512, 00:14:04.613 "num_blocks": 131072, 00:14:04.613 "uuid": "76ef2a5b-7ea3-47b6-9e89-13adc7882e4d", 00:14:04.613 "assigned_rate_limits": { 00:14:04.613 "rw_ios_per_sec": 0, 00:14:04.613 "rw_mbytes_per_sec": 0, 00:14:04.613 "r_mbytes_per_sec": 0, 00:14:04.613 "w_mbytes_per_sec": 0 00:14:04.613 }, 00:14:04.613 "claimed": false, 00:14:04.613 "zoned": false, 00:14:04.613 "supported_io_types": { 00:14:04.613 "read": true, 00:14:04.613 "write": true, 00:14:04.613 
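The trace above walks Existed_Raid through the raid bdev state machine: created while its base bdevs are still missing it sits in "configuring" with zero discovered members, it claims each malloc bdev as soon as that bdev appears, and it only reports "online" once both members are configured. A condensed sketch of that walk (the intermediate delete/re-create steps from the trace are omitted, and the rpc() shorthand is illustrative):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid   # bases missing: state stays "configuring"
rpc bdev_malloc_create 32 512 -b BaseBdev1      # claimed immediately; 1 of 2 discovered, still "configuring"
rpc bdev_malloc_create 32 512 -b BaseBdev2      # 2 of 2 discovered: raid goes "online"
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'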
"unmap": true, 00:14:04.613 "write_zeroes": true, 00:14:04.613 "flush": true, 00:14:04.613 "reset": true, 00:14:04.613 "compare": false, 00:14:04.613 "compare_and_write": false, 00:14:04.613 "abort": false, 00:14:04.613 "nvme_admin": false, 00:14:04.613 "nvme_io": false 00:14:04.613 }, 00:14:04.613 "memory_domains": [ 00:14:04.613 { 00:14:04.613 "dma_device_id": "system", 00:14:04.613 "dma_device_type": 1 00:14:04.613 }, 00:14:04.613 { 00:14:04.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.613 "dma_device_type": 2 00:14:04.613 }, 00:14:04.613 { 00:14:04.613 "dma_device_id": "system", 00:14:04.613 "dma_device_type": 1 00:14:04.613 }, 00:14:04.613 { 00:14:04.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.613 "dma_device_type": 2 00:14:04.613 } 00:14:04.613 ], 00:14:04.613 "driver_specific": { 00:14:04.613 "raid": { 00:14:04.613 "uuid": "76ef2a5b-7ea3-47b6-9e89-13adc7882e4d", 00:14:04.613 "strip_size_kb": 64, 00:14:04.613 "state": "online", 00:14:04.613 "raid_level": "raid0", 00:14:04.613 "superblock": false, 00:14:04.613 "num_base_bdevs": 2, 00:14:04.613 "num_base_bdevs_discovered": 2, 00:14:04.613 "num_base_bdevs_operational": 2, 00:14:04.613 "base_bdevs_list": [ 00:14:04.613 { 00:14:04.613 "name": "BaseBdev1", 00:14:04.613 "uuid": "53cdb4a9-af1b-45f8-a1ff-7de34a555e99", 00:14:04.613 "is_configured": true, 00:14:04.613 "data_offset": 0, 00:14:04.613 "data_size": 65536 00:14:04.613 }, 00:14:04.613 { 00:14:04.613 "name": "BaseBdev2", 00:14:04.613 "uuid": "4680c58f-e4a2-45b0-b0db-4662032e9354", 00:14:04.613 "is_configured": true, 00:14:04.613 "data_offset": 0, 00:14:04.613 "data_size": 65536 00:14:04.613 } 00:14:04.613 ] 00:14:04.613 } 00:14:04.613 } 00:14:04.613 }' 00:14:04.613 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:04.613 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:04.613 BaseBdev2' 00:14:04.613 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:04.613 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:04.613 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:04.871 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:04.871 "name": "BaseBdev1", 00:14:04.871 "aliases": [ 00:14:04.871 "53cdb4a9-af1b-45f8-a1ff-7de34a555e99" 00:14:04.871 ], 00:14:04.871 "product_name": "Malloc disk", 00:14:04.871 "block_size": 512, 00:14:04.871 "num_blocks": 65536, 00:14:04.871 "uuid": "53cdb4a9-af1b-45f8-a1ff-7de34a555e99", 00:14:04.871 "assigned_rate_limits": { 00:14:04.871 "rw_ios_per_sec": 0, 00:14:04.871 "rw_mbytes_per_sec": 0, 00:14:04.871 "r_mbytes_per_sec": 0, 00:14:04.871 "w_mbytes_per_sec": 0 00:14:04.871 }, 00:14:04.871 "claimed": true, 00:14:04.871 "claim_type": "exclusive_write", 00:14:04.871 "zoned": false, 00:14:04.871 "supported_io_types": { 00:14:04.871 "read": true, 00:14:04.871 "write": true, 00:14:04.871 "unmap": true, 00:14:04.871 "write_zeroes": true, 00:14:04.871 "flush": true, 00:14:04.871 "reset": true, 00:14:04.871 "compare": false, 00:14:04.871 "compare_and_write": false, 00:14:04.871 "abort": true, 00:14:04.871 "nvme_admin": false, 00:14:04.871 "nvme_io": false 00:14:04.871 }, 00:14:04.871 "memory_domains": [ 
00:14:04.871 { 00:14:04.871 "dma_device_id": "system", 00:14:04.871 "dma_device_type": 1 00:14:04.871 }, 00:14:04.871 { 00:14:04.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:04.871 "dma_device_type": 2 00:14:04.871 } 00:14:04.871 ], 00:14:04.871 "driver_specific": {} 00:14:04.871 }' 00:14:04.871 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.871 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:04.871 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:04.871 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:04.871 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:05.128 11:56:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:05.386 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:05.386 "name": "BaseBdev2", 00:14:05.386 "aliases": [ 00:14:05.386 "4680c58f-e4a2-45b0-b0db-4662032e9354" 00:14:05.386 ], 00:14:05.386 "product_name": "Malloc disk", 00:14:05.386 "block_size": 512, 00:14:05.386 "num_blocks": 65536, 00:14:05.386 "uuid": "4680c58f-e4a2-45b0-b0db-4662032e9354", 00:14:05.386 "assigned_rate_limits": { 00:14:05.386 "rw_ios_per_sec": 0, 00:14:05.386 "rw_mbytes_per_sec": 0, 00:14:05.386 "r_mbytes_per_sec": 0, 00:14:05.386 "w_mbytes_per_sec": 0 00:14:05.386 }, 00:14:05.386 "claimed": true, 00:14:05.386 "claim_type": "exclusive_write", 00:14:05.386 "zoned": false, 00:14:05.386 "supported_io_types": { 00:14:05.386 "read": true, 00:14:05.387 "write": true, 00:14:05.387 "unmap": true, 00:14:05.387 "write_zeroes": true, 00:14:05.387 "flush": true, 00:14:05.387 "reset": true, 00:14:05.387 "compare": false, 00:14:05.387 "compare_and_write": false, 00:14:05.387 "abort": true, 00:14:05.387 "nvme_admin": false, 00:14:05.387 "nvme_io": false 00:14:05.387 }, 00:14:05.387 "memory_domains": [ 00:14:05.387 { 00:14:05.387 "dma_device_id": "system", 00:14:05.387 "dma_device_type": 1 00:14:05.387 }, 00:14:05.387 { 00:14:05.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.387 "dma_device_type": 2 00:14:05.387 } 00:14:05.387 ], 00:14:05.387 "driver_specific": {} 00:14:05.387 }' 00:14:05.387 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:05.644 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
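The surrounding jq checks belong to verify_raid_bdev_properties: the raid volume's JSON is dumped once with bdev_get_bdevs, the configured member names are pulled out of driver_specific.raid.base_bdevs_list, and a few jq-extracted fields are compared between the raid and each member. A minimal sketch of that pattern, assuming the same socket and bdev names as the trace (the actual script compares against expected literals rather than cross-checking the two JSON dumps):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
raid_json=$(rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
members=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_json")
for name in $members; do
  base_json=$(rpc bdev_get_bdevs -b "$name" | jq '.[]')
  for field in .block_size .md_size .md_interleave .dif_type; do
    [[ "$(jq "$field" <<< "$raid_json")" == "$(jq "$field" <<< "$base_json")" ]] || echo "mismatch: $name $field"
  done
done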
00:14:05.644 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:05.644 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.644 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:05.644 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:05.644 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.644 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:05.901 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:05.901 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.901 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:05.901 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:05.901 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:06.159 [2024-07-21 11:56:04.850412] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.159 [2024-07-21 11:56:04.850454] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.159 [2024-07-21 11:56:04.850562] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.159 11:56:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:06.417 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.417 "name": "Existed_Raid", 00:14:06.417 "uuid": "76ef2a5b-7ea3-47b6-9e89-13adc7882e4d", 00:14:06.417 "strip_size_kb": 64, 00:14:06.417 "state": "offline", 00:14:06.417 "raid_level": "raid0", 00:14:06.417 "superblock": false, 00:14:06.417 "num_base_bdevs": 2, 00:14:06.417 "num_base_bdevs_discovered": 1, 00:14:06.417 "num_base_bdevs_operational": 1, 00:14:06.417 "base_bdevs_list": [ 00:14:06.417 { 00:14:06.417 "name": null, 00:14:06.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.417 "is_configured": false, 00:14:06.417 "data_offset": 0, 00:14:06.417 "data_size": 65536 00:14:06.417 }, 00:14:06.417 { 00:14:06.417 "name": "BaseBdev2", 00:14:06.417 "uuid": "4680c58f-e4a2-45b0-b0db-4662032e9354", 00:14:06.417 "is_configured": true, 00:14:06.417 "data_offset": 0, 00:14:06.417 "data_size": 65536 00:14:06.417 } 00:14:06.417 ] 00:14:06.417 }' 00:14:06.417 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.417 11:56:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.983 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:06.983 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:06.983 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.983 11:56:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:07.241 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:07.241 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:07.241 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:07.499 [2024-07-21 11:56:06.251380] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.499 [2024-07-21 11:56:06.251468] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:14:07.499 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:07.499 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:07.499 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.499 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 131181 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 131181 ']' 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 131181 00:14:07.758 11:56:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131181 00:14:07.758 killing process with pid 131181 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131181' 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 131181 00:14:07.758 [2024-07-21 11:56:06.517536] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.758 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 131181 00:14:07.758 [2024-07-21 11:56:06.517632] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:08.017 00:14:08.017 real 0m11.088s 00:14:08.017 user 0m20.373s 00:14:08.017 sys 0m1.452s 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.017 ************************************ 00:14:08.017 END TEST raid_state_function_test 00:14:08.017 ************************************ 00:14:08.017 11:56:06 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:08.017 11:56:06 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:08.017 11:56:06 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:08.017 11:56:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.017 ************************************ 00:14:08.017 START TEST raid_state_function_test_sb 00:14:08.017 ************************************ 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 2 true 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=131559 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 131559' 00:14:08.017 Process raid pid: 131559 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 131559 /var/tmp/spdk-raid.sock 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 131559 ']' 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:08.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:08.017 11:56:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.017 [2024-07-21 11:56:06.879673] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
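The superblock variant that starts here drives the same RPC flow as the plain raid_state_function_test, except bdev_raid_create is passed -s, so each base bdev carries an on-disk superblock and the later dumps report "superblock": true with a data_offset of 2048 blocks. A condensed sketch of the happy path, assembled from commands visible in this trace (the real script interleaves them with verify_raid_bdev_state checks, and the final jq filter for .state is added here only for brevity):

  sock=/var/tmp/spdk-raid.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # two 32 MB malloc base bdevs with 512-byte blocks (65536 blocks each)
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev2
  # raid0 with a 64 KB strip size; -s requests the superblock
  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # both base bdevs discovered, array state "online"
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # raid0 has no redundancy, so deleting one leg drops the array to "offline"
  "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
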
00:14:08.017 [2024-07-21 11:56:06.879972] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.280 [2024-07-21 11:56:07.050745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.280 [2024-07-21 11:56:07.125090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.547 [2024-07-21 11:56:07.179355] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:09.114 11:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:09.114 11:56:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:14:09.114 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:09.114 [2024-07-21 11:56:07.969249] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.114 [2024-07-21 11:56:07.969620] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.115 [2024-07-21 11:56:07.969754] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.115 [2024-07-21 11:56:07.969820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.373 11:56:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.631 11:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.631 "name": "Existed_Raid", 00:14:09.631 "uuid": "9542b2cf-5d9b-4ce6-9304-d0922c5e3a46", 00:14:09.631 "strip_size_kb": 64, 00:14:09.631 "state": "configuring", 00:14:09.631 "raid_level": "raid0", 00:14:09.631 "superblock": true, 00:14:09.631 "num_base_bdevs": 2, 00:14:09.631 "num_base_bdevs_discovered": 0, 00:14:09.631 "num_base_bdevs_operational": 2, 
00:14:09.631 "base_bdevs_list": [ 00:14:09.631 { 00:14:09.631 "name": "BaseBdev1", 00:14:09.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.631 "is_configured": false, 00:14:09.631 "data_offset": 0, 00:14:09.631 "data_size": 0 00:14:09.631 }, 00:14:09.631 { 00:14:09.631 "name": "BaseBdev2", 00:14:09.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.631 "is_configured": false, 00:14:09.631 "data_offset": 0, 00:14:09.631 "data_size": 0 00:14:09.631 } 00:14:09.631 ] 00:14:09.631 }' 00:14:09.631 11:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.631 11:56:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.198 11:56:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:10.456 [2024-07-21 11:56:09.077259] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.456 [2024-07-21 11:56:09.077570] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:10.456 11:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:10.456 [2024-07-21 11:56:09.281365] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.456 [2024-07-21 11:56:09.281635] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.456 [2024-07-21 11:56:09.281748] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.456 [2024-07-21 11:56:09.281824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.456 11:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:10.715 [2024-07-21 11:56:09.572528] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.715 BaseBdev1 00:14:10.974 11:56:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:10.974 11:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:10.974 11:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:10.974 11:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:10.974 11:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:10.974 11:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:10.974 11:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.974 11:56:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:11.232 [ 00:14:11.232 { 00:14:11.232 "name": "BaseBdev1", 00:14:11.232 "aliases": [ 00:14:11.232 "89bca0d1-cbd6-4b26-8ccf-90f757164f5c" 00:14:11.232 ], 00:14:11.232 
"product_name": "Malloc disk", 00:14:11.232 "block_size": 512, 00:14:11.232 "num_blocks": 65536, 00:14:11.232 "uuid": "89bca0d1-cbd6-4b26-8ccf-90f757164f5c", 00:14:11.232 "assigned_rate_limits": { 00:14:11.232 "rw_ios_per_sec": 0, 00:14:11.232 "rw_mbytes_per_sec": 0, 00:14:11.232 "r_mbytes_per_sec": 0, 00:14:11.232 "w_mbytes_per_sec": 0 00:14:11.232 }, 00:14:11.232 "claimed": true, 00:14:11.232 "claim_type": "exclusive_write", 00:14:11.232 "zoned": false, 00:14:11.232 "supported_io_types": { 00:14:11.232 "read": true, 00:14:11.232 "write": true, 00:14:11.232 "unmap": true, 00:14:11.232 "write_zeroes": true, 00:14:11.232 "flush": true, 00:14:11.232 "reset": true, 00:14:11.232 "compare": false, 00:14:11.232 "compare_and_write": false, 00:14:11.232 "abort": true, 00:14:11.232 "nvme_admin": false, 00:14:11.232 "nvme_io": false 00:14:11.232 }, 00:14:11.232 "memory_domains": [ 00:14:11.232 { 00:14:11.232 "dma_device_id": "system", 00:14:11.232 "dma_device_type": 1 00:14:11.232 }, 00:14:11.232 { 00:14:11.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.232 "dma_device_type": 2 00:14:11.232 } 00:14:11.232 ], 00:14:11.232 "driver_specific": {} 00:14:11.232 } 00:14:11.232 ] 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.232 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.491 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.491 "name": "Existed_Raid", 00:14:11.491 "uuid": "c8f4a1a6-3eb0-4be4-aec0-69f18d029af9", 00:14:11.491 "strip_size_kb": 64, 00:14:11.491 "state": "configuring", 00:14:11.491 "raid_level": "raid0", 00:14:11.491 "superblock": true, 00:14:11.491 "num_base_bdevs": 2, 00:14:11.491 "num_base_bdevs_discovered": 1, 00:14:11.491 "num_base_bdevs_operational": 2, 00:14:11.491 "base_bdevs_list": [ 00:14:11.491 { 00:14:11.491 "name": "BaseBdev1", 00:14:11.491 "uuid": "89bca0d1-cbd6-4b26-8ccf-90f757164f5c", 00:14:11.491 "is_configured": true, 00:14:11.491 "data_offset": 2048, 00:14:11.491 "data_size": 63488 00:14:11.491 }, 00:14:11.491 { 
00:14:11.491 "name": "BaseBdev2", 00:14:11.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.491 "is_configured": false, 00:14:11.491 "data_offset": 0, 00:14:11.491 "data_size": 0 00:14:11.491 } 00:14:11.491 ] 00:14:11.491 }' 00:14:11.491 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.491 11:56:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.059 11:56:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:12.318 [2024-07-21 11:56:11.104867] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:12.318 [2024-07-21 11:56:11.105200] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:12.318 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:12.576 [2024-07-21 11:56:11.312972] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:12.576 [2024-07-21 11:56:11.315565] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:12.576 [2024-07-21 11:56:11.315794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.576 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.834 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.834 "name": "Existed_Raid", 00:14:12.834 "uuid": "0a0fffbd-4131-4b7e-8cbf-14f18fe622ac", 00:14:12.834 "strip_size_kb": 64, 00:14:12.834 "state": "configuring", 00:14:12.834 
"raid_level": "raid0", 00:14:12.834 "superblock": true, 00:14:12.834 "num_base_bdevs": 2, 00:14:12.834 "num_base_bdevs_discovered": 1, 00:14:12.834 "num_base_bdevs_operational": 2, 00:14:12.834 "base_bdevs_list": [ 00:14:12.834 { 00:14:12.834 "name": "BaseBdev1", 00:14:12.834 "uuid": "89bca0d1-cbd6-4b26-8ccf-90f757164f5c", 00:14:12.834 "is_configured": true, 00:14:12.834 "data_offset": 2048, 00:14:12.834 "data_size": 63488 00:14:12.834 }, 00:14:12.834 { 00:14:12.834 "name": "BaseBdev2", 00:14:12.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.834 "is_configured": false, 00:14:12.834 "data_offset": 0, 00:14:12.834 "data_size": 0 00:14:12.834 } 00:14:12.834 ] 00:14:12.834 }' 00:14:12.834 11:56:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.834 11:56:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.399 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:13.657 [2024-07-21 11:56:12.402507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:13.657 [2024-07-21 11:56:12.402840] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:13.657 [2024-07-21 11:56:12.402860] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:13.657 [2024-07-21 11:56:12.403072] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:13.657 [2024-07-21 11:56:12.403618] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:13.657 [2024-07-21 11:56:12.403648] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:14:13.657 BaseBdev2 00:14:13.657 [2024-07-21 11:56:12.403920] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.657 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:13.657 11:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:13.657 11:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:13.657 11:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:13.657 11:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:13.657 11:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:13.657 11:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:13.916 11:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:14.174 [ 00:14:14.174 { 00:14:14.174 "name": "BaseBdev2", 00:14:14.174 "aliases": [ 00:14:14.174 "80214396-5bec-4238-8950-c521426ba087" 00:14:14.174 ], 00:14:14.174 "product_name": "Malloc disk", 00:14:14.174 "block_size": 512, 00:14:14.174 "num_blocks": 65536, 00:14:14.174 "uuid": "80214396-5bec-4238-8950-c521426ba087", 00:14:14.174 "assigned_rate_limits": { 00:14:14.174 "rw_ios_per_sec": 0, 00:14:14.174 "rw_mbytes_per_sec": 0, 
00:14:14.174 "r_mbytes_per_sec": 0, 00:14:14.174 "w_mbytes_per_sec": 0 00:14:14.174 }, 00:14:14.174 "claimed": true, 00:14:14.174 "claim_type": "exclusive_write", 00:14:14.174 "zoned": false, 00:14:14.174 "supported_io_types": { 00:14:14.174 "read": true, 00:14:14.174 "write": true, 00:14:14.174 "unmap": true, 00:14:14.174 "write_zeroes": true, 00:14:14.174 "flush": true, 00:14:14.174 "reset": true, 00:14:14.174 "compare": false, 00:14:14.174 "compare_and_write": false, 00:14:14.174 "abort": true, 00:14:14.174 "nvme_admin": false, 00:14:14.174 "nvme_io": false 00:14:14.174 }, 00:14:14.174 "memory_domains": [ 00:14:14.174 { 00:14:14.174 "dma_device_id": "system", 00:14:14.174 "dma_device_type": 1 00:14:14.174 }, 00:14:14.174 { 00:14:14.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.174 "dma_device_type": 2 00:14:14.174 } 00:14:14.174 ], 00:14:14.174 "driver_specific": {} 00:14:14.174 } 00:14:14.174 ] 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.174 11:56:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.432 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.432 "name": "Existed_Raid", 00:14:14.432 "uuid": "0a0fffbd-4131-4b7e-8cbf-14f18fe622ac", 00:14:14.432 "strip_size_kb": 64, 00:14:14.432 "state": "online", 00:14:14.432 "raid_level": "raid0", 00:14:14.432 "superblock": true, 00:14:14.432 "num_base_bdevs": 2, 00:14:14.432 "num_base_bdevs_discovered": 2, 00:14:14.432 "num_base_bdevs_operational": 2, 00:14:14.432 "base_bdevs_list": [ 00:14:14.432 { 00:14:14.432 "name": "BaseBdev1", 00:14:14.432 "uuid": "89bca0d1-cbd6-4b26-8ccf-90f757164f5c", 00:14:14.432 "is_configured": true, 00:14:14.432 "data_offset": 2048, 00:14:14.432 "data_size": 63488 00:14:14.432 }, 00:14:14.432 { 00:14:14.432 "name": "BaseBdev2", 00:14:14.432 "uuid": 
"80214396-5bec-4238-8950-c521426ba087", 00:14:14.432 "is_configured": true, 00:14:14.432 "data_offset": 2048, 00:14:14.432 "data_size": 63488 00:14:14.432 } 00:14:14.432 ] 00:14:14.432 }' 00:14:14.432 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.432 11:56:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.009 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:15.009 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:15.009 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:15.009 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:15.009 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:15.009 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:15.009 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:15.009 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:15.266 [2024-07-21 11:56:13.963447] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.266 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:15.266 "name": "Existed_Raid", 00:14:15.266 "aliases": [ 00:14:15.266 "0a0fffbd-4131-4b7e-8cbf-14f18fe622ac" 00:14:15.266 ], 00:14:15.266 "product_name": "Raid Volume", 00:14:15.266 "block_size": 512, 00:14:15.266 "num_blocks": 126976, 00:14:15.266 "uuid": "0a0fffbd-4131-4b7e-8cbf-14f18fe622ac", 00:14:15.266 "assigned_rate_limits": { 00:14:15.266 "rw_ios_per_sec": 0, 00:14:15.266 "rw_mbytes_per_sec": 0, 00:14:15.266 "r_mbytes_per_sec": 0, 00:14:15.266 "w_mbytes_per_sec": 0 00:14:15.266 }, 00:14:15.266 "claimed": false, 00:14:15.266 "zoned": false, 00:14:15.266 "supported_io_types": { 00:14:15.266 "read": true, 00:14:15.266 "write": true, 00:14:15.266 "unmap": true, 00:14:15.266 "write_zeroes": true, 00:14:15.266 "flush": true, 00:14:15.266 "reset": true, 00:14:15.266 "compare": false, 00:14:15.266 "compare_and_write": false, 00:14:15.266 "abort": false, 00:14:15.266 "nvme_admin": false, 00:14:15.266 "nvme_io": false 00:14:15.266 }, 00:14:15.266 "memory_domains": [ 00:14:15.266 { 00:14:15.266 "dma_device_id": "system", 00:14:15.266 "dma_device_type": 1 00:14:15.266 }, 00:14:15.266 { 00:14:15.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.266 "dma_device_type": 2 00:14:15.266 }, 00:14:15.266 { 00:14:15.266 "dma_device_id": "system", 00:14:15.266 "dma_device_type": 1 00:14:15.266 }, 00:14:15.266 { 00:14:15.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.266 "dma_device_type": 2 00:14:15.266 } 00:14:15.266 ], 00:14:15.266 "driver_specific": { 00:14:15.266 "raid": { 00:14:15.266 "uuid": "0a0fffbd-4131-4b7e-8cbf-14f18fe622ac", 00:14:15.266 "strip_size_kb": 64, 00:14:15.266 "state": "online", 00:14:15.266 "raid_level": "raid0", 00:14:15.266 "superblock": true, 00:14:15.266 "num_base_bdevs": 2, 00:14:15.266 "num_base_bdevs_discovered": 2, 00:14:15.266 "num_base_bdevs_operational": 2, 00:14:15.266 "base_bdevs_list": [ 00:14:15.266 { 00:14:15.266 "name": "BaseBdev1", 00:14:15.266 "uuid": 
"89bca0d1-cbd6-4b26-8ccf-90f757164f5c", 00:14:15.266 "is_configured": true, 00:14:15.266 "data_offset": 2048, 00:14:15.266 "data_size": 63488 00:14:15.266 }, 00:14:15.266 { 00:14:15.266 "name": "BaseBdev2", 00:14:15.266 "uuid": "80214396-5bec-4238-8950-c521426ba087", 00:14:15.266 "is_configured": true, 00:14:15.266 "data_offset": 2048, 00:14:15.266 "data_size": 63488 00:14:15.266 } 00:14:15.266 ] 00:14:15.266 } 00:14:15.266 } 00:14:15.266 }' 00:14:15.266 11:56:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.266 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:15.266 BaseBdev2' 00:14:15.266 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.266 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.266 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:15.523 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.523 "name": "BaseBdev1", 00:14:15.523 "aliases": [ 00:14:15.523 "89bca0d1-cbd6-4b26-8ccf-90f757164f5c" 00:14:15.523 ], 00:14:15.523 "product_name": "Malloc disk", 00:14:15.523 "block_size": 512, 00:14:15.523 "num_blocks": 65536, 00:14:15.523 "uuid": "89bca0d1-cbd6-4b26-8ccf-90f757164f5c", 00:14:15.523 "assigned_rate_limits": { 00:14:15.523 "rw_ios_per_sec": 0, 00:14:15.523 "rw_mbytes_per_sec": 0, 00:14:15.523 "r_mbytes_per_sec": 0, 00:14:15.523 "w_mbytes_per_sec": 0 00:14:15.523 }, 00:14:15.523 "claimed": true, 00:14:15.523 "claim_type": "exclusive_write", 00:14:15.523 "zoned": false, 00:14:15.523 "supported_io_types": { 00:14:15.523 "read": true, 00:14:15.523 "write": true, 00:14:15.523 "unmap": true, 00:14:15.523 "write_zeroes": true, 00:14:15.523 "flush": true, 00:14:15.523 "reset": true, 00:14:15.523 "compare": false, 00:14:15.523 "compare_and_write": false, 00:14:15.523 "abort": true, 00:14:15.523 "nvme_admin": false, 00:14:15.524 "nvme_io": false 00:14:15.524 }, 00:14:15.524 "memory_domains": [ 00:14:15.524 { 00:14:15.524 "dma_device_id": "system", 00:14:15.524 "dma_device_type": 1 00:14:15.524 }, 00:14:15.524 { 00:14:15.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.524 "dma_device_type": 2 00:14:15.524 } 00:14:15.524 ], 00:14:15.524 "driver_specific": {} 00:14:15.524 }' 00:14:15.524 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.524 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.781 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:15.781 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.781 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.781 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:15.781 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.781 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.781 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:14:15.781 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.781 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.039 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.039 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.039 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.039 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:16.297 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.297 "name": "BaseBdev2", 00:14:16.297 "aliases": [ 00:14:16.297 "80214396-5bec-4238-8950-c521426ba087" 00:14:16.297 ], 00:14:16.297 "product_name": "Malloc disk", 00:14:16.297 "block_size": 512, 00:14:16.297 "num_blocks": 65536, 00:14:16.297 "uuid": "80214396-5bec-4238-8950-c521426ba087", 00:14:16.297 "assigned_rate_limits": { 00:14:16.297 "rw_ios_per_sec": 0, 00:14:16.297 "rw_mbytes_per_sec": 0, 00:14:16.297 "r_mbytes_per_sec": 0, 00:14:16.297 "w_mbytes_per_sec": 0 00:14:16.297 }, 00:14:16.297 "claimed": true, 00:14:16.297 "claim_type": "exclusive_write", 00:14:16.297 "zoned": false, 00:14:16.297 "supported_io_types": { 00:14:16.297 "read": true, 00:14:16.297 "write": true, 00:14:16.297 "unmap": true, 00:14:16.297 "write_zeroes": true, 00:14:16.297 "flush": true, 00:14:16.297 "reset": true, 00:14:16.297 "compare": false, 00:14:16.297 "compare_and_write": false, 00:14:16.297 "abort": true, 00:14:16.297 "nvme_admin": false, 00:14:16.297 "nvme_io": false 00:14:16.297 }, 00:14:16.297 "memory_domains": [ 00:14:16.297 { 00:14:16.297 "dma_device_id": "system", 00:14:16.297 "dma_device_type": 1 00:14:16.297 }, 00:14:16.297 { 00:14:16.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.297 "dma_device_type": 2 00:14:16.297 } 00:14:16.297 ], 00:14:16.297 "driver_specific": {} 00:14:16.297 }' 00:14:16.297 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.297 11:56:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.297 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.297 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.297 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.297 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.297 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.555 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.555 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.555 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.555 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.555 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.555 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:16.814 [2024-07-21 11:56:15.543716] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.814 [2024-07-21 11:56:15.543754] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.814 [2024-07-21 11:56:15.543884] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.814 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.072 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.072 "name": "Existed_Raid", 00:14:17.072 "uuid": "0a0fffbd-4131-4b7e-8cbf-14f18fe622ac", 00:14:17.072 "strip_size_kb": 64, 00:14:17.072 "state": "offline", 00:14:17.072 "raid_level": "raid0", 00:14:17.072 "superblock": true, 00:14:17.072 "num_base_bdevs": 2, 00:14:17.072 "num_base_bdevs_discovered": 1, 00:14:17.072 "num_base_bdevs_operational": 1, 00:14:17.072 "base_bdevs_list": [ 00:14:17.072 { 00:14:17.072 "name": null, 00:14:17.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.072 "is_configured": false, 00:14:17.072 "data_offset": 2048, 00:14:17.072 "data_size": 63488 00:14:17.072 }, 00:14:17.072 { 00:14:17.072 "name": "BaseBdev2", 00:14:17.072 "uuid": "80214396-5bec-4238-8950-c521426ba087", 00:14:17.072 "is_configured": true, 00:14:17.072 "data_offset": 2048, 00:14:17.072 "data_size": 63488 00:14:17.072 } 00:14:17.072 ] 00:14:17.072 }' 00:14:17.072 11:56:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.072 11:56:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.638 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:17.638 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:17.638 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:17.638 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.896 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:17.896 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:17.896 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:18.155 [2024-07-21 11:56:16.918381] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:18.155 [2024-07-21 11:56:16.918830] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:14:18.155 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:18.155 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:18.155 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.155 11:56:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 131559 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 131559 ']' 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 131559 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131559 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131559' 00:14:18.413 killing process with pid 131559 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 131559 00:14:18.413 [2024-07-21 11:56:17.238527] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:18.413 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 
-- # wait 131559 00:14:18.413 [2024-07-21 11:56:17.238794] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.672 11:56:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:18.672 00:14:18.672 real 0m10.662s 00:14:18.672 user 0m19.778s 00:14:18.672 sys 0m1.188s 00:14:18.672 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:18.672 11:56:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.672 ************************************ 00:14:18.672 END TEST raid_state_function_test_sb 00:14:18.672 ************************************ 00:14:18.672 11:56:17 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:18.672 11:56:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:14:18.672 11:56:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:18.672 11:56:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.672 ************************************ 00:14:18.672 START TEST raid_superblock_test 00:14:18.672 ************************************ 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 2 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:18.672 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=131935 00:14:18.930 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 131935 /var/tmp/spdk-raid.sock 00:14:18.930 11:56:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:18.930 11:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 131935 ']' 00:14:18.930 11:56:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:18.930 11:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:18.930 11:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:18.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:18.930 11:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:18.930 11:56:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.930 [2024-07-21 11:56:17.587481] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:18.930 [2024-07-21 11:56:17.587993] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131935 ] 00:14:18.930 [2024-07-21 11:56:17.744492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.187 [2024-07-21 11:56:17.826240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.187 [2024-07-21 11:56:17.880455] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:19.754 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:20.012 malloc1 00:14:20.012 11:56:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:20.270 [2024-07-21 11:56:19.010883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:20.270 [2024-07-21 11:56:19.011339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.270 [2024-07-21 11:56:19.011553] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:14:20.270 [2024-07-21 11:56:19.011712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.270 [2024-07-21 11:56:19.014474] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.270 [2024-07-21 11:56:19.014725] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:20.270 pt1 00:14:20.270 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:20.270 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:20.270 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:20.270 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:20.270 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:20.270 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:20.270 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:20.270 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:20.270 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:20.527 malloc2 00:14:20.527 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:20.785 [2024-07-21 11:56:19.502405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:20.785 [2024-07-21 11:56:19.502724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:20.785 [2024-07-21 11:56:19.502836] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:20.785 [2024-07-21 11:56:19.503087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:20.785 [2024-07-21 11:56:19.505688] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:20.785 [2024-07-21 11:56:19.505886] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:20.785 pt2 00:14:20.785 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:20.785 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:20.785 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:21.043 [2024-07-21 11:56:19.778750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:21.043 [2024-07-21 11:56:19.781207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:21.043 [2024-07-21 11:56:19.781595] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:14:21.043 [2024-07-21 11:56:19.781726] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:21.043 [2024-07-21 11:56:19.781946] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:21.043 [2024-07-21 11:56:19.782452] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:14:21.043 [2024-07-21 11:56:19.782641] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x616000007b80 00:14:21.043 [2024-07-21 11:56:19.782967] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.043 11:56:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.301 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.301 "name": "raid_bdev1", 00:14:21.301 "uuid": "94806acb-ca41-4c2d-ad85-61f44b502bd7", 00:14:21.301 "strip_size_kb": 64, 00:14:21.301 "state": "online", 00:14:21.301 "raid_level": "raid0", 00:14:21.301 "superblock": true, 00:14:21.301 "num_base_bdevs": 2, 00:14:21.301 "num_base_bdevs_discovered": 2, 00:14:21.301 "num_base_bdevs_operational": 2, 00:14:21.301 "base_bdevs_list": [ 00:14:21.301 { 00:14:21.301 "name": "pt1", 00:14:21.301 "uuid": "b80c0d85-52c3-5752-92b9-3369d6ea4c69", 00:14:21.301 "is_configured": true, 00:14:21.301 "data_offset": 2048, 00:14:21.301 "data_size": 63488 00:14:21.301 }, 00:14:21.301 { 00:14:21.301 "name": "pt2", 00:14:21.301 "uuid": "5e451bcb-9765-5d44-9175-cc73016a06a2", 00:14:21.301 "is_configured": true, 00:14:21.301 "data_offset": 2048, 00:14:21.301 "data_size": 63488 00:14:21.301 } 00:14:21.301 ] 00:14:21.301 }' 00:14:21.301 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.301 11:56:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.865 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:14:21.865 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:21.865 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:21.865 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:21.865 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:21.865 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:21.865 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:14:21.865 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:22.123 [2024-07-21 11:56:20.899556] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.123 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:22.123 "name": "raid_bdev1", 00:14:22.123 "aliases": [ 00:14:22.123 "94806acb-ca41-4c2d-ad85-61f44b502bd7" 00:14:22.123 ], 00:14:22.123 "product_name": "Raid Volume", 00:14:22.123 "block_size": 512, 00:14:22.123 "num_blocks": 126976, 00:14:22.123 "uuid": "94806acb-ca41-4c2d-ad85-61f44b502bd7", 00:14:22.123 "assigned_rate_limits": { 00:14:22.123 "rw_ios_per_sec": 0, 00:14:22.123 "rw_mbytes_per_sec": 0, 00:14:22.123 "r_mbytes_per_sec": 0, 00:14:22.123 "w_mbytes_per_sec": 0 00:14:22.123 }, 00:14:22.123 "claimed": false, 00:14:22.123 "zoned": false, 00:14:22.123 "supported_io_types": { 00:14:22.123 "read": true, 00:14:22.123 "write": true, 00:14:22.123 "unmap": true, 00:14:22.123 "write_zeroes": true, 00:14:22.123 "flush": true, 00:14:22.123 "reset": true, 00:14:22.123 "compare": false, 00:14:22.123 "compare_and_write": false, 00:14:22.123 "abort": false, 00:14:22.123 "nvme_admin": false, 00:14:22.123 "nvme_io": false 00:14:22.123 }, 00:14:22.123 "memory_domains": [ 00:14:22.123 { 00:14:22.123 "dma_device_id": "system", 00:14:22.123 "dma_device_type": 1 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.123 "dma_device_type": 2 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "dma_device_id": "system", 00:14:22.123 "dma_device_type": 1 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.123 "dma_device_type": 2 00:14:22.123 } 00:14:22.123 ], 00:14:22.123 "driver_specific": { 00:14:22.123 "raid": { 00:14:22.123 "uuid": "94806acb-ca41-4c2d-ad85-61f44b502bd7", 00:14:22.123 "strip_size_kb": 64, 00:14:22.123 "state": "online", 00:14:22.123 "raid_level": "raid0", 00:14:22.123 "superblock": true, 00:14:22.123 "num_base_bdevs": 2, 00:14:22.123 "num_base_bdevs_discovered": 2, 00:14:22.123 "num_base_bdevs_operational": 2, 00:14:22.123 "base_bdevs_list": [ 00:14:22.123 { 00:14:22.123 "name": "pt1", 00:14:22.123 "uuid": "b80c0d85-52c3-5752-92b9-3369d6ea4c69", 00:14:22.123 "is_configured": true, 00:14:22.123 "data_offset": 2048, 00:14:22.123 "data_size": 63488 00:14:22.123 }, 00:14:22.123 { 00:14:22.123 "name": "pt2", 00:14:22.123 "uuid": "5e451bcb-9765-5d44-9175-cc73016a06a2", 00:14:22.123 "is_configured": true, 00:14:22.123 "data_offset": 2048, 00:14:22.123 "data_size": 63488 00:14:22.123 } 00:14:22.123 ] 00:14:22.123 } 00:14:22.123 } 00:14:22.123 }' 00:14:22.123 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:22.123 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:22.123 pt2' 00:14:22.123 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:22.123 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:22.123 11:56:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:22.382 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:22.382 "name": "pt1", 00:14:22.382 "aliases": [ 00:14:22.382 
"b80c0d85-52c3-5752-92b9-3369d6ea4c69" 00:14:22.382 ], 00:14:22.382 "product_name": "passthru", 00:14:22.382 "block_size": 512, 00:14:22.382 "num_blocks": 65536, 00:14:22.382 "uuid": "b80c0d85-52c3-5752-92b9-3369d6ea4c69", 00:14:22.382 "assigned_rate_limits": { 00:14:22.382 "rw_ios_per_sec": 0, 00:14:22.382 "rw_mbytes_per_sec": 0, 00:14:22.382 "r_mbytes_per_sec": 0, 00:14:22.382 "w_mbytes_per_sec": 0 00:14:22.382 }, 00:14:22.382 "claimed": true, 00:14:22.382 "claim_type": "exclusive_write", 00:14:22.382 "zoned": false, 00:14:22.382 "supported_io_types": { 00:14:22.382 "read": true, 00:14:22.382 "write": true, 00:14:22.382 "unmap": true, 00:14:22.382 "write_zeroes": true, 00:14:22.382 "flush": true, 00:14:22.382 "reset": true, 00:14:22.382 "compare": false, 00:14:22.382 "compare_and_write": false, 00:14:22.382 "abort": true, 00:14:22.382 "nvme_admin": false, 00:14:22.382 "nvme_io": false 00:14:22.382 }, 00:14:22.382 "memory_domains": [ 00:14:22.382 { 00:14:22.382 "dma_device_id": "system", 00:14:22.382 "dma_device_type": 1 00:14:22.382 }, 00:14:22.382 { 00:14:22.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.382 "dma_device_type": 2 00:14:22.382 } 00:14:22.382 ], 00:14:22.382 "driver_specific": { 00:14:22.382 "passthru": { 00:14:22.382 "name": "pt1", 00:14:22.382 "base_bdev_name": "malloc1" 00:14:22.382 } 00:14:22.382 } 00:14:22.382 }' 00:14:22.382 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.640 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:22.640 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:22.640 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.640 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:22.640 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:22.640 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.640 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:22.899 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:22.899 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.899 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:22.899 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:22.899 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:22.899 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:22.899 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:23.157 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:23.157 "name": "pt2", 00:14:23.157 "aliases": [ 00:14:23.157 "5e451bcb-9765-5d44-9175-cc73016a06a2" 00:14:23.157 ], 00:14:23.157 "product_name": "passthru", 00:14:23.157 "block_size": 512, 00:14:23.157 "num_blocks": 65536, 00:14:23.157 "uuid": "5e451bcb-9765-5d44-9175-cc73016a06a2", 00:14:23.157 "assigned_rate_limits": { 00:14:23.157 "rw_ios_per_sec": 0, 00:14:23.157 "rw_mbytes_per_sec": 0, 00:14:23.157 "r_mbytes_per_sec": 0, 00:14:23.157 "w_mbytes_per_sec": 0 00:14:23.157 }, 00:14:23.157 "claimed": true, 
00:14:23.157 "claim_type": "exclusive_write", 00:14:23.157 "zoned": false, 00:14:23.157 "supported_io_types": { 00:14:23.157 "read": true, 00:14:23.157 "write": true, 00:14:23.157 "unmap": true, 00:14:23.157 "write_zeroes": true, 00:14:23.157 "flush": true, 00:14:23.157 "reset": true, 00:14:23.157 "compare": false, 00:14:23.157 "compare_and_write": false, 00:14:23.157 "abort": true, 00:14:23.157 "nvme_admin": false, 00:14:23.157 "nvme_io": false 00:14:23.157 }, 00:14:23.157 "memory_domains": [ 00:14:23.158 { 00:14:23.158 "dma_device_id": "system", 00:14:23.158 "dma_device_type": 1 00:14:23.158 }, 00:14:23.158 { 00:14:23.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.158 "dma_device_type": 2 00:14:23.158 } 00:14:23.158 ], 00:14:23.158 "driver_specific": { 00:14:23.158 "passthru": { 00:14:23.158 "name": "pt2", 00:14:23.158 "base_bdev_name": "malloc2" 00:14:23.158 } 00:14:23.158 } 00:14:23.158 }' 00:14:23.158 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.158 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:23.158 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:23.158 11:56:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:14:23.416 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:23.683 [2024-07-21 11:56:22.531916] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.008 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=94806acb-ca41-4c2d-ad85-61f44b502bd7 00:14:24.008 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 94806acb-ca41-4c2d-ad85-61f44b502bd7 ']' 00:14:24.008 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:24.008 [2024-07-21 11:56:22.831730] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:24.008 [2024-07-21 11:56:22.831998] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:24.008 [2024-07-21 11:56:22.832254] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:24.008 [2024-07-21 11:56:22.832428] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:24.008 [2024-07-21 11:56:22.832548] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:14:24.299 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.299 11:56:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:14:24.299 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:14:24.299 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:14:24.299 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:24.299 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:14:24.564 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:14:24.564 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:24.822 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:24.822 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:25.080 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:14:25.080 11:56:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:25.080 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:14:25.080 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:25.081 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.081 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.081 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.081 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.081 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.081 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.081 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:25.081 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:25.081 11:56:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:25.339 [2024-07-21 11:56:24.055955] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is 
claimed 00:14:25.339 [2024-07-21 11:56:24.058250] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:25.339 [2024-07-21 11:56:24.058480] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:25.339 [2024-07-21 11:56:24.058751] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:25.339 [2024-07-21 11:56:24.058915] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:25.339 [2024-07-21 11:56:24.059062] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:14:25.339 request: 00:14:25.339 { 00:14:25.339 "name": "raid_bdev1", 00:14:25.339 "raid_level": "raid0", 00:14:25.339 "base_bdevs": [ 00:14:25.339 "malloc1", 00:14:25.339 "malloc2" 00:14:25.339 ], 00:14:25.339 "superblock": false, 00:14:25.339 "strip_size_kb": 64, 00:14:25.339 "method": "bdev_raid_create", 00:14:25.339 "req_id": 1 00:14:25.339 } 00:14:25.339 Got JSON-RPC error response 00:14:25.339 response: 00:14:25.339 { 00:14:25.339 "code": -17, 00:14:25.339 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:25.339 } 00:14:25.339 11:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:14:25.339 11:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.339 11:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.339 11:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:25.339 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.339 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:14:25.597 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:14:25.597 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:14:25.597 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:25.855 [2024-07-21 11:56:24.489736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:25.855 [2024-07-21 11:56:24.490082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.856 [2024-07-21 11:56:24.490277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:25.856 [2024-07-21 11:56:24.490404] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.856 [2024-07-21 11:56:24.492982] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.856 [2024-07-21 11:56:24.493180] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:25.856 [2024-07-21 11:56:24.493398] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:25.856 [2024-07-21 11:56:24.493573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:25.856 pt1 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:25.856 11:56:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.856 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.114 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.114 "name": "raid_bdev1", 00:14:26.114 "uuid": "94806acb-ca41-4c2d-ad85-61f44b502bd7", 00:14:26.114 "strip_size_kb": 64, 00:14:26.114 "state": "configuring", 00:14:26.114 "raid_level": "raid0", 00:14:26.114 "superblock": true, 00:14:26.114 "num_base_bdevs": 2, 00:14:26.114 "num_base_bdevs_discovered": 1, 00:14:26.114 "num_base_bdevs_operational": 2, 00:14:26.114 "base_bdevs_list": [ 00:14:26.114 { 00:14:26.114 "name": "pt1", 00:14:26.114 "uuid": "b80c0d85-52c3-5752-92b9-3369d6ea4c69", 00:14:26.114 "is_configured": true, 00:14:26.114 "data_offset": 2048, 00:14:26.114 "data_size": 63488 00:14:26.114 }, 00:14:26.114 { 00:14:26.114 "name": null, 00:14:26.114 "uuid": "5e451bcb-9765-5d44-9175-cc73016a06a2", 00:14:26.114 "is_configured": false, 00:14:26.114 "data_offset": 2048, 00:14:26.114 "data_size": 63488 00:14:26.114 } 00:14:26.114 ] 00:14:26.114 }' 00:14:26.114 11:56:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.114 11:56:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.680 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:14:26.680 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:14:26.680 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:26.680 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:26.939 [2024-07-21 11:56:25.550130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:26.939 [2024-07-21 11:56:25.550470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:26.939 [2024-07-21 11:56:25.550669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:26.939 [2024-07-21 11:56:25.550805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:26.939 [2024-07-21 11:56:25.551355] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:14:26.939 [2024-07-21 11:56:25.551545] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.939 [2024-07-21 11:56:25.551746] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:26.939 [2024-07-21 11:56:25.551925] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.939 [2024-07-21 11:56:25.552231] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:14:26.939 [2024-07-21 11:56:25.552353] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:26.939 [2024-07-21 11:56:25.552466] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:14:26.939 [2024-07-21 11:56:25.552836] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:14:26.939 [2024-07-21 11:56:25.552959] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:14:26.939 [2024-07-21 11:56:25.553167] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.939 pt2 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.939 "name": "raid_bdev1", 00:14:26.939 "uuid": "94806acb-ca41-4c2d-ad85-61f44b502bd7", 00:14:26.939 "strip_size_kb": 64, 00:14:26.939 "state": "online", 00:14:26.939 "raid_level": "raid0", 00:14:26.939 "superblock": true, 00:14:26.939 "num_base_bdevs": 2, 00:14:26.939 "num_base_bdevs_discovered": 2, 00:14:26.939 "num_base_bdevs_operational": 2, 00:14:26.939 "base_bdevs_list": [ 00:14:26.939 { 00:14:26.939 "name": "pt1", 00:14:26.939 "uuid": "b80c0d85-52c3-5752-92b9-3369d6ea4c69", 00:14:26.939 "is_configured": true, 00:14:26.939 "data_offset": 2048, 00:14:26.939 "data_size": 63488 00:14:26.939 }, 00:14:26.939 { 00:14:26.939 "name": "pt2", 00:14:26.939 
"uuid": "5e451bcb-9765-5d44-9175-cc73016a06a2", 00:14:26.939 "is_configured": true, 00:14:26.939 "data_offset": 2048, 00:14:26.939 "data_size": 63488 00:14:26.939 } 00:14:26.939 ] 00:14:26.939 }' 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.939 11:56:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.873 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:14:27.874 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:14:27.874 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:27.874 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:27.874 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:27.874 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:27.874 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:27.874 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:28.132 [2024-07-21 11:56:26.750658] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:28.132 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:28.132 "name": "raid_bdev1", 00:14:28.132 "aliases": [ 00:14:28.132 "94806acb-ca41-4c2d-ad85-61f44b502bd7" 00:14:28.132 ], 00:14:28.132 "product_name": "Raid Volume", 00:14:28.132 "block_size": 512, 00:14:28.132 "num_blocks": 126976, 00:14:28.132 "uuid": "94806acb-ca41-4c2d-ad85-61f44b502bd7", 00:14:28.132 "assigned_rate_limits": { 00:14:28.132 "rw_ios_per_sec": 0, 00:14:28.132 "rw_mbytes_per_sec": 0, 00:14:28.132 "r_mbytes_per_sec": 0, 00:14:28.132 "w_mbytes_per_sec": 0 00:14:28.132 }, 00:14:28.132 "claimed": false, 00:14:28.132 "zoned": false, 00:14:28.132 "supported_io_types": { 00:14:28.132 "read": true, 00:14:28.132 "write": true, 00:14:28.132 "unmap": true, 00:14:28.132 "write_zeroes": true, 00:14:28.132 "flush": true, 00:14:28.132 "reset": true, 00:14:28.132 "compare": false, 00:14:28.132 "compare_and_write": false, 00:14:28.132 "abort": false, 00:14:28.132 "nvme_admin": false, 00:14:28.132 "nvme_io": false 00:14:28.132 }, 00:14:28.132 "memory_domains": [ 00:14:28.132 { 00:14:28.132 "dma_device_id": "system", 00:14:28.132 "dma_device_type": 1 00:14:28.132 }, 00:14:28.132 { 00:14:28.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.132 "dma_device_type": 2 00:14:28.132 }, 00:14:28.132 { 00:14:28.132 "dma_device_id": "system", 00:14:28.132 "dma_device_type": 1 00:14:28.132 }, 00:14:28.132 { 00:14:28.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.132 "dma_device_type": 2 00:14:28.132 } 00:14:28.132 ], 00:14:28.132 "driver_specific": { 00:14:28.132 "raid": { 00:14:28.132 "uuid": "94806acb-ca41-4c2d-ad85-61f44b502bd7", 00:14:28.132 "strip_size_kb": 64, 00:14:28.132 "state": "online", 00:14:28.132 "raid_level": "raid0", 00:14:28.132 "superblock": true, 00:14:28.132 "num_base_bdevs": 2, 00:14:28.132 "num_base_bdevs_discovered": 2, 00:14:28.132 "num_base_bdevs_operational": 2, 00:14:28.132 "base_bdevs_list": [ 00:14:28.132 { 00:14:28.132 "name": "pt1", 00:14:28.132 "uuid": "b80c0d85-52c3-5752-92b9-3369d6ea4c69", 00:14:28.132 "is_configured": true, 00:14:28.132 
"data_offset": 2048, 00:14:28.132 "data_size": 63488 00:14:28.132 }, 00:14:28.132 { 00:14:28.132 "name": "pt2", 00:14:28.132 "uuid": "5e451bcb-9765-5d44-9175-cc73016a06a2", 00:14:28.132 "is_configured": true, 00:14:28.132 "data_offset": 2048, 00:14:28.132 "data_size": 63488 00:14:28.132 } 00:14:28.132 ] 00:14:28.132 } 00:14:28.132 } 00:14:28.132 }' 00:14:28.132 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:28.132 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:14:28.132 pt2' 00:14:28.133 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:28.133 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:14:28.133 11:56:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:28.390 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:28.391 "name": "pt1", 00:14:28.391 "aliases": [ 00:14:28.391 "b80c0d85-52c3-5752-92b9-3369d6ea4c69" 00:14:28.391 ], 00:14:28.391 "product_name": "passthru", 00:14:28.391 "block_size": 512, 00:14:28.391 "num_blocks": 65536, 00:14:28.391 "uuid": "b80c0d85-52c3-5752-92b9-3369d6ea4c69", 00:14:28.391 "assigned_rate_limits": { 00:14:28.391 "rw_ios_per_sec": 0, 00:14:28.391 "rw_mbytes_per_sec": 0, 00:14:28.391 "r_mbytes_per_sec": 0, 00:14:28.391 "w_mbytes_per_sec": 0 00:14:28.391 }, 00:14:28.391 "claimed": true, 00:14:28.391 "claim_type": "exclusive_write", 00:14:28.391 "zoned": false, 00:14:28.391 "supported_io_types": { 00:14:28.391 "read": true, 00:14:28.391 "write": true, 00:14:28.391 "unmap": true, 00:14:28.391 "write_zeroes": true, 00:14:28.391 "flush": true, 00:14:28.391 "reset": true, 00:14:28.391 "compare": false, 00:14:28.391 "compare_and_write": false, 00:14:28.391 "abort": true, 00:14:28.391 "nvme_admin": false, 00:14:28.391 "nvme_io": false 00:14:28.391 }, 00:14:28.391 "memory_domains": [ 00:14:28.391 { 00:14:28.391 "dma_device_id": "system", 00:14:28.391 "dma_device_type": 1 00:14:28.391 }, 00:14:28.391 { 00:14:28.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.391 "dma_device_type": 2 00:14:28.391 } 00:14:28.391 ], 00:14:28.391 "driver_specific": { 00:14:28.391 "passthru": { 00:14:28.391 "name": "pt1", 00:14:28.391 "base_bdev_name": "malloc1" 00:14:28.391 } 00:14:28.391 } 00:14:28.391 }' 00:14:28.391 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.391 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:28.391 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:28.391 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.391 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:14:28.649 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:28.908 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:28.908 "name": "pt2", 00:14:28.908 "aliases": [ 00:14:28.908 "5e451bcb-9765-5d44-9175-cc73016a06a2" 00:14:28.908 ], 00:14:28.908 "product_name": "passthru", 00:14:28.908 "block_size": 512, 00:14:28.908 "num_blocks": 65536, 00:14:28.908 "uuid": "5e451bcb-9765-5d44-9175-cc73016a06a2", 00:14:28.908 "assigned_rate_limits": { 00:14:28.908 "rw_ios_per_sec": 0, 00:14:28.908 "rw_mbytes_per_sec": 0, 00:14:28.908 "r_mbytes_per_sec": 0, 00:14:28.908 "w_mbytes_per_sec": 0 00:14:28.908 }, 00:14:28.908 "claimed": true, 00:14:28.908 "claim_type": "exclusive_write", 00:14:28.908 "zoned": false, 00:14:28.908 "supported_io_types": { 00:14:28.908 "read": true, 00:14:28.908 "write": true, 00:14:28.908 "unmap": true, 00:14:28.908 "write_zeroes": true, 00:14:28.908 "flush": true, 00:14:28.908 "reset": true, 00:14:28.908 "compare": false, 00:14:28.908 "compare_and_write": false, 00:14:28.908 "abort": true, 00:14:28.908 "nvme_admin": false, 00:14:28.908 "nvme_io": false 00:14:28.908 }, 00:14:28.908 "memory_domains": [ 00:14:28.908 { 00:14:28.908 "dma_device_id": "system", 00:14:28.908 "dma_device_type": 1 00:14:28.908 }, 00:14:28.908 { 00:14:28.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:28.908 "dma_device_type": 2 00:14:28.908 } 00:14:28.908 ], 00:14:28.908 "driver_specific": { 00:14:28.908 "passthru": { 00:14:28.908 "name": "pt2", 00:14:28.908 "base_bdev_name": "malloc2" 00:14:28.908 } 00:14:28.908 } 00:14:28.908 }' 00:14:28.908 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.166 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:29.166 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:29.166 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.166 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:29.166 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:29.166 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.166 11:56:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:29.425 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:29.425 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.425 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:29.425 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:29.425 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:29.425 11:56:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:14:29.683 [2024-07-21 11:56:28.367043] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 94806acb-ca41-4c2d-ad85-61f44b502bd7 '!=' 94806acb-ca41-4c2d-ad85-61f44b502bd7 ']' 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 131935 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 131935 ']' 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 131935 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 131935 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 131935' 00:14:29.683 killing process with pid 131935 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 131935 00:14:29.683 [2024-07-21 11:56:28.414461] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.683 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 131935 00:14:29.683 [2024-07-21 11:56:28.414771] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.683 [2024-07-21 11:56:28.414945] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.683 [2024-07-21 11:56:28.415078] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:14:29.683 [2024-07-21 11:56:28.438270] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:29.942 ************************************ 00:14:29.942 END TEST raid_superblock_test 00:14:29.942 ************************************ 00:14:29.942 11:56:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:14:29.942 00:14:29.942 real 0m11.149s 00:14:29.942 user 0m20.571s 00:14:29.942 sys 0m1.369s 00:14:29.942 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:29.942 11:56:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.942 11:56:28 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:14:29.943 11:56:28 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:29.943 11:56:28 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:29.943 11:56:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:29.943 ************************************ 00:14:29.943 START TEST raid_read_error_test 00:14:29.943 
************************************ 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 2 read 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ixgbRDbiAV 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=132298 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 132298 /var/tmp/spdk-raid.sock 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 132298 ']' 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:29.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:29.943 11:56:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.201 [2024-07-21 11:56:28.810826] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:30.201 [2024-07-21 11:56:28.811306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132298 ] 00:14:30.201 [2024-07-21 11:56:28.976448] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.460 [2024-07-21 11:56:29.069571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.460 [2024-07-21 11:56:29.125968] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.027 11:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:31.027 11:56:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:14:31.027 11:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:31.027 11:56:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:31.286 BaseBdev1_malloc 00:14:31.286 11:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:31.544 true 00:14:31.544 11:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:31.803 [2024-07-21 11:56:30.554268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:31.803 [2024-07-21 11:56:30.554763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.803 [2024-07-21 11:56:30.554984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:31.803 [2024-07-21 11:56:30.555166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.803 [2024-07-21 11:56:30.557957] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.803 [2024-07-21 11:56:30.558152] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:31.803 BaseBdev1 00:14:31.803 11:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:31.803 11:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:32.062 BaseBdev2_malloc 00:14:32.062 11:56:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:32.320 true 00:14:32.320 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:32.578 
[2024-07-21 11:56:31.273530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:32.578 [2024-07-21 11:56:31.273911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.578 [2024-07-21 11:56:31.274150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:32.578 [2024-07-21 11:56:31.274322] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.578 [2024-07-21 11:56:31.277121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.578 [2024-07-21 11:56:31.277318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:32.578 BaseBdev2 00:14:32.578 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:32.837 [2024-07-21 11:56:31.493808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.837 [2024-07-21 11:56:31.496518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.837 [2024-07-21 11:56:31.496947] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:14:32.837 [2024-07-21 11:56:31.497083] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:32.837 [2024-07-21 11:56:31.497312] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:32.837 [2024-07-21 11:56:31.497854] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:14:32.837 [2024-07-21 11:56:31.497989] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:14:32.837 [2024-07-21 11:56:31.498324] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.837 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.095 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.095 "name": "raid_bdev1", 
00:14:33.095 "uuid": "df735148-35df-4b61-935c-27fd6a8ef8e8", 00:14:33.095 "strip_size_kb": 64, 00:14:33.095 "state": "online", 00:14:33.095 "raid_level": "raid0", 00:14:33.095 "superblock": true, 00:14:33.095 "num_base_bdevs": 2, 00:14:33.095 "num_base_bdevs_discovered": 2, 00:14:33.095 "num_base_bdevs_operational": 2, 00:14:33.095 "base_bdevs_list": [ 00:14:33.095 { 00:14:33.095 "name": "BaseBdev1", 00:14:33.095 "uuid": "6694e4a9-9ec2-55ea-92f6-e42b501feb25", 00:14:33.095 "is_configured": true, 00:14:33.095 "data_offset": 2048, 00:14:33.095 "data_size": 63488 00:14:33.095 }, 00:14:33.095 { 00:14:33.095 "name": "BaseBdev2", 00:14:33.095 "uuid": "d7ce2d69-ddfb-5306-ac9a-fcecb647186c", 00:14:33.095 "is_configured": true, 00:14:33.095 "data_offset": 2048, 00:14:33.095 "data_size": 63488 00:14:33.095 } 00:14:33.095 ] 00:14:33.095 }' 00:14:33.095 11:56:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.095 11:56:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.662 11:56:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:33.662 11:56:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:33.662 [2024-07-21 11:56:32.502966] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:34.597 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:34.854 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:34.855 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.855 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.112 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.112 "name": "raid_bdev1", 00:14:35.112 "uuid": 
"df735148-35df-4b61-935c-27fd6a8ef8e8", 00:14:35.112 "strip_size_kb": 64, 00:14:35.112 "state": "online", 00:14:35.112 "raid_level": "raid0", 00:14:35.112 "superblock": true, 00:14:35.112 "num_base_bdevs": 2, 00:14:35.112 "num_base_bdevs_discovered": 2, 00:14:35.112 "num_base_bdevs_operational": 2, 00:14:35.112 "base_bdevs_list": [ 00:14:35.112 { 00:14:35.112 "name": "BaseBdev1", 00:14:35.112 "uuid": "6694e4a9-9ec2-55ea-92f6-e42b501feb25", 00:14:35.112 "is_configured": true, 00:14:35.112 "data_offset": 2048, 00:14:35.112 "data_size": 63488 00:14:35.112 }, 00:14:35.112 { 00:14:35.112 "name": "BaseBdev2", 00:14:35.112 "uuid": "d7ce2d69-ddfb-5306-ac9a-fcecb647186c", 00:14:35.112 "is_configured": true, 00:14:35.112 "data_offset": 2048, 00:14:35.112 "data_size": 63488 00:14:35.112 } 00:14:35.112 ] 00:14:35.112 }' 00:14:35.112 11:56:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.112 11:56:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:36.045 [2024-07-21 11:56:34.805461] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.045 [2024-07-21 11:56:34.805779] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:36.045 [2024-07-21 11:56:34.808806] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:36.045 [2024-07-21 11:56:34.809004] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.045 [2024-07-21 11:56:34.809080] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:36.045 [2024-07-21 11:56:34.809188] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:14:36.045 0 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 132298 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 132298 ']' 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 132298 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 132298 00:14:36.045 killing process with pid 132298 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 132298' 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 132298 00:14:36.045 11:56:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 132298 00:14:36.045 [2024-07-21 11:56:34.855278] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:36.045 [2024-07-21 11:56:34.873142] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # 
grep -v Job /raidtest/tmp.ixgbRDbiAV 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:14:36.302 00:14:36.302 real 0m6.401s 00:14:36.302 user 0m10.331s 00:14:36.302 sys 0m0.775s 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:36.302 11:56:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.302 ************************************ 00:14:36.302 END TEST raid_read_error_test 00:14:36.302 ************************************ 00:14:36.560 11:56:35 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:14:36.560 11:56:35 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:36.560 11:56:35 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:36.560 11:56:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:36.560 ************************************ 00:14:36.560 START TEST raid_write_error_test 00:14:36.560 ************************************ 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 2 write 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.fIug4MmryH 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=132481 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 132481 /var/tmp/spdk-raid.sock 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 132481 ']' 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:36.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:36.560 11:56:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.560 [2024-07-21 11:56:35.274394] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
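(For orientation: the stack this write-error test is about to assemble over the bdevperf RPC socket can be summarized as below. This is a minimal sketch pieced together from the rpc.py calls visible in the trace that follows; the socket path, bdev names and sizes are simply the ones this run uses, and the sketch is not the bdev_raid.sh code itself.)

  #!/usr/bin/env bash
  # Build the bdev stack bdevperf exercises: malloc -> error -> passthru -> raid0.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2; do
      # 32 MiB malloc backing device with 512-byte blocks (65536 blocks, as in the JSON dumps)
      $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc
      # error bdev wraps the malloc and is exposed as EE_BaseBdev${i}_malloc
      $rpc bdev_error_create BaseBdev${i}_malloc
      # passthru bdev gives the error bdev the plain name the raid expects
      $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
  done
  # two-leg raid0 with 64 KiB strips and a superblock (-s), as created below
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
  # later in the trace: inject write failures on the first leg, then run the workload
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests
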
00:14:36.560 [2024-07-21 11:56:35.275001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132481 ] 00:14:36.818 [2024-07-21 11:56:35.442472] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.818 [2024-07-21 11:56:35.534557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.818 [2024-07-21 11:56:35.592011] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:37.751 11:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:37.751 11:56:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:14:37.751 11:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:37.751 11:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:37.751 BaseBdev1_malloc 00:14:37.751 11:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:38.008 true 00:14:38.008 11:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:38.266 [2024-07-21 11:56:36.947474] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:38.266 [2024-07-21 11:56:36.947833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.266 [2024-07-21 11:56:36.948021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:38.266 [2024-07-21 11:56:36.948192] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.266 [2024-07-21 11:56:36.951168] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.266 [2024-07-21 11:56:36.951360] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:38.266 BaseBdev1 00:14:38.266 11:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:38.266 11:56:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:38.524 BaseBdev2_malloc 00:14:38.524 11:56:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:38.783 true 00:14:38.783 11:56:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:39.041 [2024-07-21 11:56:37.714691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:39.041 [2024-07-21 11:56:37.715207] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.041 [2024-07-21 11:56:37.715472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:14:39.041 [2024-07-21 11:56:37.715635] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.041 [2024-07-21 11:56:37.718383] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.041 [2024-07-21 11:56:37.718604] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:39.041 BaseBdev2 00:14:39.041 11:56:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:14:39.298 [2024-07-21 11:56:37.983127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:39.298 [2024-07-21 11:56:37.985769] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.298 [2024-07-21 11:56:37.986153] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:14:39.298 [2024-07-21 11:56:37.986309] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:39.298 [2024-07-21 11:56:37.986609] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:14:39.298 [2024-07-21 11:56:37.987237] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:14:39.298 [2024-07-21 11:56:37.987372] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:14:39.298 [2024-07-21 11:56:37.987702] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.298 11:56:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.298 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.557 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.557 "name": "raid_bdev1", 00:14:39.557 "uuid": "5e70585b-ea89-4739-830f-86fde86abab2", 00:14:39.557 "strip_size_kb": 64, 00:14:39.557 "state": "online", 00:14:39.557 "raid_level": "raid0", 00:14:39.557 "superblock": true, 00:14:39.557 "num_base_bdevs": 2, 00:14:39.557 "num_base_bdevs_discovered": 2, 00:14:39.557 "num_base_bdevs_operational": 2, 00:14:39.557 "base_bdevs_list": [ 00:14:39.557 { 00:14:39.557 "name": 
"BaseBdev1", 00:14:39.557 "uuid": "fd34e314-9ddb-5156-9808-cf5a4f44a706", 00:14:39.557 "is_configured": true, 00:14:39.557 "data_offset": 2048, 00:14:39.557 "data_size": 63488 00:14:39.557 }, 00:14:39.557 { 00:14:39.557 "name": "BaseBdev2", 00:14:39.557 "uuid": "8a8d7ad2-a7bd-5760-9d37-d98652381ccc", 00:14:39.557 "is_configured": true, 00:14:39.557 "data_offset": 2048, 00:14:39.557 "data_size": 63488 00:14:39.557 } 00:14:39.557 ] 00:14:39.557 }' 00:14:39.557 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.557 11:56:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.123 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:40.123 11:56:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:40.123 [2024-07-21 11:56:38.972326] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:41.057 11:56:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.316 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.882 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.882 "name": "raid_bdev1", 00:14:41.882 "uuid": "5e70585b-ea89-4739-830f-86fde86abab2", 00:14:41.882 "strip_size_kb": 64, 00:14:41.882 "state": "online", 00:14:41.882 "raid_level": "raid0", 00:14:41.882 "superblock": true, 00:14:41.882 "num_base_bdevs": 2, 00:14:41.882 "num_base_bdevs_discovered": 2, 00:14:41.882 "num_base_bdevs_operational": 2, 00:14:41.882 "base_bdevs_list": [ 00:14:41.882 { 00:14:41.882 "name": 
"BaseBdev1", 00:14:41.882 "uuid": "fd34e314-9ddb-5156-9808-cf5a4f44a706", 00:14:41.882 "is_configured": true, 00:14:41.882 "data_offset": 2048, 00:14:41.882 "data_size": 63488 00:14:41.882 }, 00:14:41.882 { 00:14:41.882 "name": "BaseBdev2", 00:14:41.882 "uuid": "8a8d7ad2-a7bd-5760-9d37-d98652381ccc", 00:14:41.882 "is_configured": true, 00:14:41.882 "data_offset": 2048, 00:14:41.882 "data_size": 63488 00:14:41.882 } 00:14:41.882 ] 00:14:41.882 }' 00:14:41.882 11:56:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.882 11:56:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:42.504 [2024-07-21 11:56:41.314458] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.504 [2024-07-21 11:56:41.314858] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.504 [2024-07-21 11:56:41.317675] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.504 [2024-07-21 11:56:41.317867] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.504 [2024-07-21 11:56:41.317942] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.504 [2024-07-21 11:56:41.318137] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:14:42.504 0 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 132481 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 132481 ']' 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 132481 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 132481 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 132481' 00:14:42.504 killing process with pid 132481 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 132481 00:14:42.504 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 132481 00:14:42.504 [2024-07-21 11:56:41.358268] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:42.762 [2024-07-21 11:56:41.375089] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:43.020 11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.fIug4MmryH 00:14:43.020 11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:43.020 11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:43.020 11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:14:43.020 
11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:43.020 11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:43.020 11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:43.020 11:56:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:14:43.020 00:14:43.020 real 0m6.438s 00:14:43.020 user 0m10.354s 00:14:43.020 sys 0m0.816s 00:14:43.020 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:43.020 11:56:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.020 ************************************ 00:14:43.020 END TEST raid_write_error_test 00:14:43.020 ************************************ 00:14:43.020 11:56:41 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:43.020 11:56:41 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:43.020 11:56:41 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:43.020 11:56:41 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:43.020 11:56:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:43.020 ************************************ 00:14:43.020 START TEST raid_state_function_test 00:14:43.020 ************************************ 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 false 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 
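(The pass/fail decision the write-error test reached just above comes down to one pipeline over the bdevperf log plus a redundancy check. A hedged reconstruction of that check follows; the temp-file name and the 0.43 figure are copied from this run, the script treats the sixth field of the raid_bdev1 summary line as the failure rate, and the exit handling here is illustrative rather than the script's own:)

  # Extract failed I/O per second for raid_bdev1 from the bdevperf log.
  bdevperf_log=/raidtest/tmp.fIug4MmryH     # mktemp result from this run
  fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
  # raid0 has no redundancy (has_redundancy returned 1 above), so the injected
  # write failures must surface to the application: expect a non-zero rate.
  [[ $fail_per_s != "0.00" ]] || exit 1     # this run measured 0.43
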
00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=132664 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 132664' 00:14:43.020 Process raid pid: 132664 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 132664 /var/tmp/spdk-raid.sock 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 132664 ']' 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:43.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:43.020 11:56:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.020 [2024-07-21 11:56:41.765789] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
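(Every verify_raid_bdev_state call in the state-function trace that follows uses the same pattern: dump all raid bdevs over RPC, pick the one under test with jq, and compare a handful of fields against the expected values. A rough sketch of that pattern is shown here, assuming jq for field extraction; the real helper in bdev_raid.sh parses the same JSON but not necessarily this way, and check_raid_state is a hypothetical name:)

  check_raid_state() {
      local name=$1 expected_state=$2 expected_level=$3 expected_strip=$4 expected_operational=$5
      local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
      local info
      # same query and filter as the trace: bdev_raid_get_bdevs all | jq select by name
      info=$($rpc bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
      [[ $(jq -r .state <<<"$info") == "$expected_state" ]] &&
      [[ $(jq -r .raid_level <<<"$info") == "$expected_level" ]] &&
      [[ $(jq -r .strip_size_kb <<<"$info") == "$expected_strip" ]] &&
      [[ $(jq -r .num_base_bdevs_operational <<<"$info") == "$expected_operational" ]]
  }
  # e.g. the first assertion below: Existed_Raid is still assembling its base bdevs
  check_raid_state Existed_Raid configuring concat 64 2
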
00:14:43.020 [2024-07-21 11:56:41.766359] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.278 [2024-07-21 11:56:41.938593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.278 [2024-07-21 11:56:42.030513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.278 [2024-07-21 11:56:42.083640] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:43.844 11:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:43.844 11:56:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:14:43.844 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:44.102 [2024-07-21 11:56:42.954575] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:44.102 [2024-07-21 11:56:42.954901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:44.102 [2024-07-21 11:56:42.955040] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:44.102 [2024-07-21 11:56:42.955111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.359 11:56:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.616 11:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:44.616 "name": "Existed_Raid", 00:14:44.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.616 "strip_size_kb": 64, 00:14:44.616 "state": "configuring", 00:14:44.616 "raid_level": "concat", 00:14:44.616 "superblock": false, 00:14:44.616 "num_base_bdevs": 2, 00:14:44.616 "num_base_bdevs_discovered": 0, 00:14:44.616 "num_base_bdevs_operational": 2, 00:14:44.616 "base_bdevs_list": [ 
00:14:44.616 { 00:14:44.616 "name": "BaseBdev1", 00:14:44.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.616 "is_configured": false, 00:14:44.616 "data_offset": 0, 00:14:44.616 "data_size": 0 00:14:44.616 }, 00:14:44.616 { 00:14:44.616 "name": "BaseBdev2", 00:14:44.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.616 "is_configured": false, 00:14:44.616 "data_offset": 0, 00:14:44.616 "data_size": 0 00:14:44.616 } 00:14:44.616 ] 00:14:44.616 }' 00:14:44.616 11:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:44.616 11:56:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.182 11:56:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:45.440 [2024-07-21 11:56:44.127008] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:45.440 [2024-07-21 11:56:44.127397] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:45.440 11:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:45.699 [2024-07-21 11:56:44.399028] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:45.699 [2024-07-21 11:56:44.399430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:45.699 [2024-07-21 11:56:44.399548] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:45.699 [2024-07-21 11:56:44.399623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:45.699 11:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.957 [2024-07-21 11:56:44.626143] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:45.957 BaseBdev1 00:14:45.957 11:56:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:45.957 11:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:45.957 11:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:45.957 11:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:45.957 11:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:45.957 11:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:45.957 11:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.215 11:56:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:46.473 [ 00:14:46.473 { 00:14:46.473 "name": "BaseBdev1", 00:14:46.473 "aliases": [ 00:14:46.473 "31f054d5-0967-4740-9fe0-472e1cb94acb" 00:14:46.473 ], 00:14:46.473 "product_name": "Malloc disk", 00:14:46.473 "block_size": 512, 00:14:46.473 
"num_blocks": 65536, 00:14:46.473 "uuid": "31f054d5-0967-4740-9fe0-472e1cb94acb", 00:14:46.473 "assigned_rate_limits": { 00:14:46.473 "rw_ios_per_sec": 0, 00:14:46.473 "rw_mbytes_per_sec": 0, 00:14:46.473 "r_mbytes_per_sec": 0, 00:14:46.473 "w_mbytes_per_sec": 0 00:14:46.473 }, 00:14:46.473 "claimed": true, 00:14:46.473 "claim_type": "exclusive_write", 00:14:46.473 "zoned": false, 00:14:46.473 "supported_io_types": { 00:14:46.473 "read": true, 00:14:46.473 "write": true, 00:14:46.473 "unmap": true, 00:14:46.473 "write_zeroes": true, 00:14:46.473 "flush": true, 00:14:46.473 "reset": true, 00:14:46.473 "compare": false, 00:14:46.473 "compare_and_write": false, 00:14:46.473 "abort": true, 00:14:46.473 "nvme_admin": false, 00:14:46.473 "nvme_io": false 00:14:46.473 }, 00:14:46.473 "memory_domains": [ 00:14:46.473 { 00:14:46.473 "dma_device_id": "system", 00:14:46.473 "dma_device_type": 1 00:14:46.473 }, 00:14:46.473 { 00:14:46.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.473 "dma_device_type": 2 00:14:46.473 } 00:14:46.473 ], 00:14:46.473 "driver_specific": {} 00:14:46.473 } 00:14:46.473 ] 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.473 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.731 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.731 "name": "Existed_Raid", 00:14:46.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.731 "strip_size_kb": 64, 00:14:46.731 "state": "configuring", 00:14:46.731 "raid_level": "concat", 00:14:46.731 "superblock": false, 00:14:46.731 "num_base_bdevs": 2, 00:14:46.731 "num_base_bdevs_discovered": 1, 00:14:46.731 "num_base_bdevs_operational": 2, 00:14:46.731 "base_bdevs_list": [ 00:14:46.731 { 00:14:46.731 "name": "BaseBdev1", 00:14:46.731 "uuid": "31f054d5-0967-4740-9fe0-472e1cb94acb", 00:14:46.731 "is_configured": true, 00:14:46.731 "data_offset": 0, 00:14:46.731 "data_size": 65536 00:14:46.731 }, 00:14:46.731 { 00:14:46.731 "name": "BaseBdev2", 00:14:46.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.731 
"is_configured": false, 00:14:46.731 "data_offset": 0, 00:14:46.731 "data_size": 0 00:14:46.731 } 00:14:46.731 ] 00:14:46.731 }' 00:14:46.731 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.731 11:56:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.298 11:56:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:47.557 [2024-07-21 11:56:46.246638] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:47.557 [2024-07-21 11:56:46.246978] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:47.557 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:47.816 [2024-07-21 11:56:46.522701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.816 [2024-07-21 11:56:46.525258] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.816 [2024-07-21 11:56:46.525475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.816 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.074 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.074 "name": "Existed_Raid", 00:14:48.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.074 "strip_size_kb": 64, 00:14:48.074 "state": "configuring", 00:14:48.074 "raid_level": "concat", 00:14:48.074 "superblock": false, 00:14:48.074 "num_base_bdevs": 2, 00:14:48.074 "num_base_bdevs_discovered": 1, 00:14:48.074 
"num_base_bdevs_operational": 2, 00:14:48.074 "base_bdevs_list": [ 00:14:48.074 { 00:14:48.074 "name": "BaseBdev1", 00:14:48.074 "uuid": "31f054d5-0967-4740-9fe0-472e1cb94acb", 00:14:48.074 "is_configured": true, 00:14:48.074 "data_offset": 0, 00:14:48.074 "data_size": 65536 00:14:48.074 }, 00:14:48.074 { 00:14:48.074 "name": "BaseBdev2", 00:14:48.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.074 "is_configured": false, 00:14:48.074 "data_offset": 0, 00:14:48.074 "data_size": 0 00:14:48.074 } 00:14:48.074 ] 00:14:48.074 }' 00:14:48.074 11:56:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.074 11:56:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.639 11:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:48.898 [2024-07-21 11:56:47.723313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:48.898 [2024-07-21 11:56:47.723816] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:14:48.898 [2024-07-21 11:56:47.724073] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:48.898 [2024-07-21 11:56:47.724571] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:48.898 [2024-07-21 11:56:47.725553] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:14:48.898 [2024-07-21 11:56:47.725795] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:14:48.898 [2024-07-21 11:56:47.726467] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.898 BaseBdev2 00:14:48.898 11:56:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:48.898 11:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:14:48.898 11:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:48.898 11:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:14:48.898 11:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:48.898 11:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:48.898 11:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.156 11:56:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.413 [ 00:14:49.413 { 00:14:49.413 "name": "BaseBdev2", 00:14:49.413 "aliases": [ 00:14:49.413 "cefaf859-0fb0-4f36-a3df-4ded61b15bf0" 00:14:49.414 ], 00:14:49.414 "product_name": "Malloc disk", 00:14:49.414 "block_size": 512, 00:14:49.414 "num_blocks": 65536, 00:14:49.414 "uuid": "cefaf859-0fb0-4f36-a3df-4ded61b15bf0", 00:14:49.414 "assigned_rate_limits": { 00:14:49.414 "rw_ios_per_sec": 0, 00:14:49.414 "rw_mbytes_per_sec": 0, 00:14:49.414 "r_mbytes_per_sec": 0, 00:14:49.414 "w_mbytes_per_sec": 0 00:14:49.414 }, 00:14:49.414 "claimed": true, 00:14:49.414 "claim_type": "exclusive_write", 00:14:49.414 "zoned": 
false, 00:14:49.414 "supported_io_types": { 00:14:49.414 "read": true, 00:14:49.414 "write": true, 00:14:49.414 "unmap": true, 00:14:49.414 "write_zeroes": true, 00:14:49.414 "flush": true, 00:14:49.414 "reset": true, 00:14:49.414 "compare": false, 00:14:49.414 "compare_and_write": false, 00:14:49.414 "abort": true, 00:14:49.414 "nvme_admin": false, 00:14:49.414 "nvme_io": false 00:14:49.414 }, 00:14:49.414 "memory_domains": [ 00:14:49.414 { 00:14:49.414 "dma_device_id": "system", 00:14:49.414 "dma_device_type": 1 00:14:49.414 }, 00:14:49.414 { 00:14:49.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.414 "dma_device_type": 2 00:14:49.414 } 00:14:49.414 ], 00:14:49.414 "driver_specific": {} 00:14:49.414 } 00:14:49.414 ] 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.414 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.672 11:56:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.672 "name": "Existed_Raid", 00:14:49.672 "uuid": "3a050d2f-8ee6-43de-894d-1c87f6bae107", 00:14:49.672 "strip_size_kb": 64, 00:14:49.672 "state": "online", 00:14:49.672 "raid_level": "concat", 00:14:49.672 "superblock": false, 00:14:49.672 "num_base_bdevs": 2, 00:14:49.672 "num_base_bdevs_discovered": 2, 00:14:49.672 "num_base_bdevs_operational": 2, 00:14:49.672 "base_bdevs_list": [ 00:14:49.672 { 00:14:49.672 "name": "BaseBdev1", 00:14:49.672 "uuid": "31f054d5-0967-4740-9fe0-472e1cb94acb", 00:14:49.672 "is_configured": true, 00:14:49.672 "data_offset": 0, 00:14:49.672 "data_size": 65536 00:14:49.672 }, 00:14:49.672 { 00:14:49.672 "name": "BaseBdev2", 00:14:49.672 "uuid": "cefaf859-0fb0-4f36-a3df-4ded61b15bf0", 00:14:49.672 "is_configured": true, 00:14:49.672 "data_offset": 0, 00:14:49.672 "data_size": 65536 00:14:49.672 } 00:14:49.672 ] 00:14:49.672 }' 00:14:49.672 11:56:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.672 11:56:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.237 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:50.237 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:50.237 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:50.237 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:50.237 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:50.237 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:50.237 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:50.237 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:50.495 [2024-07-21 11:56:49.291536] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:50.495 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:50.495 "name": "Existed_Raid", 00:14:50.495 "aliases": [ 00:14:50.495 "3a050d2f-8ee6-43de-894d-1c87f6bae107" 00:14:50.495 ], 00:14:50.495 "product_name": "Raid Volume", 00:14:50.495 "block_size": 512, 00:14:50.495 "num_blocks": 131072, 00:14:50.495 "uuid": "3a050d2f-8ee6-43de-894d-1c87f6bae107", 00:14:50.495 "assigned_rate_limits": { 00:14:50.495 "rw_ios_per_sec": 0, 00:14:50.495 "rw_mbytes_per_sec": 0, 00:14:50.495 "r_mbytes_per_sec": 0, 00:14:50.495 "w_mbytes_per_sec": 0 00:14:50.495 }, 00:14:50.495 "claimed": false, 00:14:50.495 "zoned": false, 00:14:50.495 "supported_io_types": { 00:14:50.495 "read": true, 00:14:50.495 "write": true, 00:14:50.495 "unmap": true, 00:14:50.495 "write_zeroes": true, 00:14:50.495 "flush": true, 00:14:50.495 "reset": true, 00:14:50.495 "compare": false, 00:14:50.495 "compare_and_write": false, 00:14:50.495 "abort": false, 00:14:50.495 "nvme_admin": false, 00:14:50.495 "nvme_io": false 00:14:50.495 }, 00:14:50.495 "memory_domains": [ 00:14:50.495 { 00:14:50.495 "dma_device_id": "system", 00:14:50.495 "dma_device_type": 1 00:14:50.495 }, 00:14:50.495 { 00:14:50.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.495 "dma_device_type": 2 00:14:50.495 }, 00:14:50.495 { 00:14:50.495 "dma_device_id": "system", 00:14:50.495 "dma_device_type": 1 00:14:50.495 }, 00:14:50.495 { 00:14:50.495 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.495 "dma_device_type": 2 00:14:50.495 } 00:14:50.495 ], 00:14:50.495 "driver_specific": { 00:14:50.495 "raid": { 00:14:50.495 "uuid": "3a050d2f-8ee6-43de-894d-1c87f6bae107", 00:14:50.495 "strip_size_kb": 64, 00:14:50.495 "state": "online", 00:14:50.495 "raid_level": "concat", 00:14:50.495 "superblock": false, 00:14:50.495 "num_base_bdevs": 2, 00:14:50.495 "num_base_bdevs_discovered": 2, 00:14:50.495 "num_base_bdevs_operational": 2, 00:14:50.495 "base_bdevs_list": [ 00:14:50.495 { 00:14:50.495 "name": "BaseBdev1", 00:14:50.495 "uuid": "31f054d5-0967-4740-9fe0-472e1cb94acb", 00:14:50.495 "is_configured": true, 00:14:50.495 "data_offset": 0, 00:14:50.495 "data_size": 65536 00:14:50.495 }, 00:14:50.495 { 00:14:50.495 "name": "BaseBdev2", 00:14:50.495 "uuid": "cefaf859-0fb0-4f36-a3df-4ded61b15bf0", 00:14:50.495 "is_configured": 
true, 00:14:50.495 "data_offset": 0, 00:14:50.495 "data_size": 65536 00:14:50.495 } 00:14:50.495 ] 00:14:50.495 } 00:14:50.495 } 00:14:50.495 }' 00:14:50.495 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:50.752 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:50.752 BaseBdev2' 00:14:50.752 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:50.752 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:50.752 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:50.752 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:50.752 "name": "BaseBdev1", 00:14:50.752 "aliases": [ 00:14:50.752 "31f054d5-0967-4740-9fe0-472e1cb94acb" 00:14:50.752 ], 00:14:50.752 "product_name": "Malloc disk", 00:14:50.752 "block_size": 512, 00:14:50.752 "num_blocks": 65536, 00:14:50.752 "uuid": "31f054d5-0967-4740-9fe0-472e1cb94acb", 00:14:50.752 "assigned_rate_limits": { 00:14:50.752 "rw_ios_per_sec": 0, 00:14:50.752 "rw_mbytes_per_sec": 0, 00:14:50.752 "r_mbytes_per_sec": 0, 00:14:50.752 "w_mbytes_per_sec": 0 00:14:50.752 }, 00:14:50.752 "claimed": true, 00:14:50.752 "claim_type": "exclusive_write", 00:14:50.752 "zoned": false, 00:14:50.752 "supported_io_types": { 00:14:50.752 "read": true, 00:14:50.752 "write": true, 00:14:50.752 "unmap": true, 00:14:50.752 "write_zeroes": true, 00:14:50.752 "flush": true, 00:14:50.752 "reset": true, 00:14:50.752 "compare": false, 00:14:50.752 "compare_and_write": false, 00:14:50.752 "abort": true, 00:14:50.752 "nvme_admin": false, 00:14:50.752 "nvme_io": false 00:14:50.752 }, 00:14:50.752 "memory_domains": [ 00:14:50.752 { 00:14:50.752 "dma_device_id": "system", 00:14:50.752 "dma_device_type": 1 00:14:50.752 }, 00:14:50.752 { 00:14:50.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:50.752 "dma_device_type": 2 00:14:50.752 } 00:14:50.752 ], 00:14:50.752 "driver_specific": {} 00:14:50.752 }' 00:14:50.752 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.023 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.023 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.023 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.023 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.023 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.023 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.023 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.281 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.281 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.281 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.281 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:51.281 11:56:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:51.281 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:51.281 11:56:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:51.538 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:51.538 "name": "BaseBdev2", 00:14:51.538 "aliases": [ 00:14:51.538 "cefaf859-0fb0-4f36-a3df-4ded61b15bf0" 00:14:51.538 ], 00:14:51.538 "product_name": "Malloc disk", 00:14:51.538 "block_size": 512, 00:14:51.538 "num_blocks": 65536, 00:14:51.538 "uuid": "cefaf859-0fb0-4f36-a3df-4ded61b15bf0", 00:14:51.538 "assigned_rate_limits": { 00:14:51.538 "rw_ios_per_sec": 0, 00:14:51.538 "rw_mbytes_per_sec": 0, 00:14:51.538 "r_mbytes_per_sec": 0, 00:14:51.538 "w_mbytes_per_sec": 0 00:14:51.538 }, 00:14:51.538 "claimed": true, 00:14:51.538 "claim_type": "exclusive_write", 00:14:51.538 "zoned": false, 00:14:51.538 "supported_io_types": { 00:14:51.538 "read": true, 00:14:51.538 "write": true, 00:14:51.538 "unmap": true, 00:14:51.538 "write_zeroes": true, 00:14:51.538 "flush": true, 00:14:51.538 "reset": true, 00:14:51.538 "compare": false, 00:14:51.538 "compare_and_write": false, 00:14:51.538 "abort": true, 00:14:51.538 "nvme_admin": false, 00:14:51.538 "nvme_io": false 00:14:51.538 }, 00:14:51.538 "memory_domains": [ 00:14:51.538 { 00:14:51.538 "dma_device_id": "system", 00:14:51.538 "dma_device_type": 1 00:14:51.538 }, 00:14:51.538 { 00:14:51.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:51.538 "dma_device_type": 2 00:14:51.538 } 00:14:51.538 ], 00:14:51.538 "driver_specific": {} 00:14:51.538 }' 00:14:51.538 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.538 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:51.538 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:51.538 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.795 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:51.795 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:51.795 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.795 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:51.795 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:51.795 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:51.795 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:52.053 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:52.053 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:52.310 [2024-07-21 11:56:50.939354] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.310 [2024-07-21 11:56:50.939689] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.310 [2024-07-21 11:56:50.939926] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.310 11:56:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.569 11:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.569 "name": "Existed_Raid", 00:14:52.569 "uuid": "3a050d2f-8ee6-43de-894d-1c87f6bae107", 00:14:52.569 "strip_size_kb": 64, 00:14:52.569 "state": "offline", 00:14:52.569 "raid_level": "concat", 00:14:52.569 "superblock": false, 00:14:52.569 "num_base_bdevs": 2, 00:14:52.569 "num_base_bdevs_discovered": 1, 00:14:52.569 "num_base_bdevs_operational": 1, 00:14:52.569 "base_bdevs_list": [ 00:14:52.569 { 00:14:52.569 "name": null, 00:14:52.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.569 "is_configured": false, 00:14:52.569 "data_offset": 0, 00:14:52.569 "data_size": 65536 00:14:52.569 }, 00:14:52.569 { 00:14:52.569 "name": "BaseBdev2", 00:14:52.569 "uuid": "cefaf859-0fb0-4f36-a3df-4ded61b15bf0", 00:14:52.569 "is_configured": true, 00:14:52.569 "data_offset": 0, 00:14:52.569 "data_size": 65536 00:14:52.569 } 00:14:52.569 ] 00:14:52.569 }' 00:14:52.569 11:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.569 11:56:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.136 11:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:53.136 11:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:53.136 11:56:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:53.136 11:56:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.394 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:53.394 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:53.394 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:53.652 [2024-07-21 11:56:52.335348] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:53.652 [2024-07-21 11:56:52.335763] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:14:53.652 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:53.652 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:53.652 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:53.652 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 132664 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 132664 ']' 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 132664 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 132664 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 132664' 00:14:53.909 killing process with pid 132664 00:14:53.909 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 132664 00:14:53.909 [2024-07-21 11:56:52.671186] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.910 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 132664 00:14:53.910 [2024-07-21 11:56:52.671464] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.167 11:56:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:54.167 00:14:54.167 real 0m11.224s 00:14:54.167 user 0m20.665s 00:14:54.167 sys 0m1.353s 00:14:54.167 11:56:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.168 ************************************ 00:14:54.168 END TEST raid_state_function_test 00:14:54.168 ************************************ 00:14:54.168 11:56:52 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:54.168 11:56:52 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:14:54.168 11:56:52 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:54.168 11:56:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.168 ************************************ 00:14:54.168 START TEST raid_state_function_test_sb 00:14:54.168 ************************************ 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 2 true 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=133041 00:14:54.168 
11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 133041' 00:14:54.168 Process raid pid: 133041 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 133041 /var/tmp/spdk-raid.sock 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 133041 ']' 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:54.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.168 11:56:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.426 [2024-07-21 11:56:53.048887] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:54.426 [2024-07-21 11:56:53.049399] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.426 [2024-07-21 11:56:53.220893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.686 [2024-07-21 11:56:53.309487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.686 [2024-07-21 11:56:53.362720] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.251 11:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:55.251 11:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:14:55.251 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:55.509 [2024-07-21 11:56:54.269526] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.509 [2024-07-21 11:56:54.269928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.509 [2024-07-21 11:56:54.270058] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.509 [2024-07-21 11:56:54.270127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.509 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.769 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:55.769 "name": "Existed_Raid", 00:14:55.769 "uuid": "4eb92769-0e32-4c9c-a49a-c179b88585ad", 00:14:55.769 "strip_size_kb": 64, 00:14:55.769 "state": "configuring", 00:14:55.769 "raid_level": "concat", 00:14:55.769 "superblock": true, 00:14:55.769 "num_base_bdevs": 2, 00:14:55.769 "num_base_bdevs_discovered": 0, 00:14:55.769 "num_base_bdevs_operational": 2, 00:14:55.769 "base_bdevs_list": [ 00:14:55.769 { 00:14:55.769 "name": "BaseBdev1", 00:14:55.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.769 "is_configured": false, 00:14:55.769 "data_offset": 0, 00:14:55.769 "data_size": 0 00:14:55.769 }, 00:14:55.769 { 00:14:55.769 "name": "BaseBdev2", 00:14:55.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.769 "is_configured": false, 00:14:55.769 "data_offset": 0, 00:14:55.769 "data_size": 0 00:14:55.769 } 00:14:55.769 ] 00:14:55.769 }' 00:14:55.769 11:56:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:55.769 11:56:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.367 11:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:56.624 [2024-07-21 11:56:55.329693] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.624 [2024-07-21 11:56:55.329931] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:14:56.624 11:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:56.882 [2024-07-21 11:56:55.613715] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.882 [2024-07-21 11:56:55.614160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.882 [2024-07-21 11:56:55.614289] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.882 [2024-07-21 11:56:55.614366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.882 11:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:57.140 [2024-07-21 11:56:55.853223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:57.140 BaseBdev1 00:14:57.140 11:56:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:57.140 11:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:14:57.140 11:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:57.140 11:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:14:57.140 11:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:57.140 11:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:57.140 11:56:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:57.398 11:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:57.657 [ 00:14:57.657 { 00:14:57.657 "name": "BaseBdev1", 00:14:57.657 "aliases": [ 00:14:57.657 "bf938797-449e-4f55-8efb-3fbe26179283" 00:14:57.657 ], 00:14:57.657 "product_name": "Malloc disk", 00:14:57.657 "block_size": 512, 00:14:57.657 "num_blocks": 65536, 00:14:57.657 "uuid": "bf938797-449e-4f55-8efb-3fbe26179283", 00:14:57.657 "assigned_rate_limits": { 00:14:57.657 "rw_ios_per_sec": 0, 00:14:57.657 "rw_mbytes_per_sec": 0, 00:14:57.657 "r_mbytes_per_sec": 0, 00:14:57.657 "w_mbytes_per_sec": 0 00:14:57.657 }, 00:14:57.657 "claimed": true, 00:14:57.657 "claim_type": "exclusive_write", 00:14:57.657 "zoned": false, 00:14:57.657 "supported_io_types": { 00:14:57.657 "read": true, 00:14:57.657 "write": true, 00:14:57.657 "unmap": true, 00:14:57.657 "write_zeroes": true, 00:14:57.657 "flush": true, 00:14:57.657 "reset": true, 00:14:57.657 "compare": false, 00:14:57.657 "compare_and_write": false, 00:14:57.657 "abort": true, 00:14:57.657 "nvme_admin": false, 00:14:57.657 "nvme_io": false 00:14:57.657 }, 00:14:57.657 "memory_domains": [ 00:14:57.657 { 00:14:57.657 "dma_device_id": "system", 00:14:57.657 "dma_device_type": 1 00:14:57.657 }, 00:14:57.657 { 00:14:57.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.657 "dma_device_type": 2 00:14:57.657 } 00:14:57.657 ], 00:14:57.657 "driver_specific": {} 00:14:57.657 } 00:14:57.657 ] 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.657 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.916 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.916 "name": "Existed_Raid", 00:14:57.916 "uuid": "1fd8f85f-4fc2-4e5e-99ac-da8712fa2b7b", 00:14:57.916 "strip_size_kb": 64, 00:14:57.916 "state": "configuring", 00:14:57.916 "raid_level": "concat", 00:14:57.916 "superblock": true, 00:14:57.916 "num_base_bdevs": 2, 00:14:57.916 "num_base_bdevs_discovered": 1, 00:14:57.916 "num_base_bdevs_operational": 2, 00:14:57.916 "base_bdevs_list": [ 00:14:57.916 { 00:14:57.916 "name": "BaseBdev1", 00:14:57.916 "uuid": "bf938797-449e-4f55-8efb-3fbe26179283", 00:14:57.916 "is_configured": true, 00:14:57.916 "data_offset": 2048, 00:14:57.916 "data_size": 63488 00:14:57.916 }, 00:14:57.916 { 00:14:57.916 "name": "BaseBdev2", 00:14:57.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.916 "is_configured": false, 00:14:57.916 "data_offset": 0, 00:14:57.916 "data_size": 0 00:14:57.916 } 00:14:57.916 ] 00:14:57.916 }' 00:14:57.916 11:56:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.916 11:56:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.482 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:58.740 [2024-07-21 11:56:57.449706] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:58.740 [2024-07-21 11:56:57.450125] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:58.740 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:58.998 [2024-07-21 11:56:57.721797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.998 [2024-07-21 11:56:57.724219] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:58.998 [2024-07-21 11:56:57.724455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:58.998 
11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.998 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.257 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.257 "name": "Existed_Raid", 00:14:59.257 "uuid": "74dd532e-ca0d-428f-989f-100a69c397bc", 00:14:59.257 "strip_size_kb": 64, 00:14:59.257 "state": "configuring", 00:14:59.257 "raid_level": "concat", 00:14:59.257 "superblock": true, 00:14:59.257 "num_base_bdevs": 2, 00:14:59.257 "num_base_bdevs_discovered": 1, 00:14:59.257 "num_base_bdevs_operational": 2, 00:14:59.257 "base_bdevs_list": [ 00:14:59.257 { 00:14:59.257 "name": "BaseBdev1", 00:14:59.257 "uuid": "bf938797-449e-4f55-8efb-3fbe26179283", 00:14:59.257 "is_configured": true, 00:14:59.257 "data_offset": 2048, 00:14:59.257 "data_size": 63488 00:14:59.257 }, 00:14:59.257 { 00:14:59.257 "name": "BaseBdev2", 00:14:59.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.257 "is_configured": false, 00:14:59.257 "data_offset": 0, 00:14:59.257 "data_size": 0 00:14:59.257 } 00:14:59.257 ] 00:14:59.257 }' 00:14:59.257 11:56:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.257 11:56:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.824 11:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:00.082 [2024-07-21 11:56:58.882734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.083 [2024-07-21 11:56:58.883387] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:00.083 [2024-07-21 11:56:58.883583] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:00.083 [2024-07-21 11:56:58.883842] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:00.083 [2024-07-21 11:56:58.884402] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:00.083 BaseBdev2 00:15:00.083 [2024-07-21 11:56:58.884592] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:00.083 [2024-07-21 11:56:58.884925] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:00.083 11:56:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:00.083 11:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:00.083 11:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:00.083 11:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:00.083 11:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:00.083 11:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:00.083 11:56:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:00.341 11:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:00.599 [ 00:15:00.599 { 00:15:00.599 "name": "BaseBdev2", 00:15:00.599 "aliases": [ 00:15:00.599 "373c8e82-938f-42b5-8eb1-d3ac52dbbcfc" 00:15:00.599 ], 00:15:00.599 "product_name": "Malloc disk", 00:15:00.599 "block_size": 512, 00:15:00.599 "num_blocks": 65536, 00:15:00.599 "uuid": "373c8e82-938f-42b5-8eb1-d3ac52dbbcfc", 00:15:00.599 "assigned_rate_limits": { 00:15:00.599 "rw_ios_per_sec": 0, 00:15:00.599 "rw_mbytes_per_sec": 0, 00:15:00.599 "r_mbytes_per_sec": 0, 00:15:00.599 "w_mbytes_per_sec": 0 00:15:00.599 }, 00:15:00.599 "claimed": true, 00:15:00.599 "claim_type": "exclusive_write", 00:15:00.599 "zoned": false, 00:15:00.599 "supported_io_types": { 00:15:00.599 "read": true, 00:15:00.599 "write": true, 00:15:00.599 "unmap": true, 00:15:00.599 "write_zeroes": true, 00:15:00.599 "flush": true, 00:15:00.599 "reset": true, 00:15:00.599 "compare": false, 00:15:00.599 "compare_and_write": false, 00:15:00.599 "abort": true, 00:15:00.599 "nvme_admin": false, 00:15:00.599 "nvme_io": false 00:15:00.599 }, 00:15:00.599 "memory_domains": [ 00:15:00.599 { 00:15:00.599 "dma_device_id": "system", 00:15:00.599 "dma_device_type": 1 00:15:00.599 }, 00:15:00.599 { 00:15:00.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.599 "dma_device_type": 2 00:15:00.599 } 00:15:00.599 ], 00:15:00.599 "driver_specific": {} 00:15:00.599 } 00:15:00.599 ] 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:00.599 11:56:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.599 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.858 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.858 "name": "Existed_Raid", 00:15:00.858 "uuid": "74dd532e-ca0d-428f-989f-100a69c397bc", 00:15:00.858 "strip_size_kb": 64, 00:15:00.858 "state": "online", 00:15:00.858 "raid_level": "concat", 00:15:00.858 "superblock": true, 00:15:00.858 "num_base_bdevs": 2, 00:15:00.858 "num_base_bdevs_discovered": 2, 00:15:00.858 "num_base_bdevs_operational": 2, 00:15:00.858 "base_bdevs_list": [ 00:15:00.858 { 00:15:00.858 "name": "BaseBdev1", 00:15:00.858 "uuid": "bf938797-449e-4f55-8efb-3fbe26179283", 00:15:00.858 "is_configured": true, 00:15:00.858 "data_offset": 2048, 00:15:00.858 "data_size": 63488 00:15:00.858 }, 00:15:00.858 { 00:15:00.858 "name": "BaseBdev2", 00:15:00.858 "uuid": "373c8e82-938f-42b5-8eb1-d3ac52dbbcfc", 00:15:00.858 "is_configured": true, 00:15:00.858 "data_offset": 2048, 00:15:00.858 "data_size": 63488 00:15:00.858 } 00:15:00.858 ] 00:15:00.858 }' 00:15:00.858 11:56:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.858 11:56:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.792 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:01.792 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:01.793 [2024-07-21 11:57:00.579710] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:01.793 "name": "Existed_Raid", 00:15:01.793 "aliases": [ 00:15:01.793 "74dd532e-ca0d-428f-989f-100a69c397bc" 00:15:01.793 ], 00:15:01.793 "product_name": "Raid Volume", 00:15:01.793 "block_size": 512, 00:15:01.793 "num_blocks": 126976, 00:15:01.793 "uuid": "74dd532e-ca0d-428f-989f-100a69c397bc", 00:15:01.793 "assigned_rate_limits": { 00:15:01.793 "rw_ios_per_sec": 0, 00:15:01.793 
"rw_mbytes_per_sec": 0, 00:15:01.793 "r_mbytes_per_sec": 0, 00:15:01.793 "w_mbytes_per_sec": 0 00:15:01.793 }, 00:15:01.793 "claimed": false, 00:15:01.793 "zoned": false, 00:15:01.793 "supported_io_types": { 00:15:01.793 "read": true, 00:15:01.793 "write": true, 00:15:01.793 "unmap": true, 00:15:01.793 "write_zeroes": true, 00:15:01.793 "flush": true, 00:15:01.793 "reset": true, 00:15:01.793 "compare": false, 00:15:01.793 "compare_and_write": false, 00:15:01.793 "abort": false, 00:15:01.793 "nvme_admin": false, 00:15:01.793 "nvme_io": false 00:15:01.793 }, 00:15:01.793 "memory_domains": [ 00:15:01.793 { 00:15:01.793 "dma_device_id": "system", 00:15:01.793 "dma_device_type": 1 00:15:01.793 }, 00:15:01.793 { 00:15:01.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.793 "dma_device_type": 2 00:15:01.793 }, 00:15:01.793 { 00:15:01.793 "dma_device_id": "system", 00:15:01.793 "dma_device_type": 1 00:15:01.793 }, 00:15:01.793 { 00:15:01.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.793 "dma_device_type": 2 00:15:01.793 } 00:15:01.793 ], 00:15:01.793 "driver_specific": { 00:15:01.793 "raid": { 00:15:01.793 "uuid": "74dd532e-ca0d-428f-989f-100a69c397bc", 00:15:01.793 "strip_size_kb": 64, 00:15:01.793 "state": "online", 00:15:01.793 "raid_level": "concat", 00:15:01.793 "superblock": true, 00:15:01.793 "num_base_bdevs": 2, 00:15:01.793 "num_base_bdevs_discovered": 2, 00:15:01.793 "num_base_bdevs_operational": 2, 00:15:01.793 "base_bdevs_list": [ 00:15:01.793 { 00:15:01.793 "name": "BaseBdev1", 00:15:01.793 "uuid": "bf938797-449e-4f55-8efb-3fbe26179283", 00:15:01.793 "is_configured": true, 00:15:01.793 "data_offset": 2048, 00:15:01.793 "data_size": 63488 00:15:01.793 }, 00:15:01.793 { 00:15:01.793 "name": "BaseBdev2", 00:15:01.793 "uuid": "373c8e82-938f-42b5-8eb1-d3ac52dbbcfc", 00:15:01.793 "is_configured": true, 00:15:01.793 "data_offset": 2048, 00:15:01.793 "data_size": 63488 00:15:01.793 } 00:15:01.793 ] 00:15:01.793 } 00:15:01.793 } 00:15:01.793 }' 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:01.793 BaseBdev2' 00:15:01.793 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.051 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:02.051 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.051 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.051 "name": "BaseBdev1", 00:15:02.051 "aliases": [ 00:15:02.051 "bf938797-449e-4f55-8efb-3fbe26179283" 00:15:02.051 ], 00:15:02.051 "product_name": "Malloc disk", 00:15:02.051 "block_size": 512, 00:15:02.051 "num_blocks": 65536, 00:15:02.051 "uuid": "bf938797-449e-4f55-8efb-3fbe26179283", 00:15:02.051 "assigned_rate_limits": { 00:15:02.051 "rw_ios_per_sec": 0, 00:15:02.051 "rw_mbytes_per_sec": 0, 00:15:02.051 "r_mbytes_per_sec": 0, 00:15:02.051 "w_mbytes_per_sec": 0 00:15:02.051 }, 00:15:02.051 "claimed": true, 00:15:02.051 "claim_type": "exclusive_write", 00:15:02.051 "zoned": false, 00:15:02.051 "supported_io_types": { 00:15:02.051 "read": true, 00:15:02.051 "write": true, 00:15:02.051 "unmap": true, 
00:15:02.051 "write_zeroes": true, 00:15:02.051 "flush": true, 00:15:02.051 "reset": true, 00:15:02.051 "compare": false, 00:15:02.051 "compare_and_write": false, 00:15:02.051 "abort": true, 00:15:02.051 "nvme_admin": false, 00:15:02.051 "nvme_io": false 00:15:02.051 }, 00:15:02.051 "memory_domains": [ 00:15:02.051 { 00:15:02.051 "dma_device_id": "system", 00:15:02.051 "dma_device_type": 1 00:15:02.051 }, 00:15:02.051 { 00:15:02.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.051 "dma_device_type": 2 00:15:02.051 } 00:15:02.051 ], 00:15:02.051 "driver_specific": {} 00:15:02.051 }' 00:15:02.051 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.309 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.309 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:02.309 11:57:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.309 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.309 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.309 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.309 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.309 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.567 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.567 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.567 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.567 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.567 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:02.567 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.826 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.826 "name": "BaseBdev2", 00:15:02.826 "aliases": [ 00:15:02.826 "373c8e82-938f-42b5-8eb1-d3ac52dbbcfc" 00:15:02.826 ], 00:15:02.826 "product_name": "Malloc disk", 00:15:02.826 "block_size": 512, 00:15:02.826 "num_blocks": 65536, 00:15:02.826 "uuid": "373c8e82-938f-42b5-8eb1-d3ac52dbbcfc", 00:15:02.826 "assigned_rate_limits": { 00:15:02.826 "rw_ios_per_sec": 0, 00:15:02.826 "rw_mbytes_per_sec": 0, 00:15:02.826 "r_mbytes_per_sec": 0, 00:15:02.826 "w_mbytes_per_sec": 0 00:15:02.826 }, 00:15:02.826 "claimed": true, 00:15:02.826 "claim_type": "exclusive_write", 00:15:02.826 "zoned": false, 00:15:02.826 "supported_io_types": { 00:15:02.826 "read": true, 00:15:02.826 "write": true, 00:15:02.826 "unmap": true, 00:15:02.826 "write_zeroes": true, 00:15:02.826 "flush": true, 00:15:02.826 "reset": true, 00:15:02.826 "compare": false, 00:15:02.826 "compare_and_write": false, 00:15:02.826 "abort": true, 00:15:02.826 "nvme_admin": false, 00:15:02.826 "nvme_io": false 00:15:02.826 }, 00:15:02.826 "memory_domains": [ 00:15:02.826 { 00:15:02.826 "dma_device_id": "system", 00:15:02.826 "dma_device_type": 1 00:15:02.826 }, 00:15:02.826 { 00:15:02.826 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:02.826 "dma_device_type": 2 00:15:02.826 } 00:15:02.826 ], 00:15:02.826 "driver_specific": {} 00:15:02.826 }' 00:15:02.826 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.826 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.826 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:02.826 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.084 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:03.084 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:03.084 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.084 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:03.084 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:03.084 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.084 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:03.343 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:03.343 11:57:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:03.611 [2024-07-21 11:57:02.219945] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.611 [2024-07-21 11:57:02.220328] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.611 [2024-07-21 11:57:02.220529] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.611 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.869 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:03.869 "name": "Existed_Raid", 00:15:03.869 "uuid": "74dd532e-ca0d-428f-989f-100a69c397bc", 00:15:03.869 "strip_size_kb": 64, 00:15:03.869 "state": "offline", 00:15:03.869 "raid_level": "concat", 00:15:03.869 "superblock": true, 00:15:03.869 "num_base_bdevs": 2, 00:15:03.869 "num_base_bdevs_discovered": 1, 00:15:03.869 "num_base_bdevs_operational": 1, 00:15:03.869 "base_bdevs_list": [ 00:15:03.869 { 00:15:03.869 "name": null, 00:15:03.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.869 "is_configured": false, 00:15:03.869 "data_offset": 2048, 00:15:03.869 "data_size": 63488 00:15:03.869 }, 00:15:03.869 { 00:15:03.869 "name": "BaseBdev2", 00:15:03.869 "uuid": "373c8e82-938f-42b5-8eb1-d3ac52dbbcfc", 00:15:03.869 "is_configured": true, 00:15:03.869 "data_offset": 2048, 00:15:03.869 "data_size": 63488 00:15:03.869 } 00:15:03.869 ] 00:15:03.869 }' 00:15:03.869 11:57:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:03.869 11:57:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.435 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:04.435 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:04.435 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.435 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:04.692 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:04.693 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:04.693 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:04.950 [2024-07-21 11:57:03.668218] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:04.950 [2024-07-21 11:57:03.669910] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:04.950 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:04.950 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:04.950 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.950 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 
-- # '[' -n '' ']' 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 133041 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 133041 ']' 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 133041 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133041 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133041' 00:15:05.207 killing process with pid 133041 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 133041 00:15:05.207 [2024-07-21 11:57:03.961216] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.207 11:57:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 133041 00:15:05.207 [2024-07-21 11:57:03.961480] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.465 11:57:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:05.465 00:15:05.465 real 0m11.239s 00:15:05.465 user 0m20.533s 00:15:05.465 sys 0m1.594s 00:15:05.465 11:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:05.465 ************************************ 00:15:05.465 END TEST raid_state_function_test_sb 00:15:05.465 ************************************ 00:15:05.465 11:57:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.465 11:57:04 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:15:05.465 11:57:04 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:05.465 11:57:04 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:05.465 11:57:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.465 ************************************ 00:15:05.465 START TEST raid_superblock_test 00:15:05.465 ************************************ 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 2 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=133411 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 133411 /var/tmp/spdk-raid.sock 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 133411 ']' 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:05.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:05.465 11:57:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.723 [2024-07-21 11:57:04.342029] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:15:05.723 [2024-07-21 11:57:04.342259] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133411 ] 00:15:05.723 [2024-07-21 11:57:04.502198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.981 [2024-07-21 11:57:04.597859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.981 [2024-07-21 11:57:04.654525] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.546 11:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:06.546 11:57:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:15:06.546 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:06.546 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:06.546 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:06.546 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:06.547 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:06.547 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:06.547 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:06.547 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:06.547 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:06.804 malloc1 00:15:06.804 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:07.062 [2024-07-21 11:57:05.843390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:07.062 [2024-07-21 11:57:05.843875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.062 [2024-07-21 11:57:05.843976] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:07.062 [2024-07-21 11:57:05.844243] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.062 [2024-07-21 11:57:05.847016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.062 [2024-07-21 11:57:05.847225] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:07.062 pt1 00:15:07.062 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:07.062 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:07.062 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:07.062 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:07.062 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:07.062 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:15:07.062 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:07.062 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:07.062 11:57:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:07.319 malloc2 00:15:07.319 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:07.577 [2024-07-21 11:57:06.334499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:07.577 [2024-07-21 11:57:06.334962] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.577 [2024-07-21 11:57:06.335093] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:07.577 [2024-07-21 11:57:06.335341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.577 [2024-07-21 11:57:06.337809] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.577 [2024-07-21 11:57:06.338008] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:07.577 pt2 00:15:07.577 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:07.577 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:07.577 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:15:07.850 [2024-07-21 11:57:06.566700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:07.850 [2024-07-21 11:57:06.569228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:07.850 [2024-07-21 11:57:06.569593] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:07.850 [2024-07-21 11:57:06.569725] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:07.850 [2024-07-21 11:57:06.569940] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:07.850 [2024-07-21 11:57:06.570406] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:07.850 [2024-07-21 11:57:06.570524] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:07.850 [2024-07-21 11:57:06.570891] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.850 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.130 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:08.130 "name": "raid_bdev1", 00:15:08.130 "uuid": "95bd6321-c260-476f-ad2d-1ba63e188c88", 00:15:08.130 "strip_size_kb": 64, 00:15:08.130 "state": "online", 00:15:08.130 "raid_level": "concat", 00:15:08.130 "superblock": true, 00:15:08.130 "num_base_bdevs": 2, 00:15:08.130 "num_base_bdevs_discovered": 2, 00:15:08.130 "num_base_bdevs_operational": 2, 00:15:08.130 "base_bdevs_list": [ 00:15:08.130 { 00:15:08.130 "name": "pt1", 00:15:08.131 "uuid": "e8ac5805-ba1b-556f-8360-6def399031de", 00:15:08.131 "is_configured": true, 00:15:08.131 "data_offset": 2048, 00:15:08.131 "data_size": 63488 00:15:08.131 }, 00:15:08.131 { 00:15:08.131 "name": "pt2", 00:15:08.131 "uuid": "a54fb879-1b58-58b6-a530-59945e1e14c9", 00:15:08.131 "is_configured": true, 00:15:08.131 "data_offset": 2048, 00:15:08.131 "data_size": 63488 00:15:08.131 } 00:15:08.131 ] 00:15:08.131 }' 00:15:08.131 11:57:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:08.131 11:57:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:08.696 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:08.696 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:08.696 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:08.696 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:08.696 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:08.696 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:08.696 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:08.696 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:08.955 [2024-07-21 11:57:07.755541] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.955 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:08.955 "name": "raid_bdev1", 00:15:08.955 "aliases": [ 00:15:08.955 "95bd6321-c260-476f-ad2d-1ba63e188c88" 00:15:08.955 ], 00:15:08.955 "product_name": "Raid Volume", 00:15:08.955 "block_size": 512, 00:15:08.955 "num_blocks": 126976, 00:15:08.955 "uuid": "95bd6321-c260-476f-ad2d-1ba63e188c88", 00:15:08.955 "assigned_rate_limits": { 00:15:08.955 "rw_ios_per_sec": 0, 00:15:08.955 "rw_mbytes_per_sec": 0, 00:15:08.955 "r_mbytes_per_sec": 0, 00:15:08.955 "w_mbytes_per_sec": 0 00:15:08.955 }, 
00:15:08.955 "claimed": false, 00:15:08.955 "zoned": false, 00:15:08.955 "supported_io_types": { 00:15:08.955 "read": true, 00:15:08.955 "write": true, 00:15:08.955 "unmap": true, 00:15:08.955 "write_zeroes": true, 00:15:08.955 "flush": true, 00:15:08.955 "reset": true, 00:15:08.955 "compare": false, 00:15:08.955 "compare_and_write": false, 00:15:08.955 "abort": false, 00:15:08.955 "nvme_admin": false, 00:15:08.955 "nvme_io": false 00:15:08.955 }, 00:15:08.955 "memory_domains": [ 00:15:08.955 { 00:15:08.955 "dma_device_id": "system", 00:15:08.955 "dma_device_type": 1 00:15:08.955 }, 00:15:08.955 { 00:15:08.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.955 "dma_device_type": 2 00:15:08.955 }, 00:15:08.955 { 00:15:08.955 "dma_device_id": "system", 00:15:08.955 "dma_device_type": 1 00:15:08.955 }, 00:15:08.955 { 00:15:08.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.955 "dma_device_type": 2 00:15:08.955 } 00:15:08.955 ], 00:15:08.955 "driver_specific": { 00:15:08.955 "raid": { 00:15:08.955 "uuid": "95bd6321-c260-476f-ad2d-1ba63e188c88", 00:15:08.955 "strip_size_kb": 64, 00:15:08.955 "state": "online", 00:15:08.955 "raid_level": "concat", 00:15:08.955 "superblock": true, 00:15:08.955 "num_base_bdevs": 2, 00:15:08.955 "num_base_bdevs_discovered": 2, 00:15:08.955 "num_base_bdevs_operational": 2, 00:15:08.955 "base_bdevs_list": [ 00:15:08.955 { 00:15:08.955 "name": "pt1", 00:15:08.955 "uuid": "e8ac5805-ba1b-556f-8360-6def399031de", 00:15:08.955 "is_configured": true, 00:15:08.955 "data_offset": 2048, 00:15:08.955 "data_size": 63488 00:15:08.955 }, 00:15:08.955 { 00:15:08.955 "name": "pt2", 00:15:08.955 "uuid": "a54fb879-1b58-58b6-a530-59945e1e14c9", 00:15:08.955 "is_configured": true, 00:15:08.955 "data_offset": 2048, 00:15:08.955 "data_size": 63488 00:15:08.955 } 00:15:08.955 ] 00:15:08.955 } 00:15:08.955 } 00:15:08.955 }' 00:15:08.956 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:09.215 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:09.215 pt2' 00:15:09.215 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:09.215 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:09.215 11:57:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:09.215 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:09.215 "name": "pt1", 00:15:09.215 "aliases": [ 00:15:09.215 "e8ac5805-ba1b-556f-8360-6def399031de" 00:15:09.215 ], 00:15:09.215 "product_name": "passthru", 00:15:09.215 "block_size": 512, 00:15:09.215 "num_blocks": 65536, 00:15:09.215 "uuid": "e8ac5805-ba1b-556f-8360-6def399031de", 00:15:09.215 "assigned_rate_limits": { 00:15:09.215 "rw_ios_per_sec": 0, 00:15:09.215 "rw_mbytes_per_sec": 0, 00:15:09.215 "r_mbytes_per_sec": 0, 00:15:09.215 "w_mbytes_per_sec": 0 00:15:09.215 }, 00:15:09.215 "claimed": true, 00:15:09.215 "claim_type": "exclusive_write", 00:15:09.215 "zoned": false, 00:15:09.215 "supported_io_types": { 00:15:09.215 "read": true, 00:15:09.215 "write": true, 00:15:09.215 "unmap": true, 00:15:09.215 "write_zeroes": true, 00:15:09.215 "flush": true, 00:15:09.215 "reset": true, 00:15:09.215 "compare": false, 00:15:09.215 "compare_and_write": false, 00:15:09.215 "abort": true, 00:15:09.215 
"nvme_admin": false, 00:15:09.215 "nvme_io": false 00:15:09.215 }, 00:15:09.215 "memory_domains": [ 00:15:09.215 { 00:15:09.215 "dma_device_id": "system", 00:15:09.215 "dma_device_type": 1 00:15:09.215 }, 00:15:09.215 { 00:15:09.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.215 "dma_device_type": 2 00:15:09.215 } 00:15:09.215 ], 00:15:09.215 "driver_specific": { 00:15:09.215 "passthru": { 00:15:09.215 "name": "pt1", 00:15:09.215 "base_bdev_name": "malloc1" 00:15:09.215 } 00:15:09.215 } 00:15:09.215 }' 00:15:09.473 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.473 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.473 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:09.473 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.473 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.473 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:09.473 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.473 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.731 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:09.731 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.731 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.731 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:09.731 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:09.731 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:09.731 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:09.990 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:09.990 "name": "pt2", 00:15:09.990 "aliases": [ 00:15:09.990 "a54fb879-1b58-58b6-a530-59945e1e14c9" 00:15:09.990 ], 00:15:09.990 "product_name": "passthru", 00:15:09.990 "block_size": 512, 00:15:09.990 "num_blocks": 65536, 00:15:09.990 "uuid": "a54fb879-1b58-58b6-a530-59945e1e14c9", 00:15:09.990 "assigned_rate_limits": { 00:15:09.990 "rw_ios_per_sec": 0, 00:15:09.990 "rw_mbytes_per_sec": 0, 00:15:09.990 "r_mbytes_per_sec": 0, 00:15:09.990 "w_mbytes_per_sec": 0 00:15:09.990 }, 00:15:09.990 "claimed": true, 00:15:09.990 "claim_type": "exclusive_write", 00:15:09.990 "zoned": false, 00:15:09.990 "supported_io_types": { 00:15:09.990 "read": true, 00:15:09.990 "write": true, 00:15:09.990 "unmap": true, 00:15:09.990 "write_zeroes": true, 00:15:09.990 "flush": true, 00:15:09.990 "reset": true, 00:15:09.990 "compare": false, 00:15:09.990 "compare_and_write": false, 00:15:09.990 "abort": true, 00:15:09.990 "nvme_admin": false, 00:15:09.990 "nvme_io": false 00:15:09.990 }, 00:15:09.990 "memory_domains": [ 00:15:09.990 { 00:15:09.990 "dma_device_id": "system", 00:15:09.990 "dma_device_type": 1 00:15:09.990 }, 00:15:09.990 { 00:15:09.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.990 "dma_device_type": 2 00:15:09.990 } 00:15:09.990 ], 00:15:09.990 "driver_specific": { 00:15:09.990 "passthru": { 00:15:09.990 "name": "pt2", 00:15:09.990 
"base_bdev_name": "malloc2" 00:15:09.990 } 00:15:09.990 } 00:15:09.990 }' 00:15:09.990 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.990 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.990 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:09.990 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.990 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:10.248 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:10.248 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.248 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:10.248 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:10.248 11:57:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:10.248 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:10.248 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:10.248 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:10.248 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:10.506 [2024-07-21 11:57:09.260407] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:10.506 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=95bd6321-c260-476f-ad2d-1ba63e188c88 00:15:10.506 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 95bd6321-c260-476f-ad2d-1ba63e188c88 ']' 00:15:10.506 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:10.764 [2024-07-21 11:57:09.536265] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:10.764 [2024-07-21 11:57:09.536466] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.764 [2024-07-21 11:57:09.536723] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.764 [2024-07-21 11:57:09.536905] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.764 [2024-07-21 11:57:09.537026] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:10.764 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.764 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:11.022 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:11.022 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:11.022 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.022 11:57:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:15:11.280 11:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:11.280 11:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:11.539 11:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:11.539 11:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:11.797 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:15:12.055 [2024-07-21 11:57:10.776527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:12.055 [2024-07-21 11:57:10.778825] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:12.055 [2024-07-21 11:57:10.778921] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:12.055 [2024-07-21 11:57:10.779006] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:12.055 [2024-07-21 11:57:10.779065] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.055 [2024-07-21 11:57:10.779077] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:12.055 request: 00:15:12.055 { 00:15:12.055 "name": "raid_bdev1", 00:15:12.055 "raid_level": "concat", 
00:15:12.055 "base_bdevs": [ 00:15:12.055 "malloc1", 00:15:12.055 "malloc2" 00:15:12.055 ], 00:15:12.055 "superblock": false, 00:15:12.055 "strip_size_kb": 64, 00:15:12.055 "method": "bdev_raid_create", 00:15:12.055 "req_id": 1 00:15:12.055 } 00:15:12.055 Got JSON-RPC error response 00:15:12.055 response: 00:15:12.055 { 00:15:12.055 "code": -17, 00:15:12.055 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:12.055 } 00:15:12.055 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:12.055 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:12.055 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:12.055 11:57:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:12.055 11:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.055 11:57:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:12.313 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:12.313 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:12.313 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:12.570 [2024-07-21 11:57:11.280554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:12.570 [2024-07-21 11:57:11.280700] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.570 [2024-07-21 11:57:11.280741] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:12.570 [2024-07-21 11:57:11.280770] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.570 [2024-07-21 11:57:11.283458] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.570 [2024-07-21 11:57:11.283529] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:12.570 [2024-07-21 11:57:11.283621] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:12.570 [2024-07-21 11:57:11.283698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:12.570 pt1 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.570 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.828 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.828 "name": "raid_bdev1", 00:15:12.828 "uuid": "95bd6321-c260-476f-ad2d-1ba63e188c88", 00:15:12.828 "strip_size_kb": 64, 00:15:12.828 "state": "configuring", 00:15:12.828 "raid_level": "concat", 00:15:12.828 "superblock": true, 00:15:12.828 "num_base_bdevs": 2, 00:15:12.828 "num_base_bdevs_discovered": 1, 00:15:12.828 "num_base_bdevs_operational": 2, 00:15:12.828 "base_bdevs_list": [ 00:15:12.828 { 00:15:12.828 "name": "pt1", 00:15:12.828 "uuid": "e8ac5805-ba1b-556f-8360-6def399031de", 00:15:12.828 "is_configured": true, 00:15:12.828 "data_offset": 2048, 00:15:12.828 "data_size": 63488 00:15:12.828 }, 00:15:12.828 { 00:15:12.828 "name": null, 00:15:12.828 "uuid": "a54fb879-1b58-58b6-a530-59945e1e14c9", 00:15:12.828 "is_configured": false, 00:15:12.828 "data_offset": 2048, 00:15:12.828 "data_size": 63488 00:15:12.828 } 00:15:12.828 ] 00:15:12.828 }' 00:15:12.828 11:57:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.828 11:57:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.394 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:15:13.394 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:13.394 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:13.394 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.652 [2024-07-21 11:57:12.452813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:13.652 [2024-07-21 11:57:12.452980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.652 [2024-07-21 11:57:12.453019] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:13.652 [2024-07-21 11:57:12.453047] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.652 [2024-07-21 11:57:12.453608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.652 [2024-07-21 11:57:12.453667] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.652 [2024-07-21 11:57:12.453779] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:13.652 [2024-07-21 11:57:12.453824] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.652 [2024-07-21 11:57:12.453961] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:15:13.652 [2024-07-21 11:57:12.453975] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:13.652 [2024-07-21 11:57:12.454051] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:15:13.652 [2024-07-21 11:57:12.454406] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:15:13.652 [2024-07-21 11:57:12.454432] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:15:13.652 [2024-07-21 11:57:12.454545] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.652 pt2 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.652 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.910 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.910 "name": "raid_bdev1", 00:15:13.910 "uuid": "95bd6321-c260-476f-ad2d-1ba63e188c88", 00:15:13.910 "strip_size_kb": 64, 00:15:13.910 "state": "online", 00:15:13.910 "raid_level": "concat", 00:15:13.910 "superblock": true, 00:15:13.910 "num_base_bdevs": 2, 00:15:13.910 "num_base_bdevs_discovered": 2, 00:15:13.910 "num_base_bdevs_operational": 2, 00:15:13.910 "base_bdevs_list": [ 00:15:13.910 { 00:15:13.910 "name": "pt1", 00:15:13.910 "uuid": "e8ac5805-ba1b-556f-8360-6def399031de", 00:15:13.910 "is_configured": true, 00:15:13.910 "data_offset": 2048, 00:15:13.910 "data_size": 63488 00:15:13.910 }, 00:15:13.910 { 00:15:13.910 "name": "pt2", 00:15:13.910 "uuid": "a54fb879-1b58-58b6-a530-59945e1e14c9", 00:15:13.910 "is_configured": true, 00:15:13.910 "data_offset": 2048, 00:15:13.910 "data_size": 63488 00:15:13.910 } 00:15:13.910 ] 00:15:13.910 }' 00:15:13.910 11:57:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.910 11:57:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:14.844 [2024-07-21 11:57:13.621769] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:14.844 "name": "raid_bdev1", 00:15:14.844 "aliases": [ 00:15:14.844 "95bd6321-c260-476f-ad2d-1ba63e188c88" 00:15:14.844 ], 00:15:14.844 "product_name": "Raid Volume", 00:15:14.844 "block_size": 512, 00:15:14.844 "num_blocks": 126976, 00:15:14.844 "uuid": "95bd6321-c260-476f-ad2d-1ba63e188c88", 00:15:14.844 "assigned_rate_limits": { 00:15:14.844 "rw_ios_per_sec": 0, 00:15:14.844 "rw_mbytes_per_sec": 0, 00:15:14.844 "r_mbytes_per_sec": 0, 00:15:14.844 "w_mbytes_per_sec": 0 00:15:14.844 }, 00:15:14.844 "claimed": false, 00:15:14.844 "zoned": false, 00:15:14.844 "supported_io_types": { 00:15:14.844 "read": true, 00:15:14.844 "write": true, 00:15:14.844 "unmap": true, 00:15:14.844 "write_zeroes": true, 00:15:14.844 "flush": true, 00:15:14.844 "reset": true, 00:15:14.844 "compare": false, 00:15:14.844 "compare_and_write": false, 00:15:14.844 "abort": false, 00:15:14.844 "nvme_admin": false, 00:15:14.844 "nvme_io": false 00:15:14.844 }, 00:15:14.844 "memory_domains": [ 00:15:14.844 { 00:15:14.844 "dma_device_id": "system", 00:15:14.844 "dma_device_type": 1 00:15:14.844 }, 00:15:14.844 { 00:15:14.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.844 "dma_device_type": 2 00:15:14.844 }, 00:15:14.844 { 00:15:14.844 "dma_device_id": "system", 00:15:14.844 "dma_device_type": 1 00:15:14.844 }, 00:15:14.844 { 00:15:14.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.844 "dma_device_type": 2 00:15:14.844 } 00:15:14.844 ], 00:15:14.844 "driver_specific": { 00:15:14.844 "raid": { 00:15:14.844 "uuid": "95bd6321-c260-476f-ad2d-1ba63e188c88", 00:15:14.844 "strip_size_kb": 64, 00:15:14.844 "state": "online", 00:15:14.844 "raid_level": "concat", 00:15:14.844 "superblock": true, 00:15:14.844 "num_base_bdevs": 2, 00:15:14.844 "num_base_bdevs_discovered": 2, 00:15:14.844 "num_base_bdevs_operational": 2, 00:15:14.844 "base_bdevs_list": [ 00:15:14.844 { 00:15:14.844 "name": "pt1", 00:15:14.844 "uuid": "e8ac5805-ba1b-556f-8360-6def399031de", 00:15:14.844 "is_configured": true, 00:15:14.844 "data_offset": 2048, 00:15:14.844 "data_size": 63488 00:15:14.844 }, 00:15:14.844 { 00:15:14.844 "name": "pt2", 00:15:14.844 "uuid": "a54fb879-1b58-58b6-a530-59945e1e14c9", 00:15:14.844 "is_configured": true, 00:15:14.844 "data_offset": 2048, 00:15:14.844 "data_size": 63488 00:15:14.844 } 00:15:14.844 ] 00:15:14.844 } 00:15:14.844 } 00:15:14.844 }' 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:14.844 pt2' 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:14.844 11:57:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:14.844 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:15.102 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:15.102 "name": "pt1", 00:15:15.102 "aliases": [ 00:15:15.102 "e8ac5805-ba1b-556f-8360-6def399031de" 00:15:15.102 ], 00:15:15.102 "product_name": "passthru", 00:15:15.102 "block_size": 512, 00:15:15.102 "num_blocks": 65536, 00:15:15.102 "uuid": "e8ac5805-ba1b-556f-8360-6def399031de", 00:15:15.102 "assigned_rate_limits": { 00:15:15.102 "rw_ios_per_sec": 0, 00:15:15.102 "rw_mbytes_per_sec": 0, 00:15:15.102 "r_mbytes_per_sec": 0, 00:15:15.102 "w_mbytes_per_sec": 0 00:15:15.102 }, 00:15:15.102 "claimed": true, 00:15:15.102 "claim_type": "exclusive_write", 00:15:15.102 "zoned": false, 00:15:15.102 "supported_io_types": { 00:15:15.102 "read": true, 00:15:15.102 "write": true, 00:15:15.102 "unmap": true, 00:15:15.102 "write_zeroes": true, 00:15:15.102 "flush": true, 00:15:15.102 "reset": true, 00:15:15.102 "compare": false, 00:15:15.102 "compare_and_write": false, 00:15:15.102 "abort": true, 00:15:15.102 "nvme_admin": false, 00:15:15.102 "nvme_io": false 00:15:15.102 }, 00:15:15.102 "memory_domains": [ 00:15:15.102 { 00:15:15.102 "dma_device_id": "system", 00:15:15.102 "dma_device_type": 1 00:15:15.102 }, 00:15:15.102 { 00:15:15.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.102 "dma_device_type": 2 00:15:15.102 } 00:15:15.102 ], 00:15:15.102 "driver_specific": { 00:15:15.102 "passthru": { 00:15:15.102 "name": "pt1", 00:15:15.102 "base_bdev_name": "malloc1" 00:15:15.102 } 00:15:15.102 } 00:15:15.102 }' 00:15:15.102 11:57:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.364 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.364 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:15.364 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.364 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.364 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:15.364 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.364 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.621 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:15.621 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:15.621 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:15.621 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:15.621 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:15.621 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:15.621 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:15.880 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:15.880 "name": "pt2", 00:15:15.880 "aliases": [ 00:15:15.880 "a54fb879-1b58-58b6-a530-59945e1e14c9" 
00:15:15.880 ], 00:15:15.880 "product_name": "passthru", 00:15:15.880 "block_size": 512, 00:15:15.880 "num_blocks": 65536, 00:15:15.880 "uuid": "a54fb879-1b58-58b6-a530-59945e1e14c9", 00:15:15.880 "assigned_rate_limits": { 00:15:15.880 "rw_ios_per_sec": 0, 00:15:15.880 "rw_mbytes_per_sec": 0, 00:15:15.880 "r_mbytes_per_sec": 0, 00:15:15.880 "w_mbytes_per_sec": 0 00:15:15.880 }, 00:15:15.880 "claimed": true, 00:15:15.880 "claim_type": "exclusive_write", 00:15:15.880 "zoned": false, 00:15:15.880 "supported_io_types": { 00:15:15.880 "read": true, 00:15:15.880 "write": true, 00:15:15.880 "unmap": true, 00:15:15.880 "write_zeroes": true, 00:15:15.880 "flush": true, 00:15:15.880 "reset": true, 00:15:15.880 "compare": false, 00:15:15.880 "compare_and_write": false, 00:15:15.880 "abort": true, 00:15:15.880 "nvme_admin": false, 00:15:15.880 "nvme_io": false 00:15:15.880 }, 00:15:15.880 "memory_domains": [ 00:15:15.880 { 00:15:15.880 "dma_device_id": "system", 00:15:15.880 "dma_device_type": 1 00:15:15.880 }, 00:15:15.880 { 00:15:15.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.880 "dma_device_type": 2 00:15:15.880 } 00:15:15.880 ], 00:15:15.880 "driver_specific": { 00:15:15.880 "passthru": { 00:15:15.880 "name": "pt2", 00:15:15.880 "base_bdev_name": "malloc2" 00:15:15.880 } 00:15:15.880 } 00:15:15.880 }' 00:15:15.880 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.880 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:15.880 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:15.880 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.137 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:16.137 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:16.137 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.137 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:16.137 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:16.137 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.137 11:57:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:16.394 11:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:16.394 11:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:16.394 11:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:16.394 [2024-07-21 11:57:15.259008] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 95bd6321-c260-476f-ad2d-1ba63e188c88 '!=' 95bd6321-c260-476f-ad2d-1ba63e188c88 ']' 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 133411 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@946 -- # '[' -z 133411 ']' 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 133411 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133411 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133411' 00:15:16.653 killing process with pid 133411 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 133411 00:15:16.653 [2024-07-21 11:57:15.308687] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.653 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 133411 00:15:16.653 [2024-07-21 11:57:15.308822] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.653 [2024-07-21 11:57:15.308904] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.653 [2024-07-21 11:57:15.308924] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:15:16.653 [2024-07-21 11:57:15.341281] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.911 ************************************ 00:15:16.911 END TEST raid_superblock_test 00:15:16.911 ************************************ 00:15:16.911 11:57:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:16.911 00:15:16.911 real 0m11.429s 00:15:16.911 user 0m21.035s 00:15:16.911 sys 0m1.419s 00:15:16.911 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:16.911 11:57:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.911 11:57:15 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:15:16.911 11:57:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:16.911 11:57:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:16.911 11:57:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.911 ************************************ 00:15:16.911 START TEST raid_read_error_test 00:15:16.911 ************************************ 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 2 read 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:16.911 
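Note: raid_read_error_test (raid_io_error_test with error_io_type=read) first derives the base bdev names from a counter, as traced above; each name later gets its own malloc/error/passthru stack. A minimal sketch of that name-list construction, assuming two base bdevs as in this run:
    # reproduces the loop traced above: yields base_bdevs=(BaseBdev1 BaseBdev2)
    num_base_bdevs=2
    base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))
    printf '%s\n' "${base_bdevs[@]}"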
11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:16.911 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.iLLsp86QkS 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=133793 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 133793 /var/tmp/spdk-raid.sock 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 133793 ']' 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:17.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:17.178 11:57:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.178 [2024-07-21 11:57:15.844604] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
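Note: with bdevperf started against /var/tmp/spdk-raid.sock (the -z flag keeps it idle until perform_tests is requested), the next trace lines create an error-capable stack per base bdev, build the concat raid, kick off the workload, and arm read-failure injection. A condensed sketch of that sequence using the commands shown in this run; the rpc variable is local shorthand and the exact ordering around the background perform_tests call is simplified from the trace:
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # per base bdev: malloc -> error bdev (EE_ prefix) -> passthru exposed under the BaseBdevN name
    $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    $rpc bdev_error_create BaseBdev1_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    $rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    $rpc bdev_error_create BaseBdev2_malloc
    $rpc bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
    # concat raid with 64 KiB strips and a superblock on the passthru bdevs
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
    # trigger the preconfigured bdevperf workload, then inject read failures on the first base bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests &
    sleep 1
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure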
00:15:17.178 [2024-07-21 11:57:15.844841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133793 ] 00:15:17.178 [2024-07-21 11:57:16.012924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.449 [2024-07-21 11:57:16.112772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.449 [2024-07-21 11:57:16.172316] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.016 11:57:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:18.016 11:57:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:15:18.016 11:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:18.016 11:57:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:18.274 BaseBdev1_malloc 00:15:18.274 11:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:18.531 true 00:15:18.531 11:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:18.790 [2024-07-21 11:57:17.584883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:18.790 [2024-07-21 11:57:17.585067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.790 [2024-07-21 11:57:17.585223] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:18.790 [2024-07-21 11:57:17.585304] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.790 [2024-07-21 11:57:17.588896] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.790 [2024-07-21 11:57:17.588973] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.790 BaseBdev1 00:15:18.790 11:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:18.790 11:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:19.048 BaseBdev2_malloc 00:15:19.048 11:57:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:19.306 true 00:15:19.306 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:19.564 [2024-07-21 11:57:18.355530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:19.564 [2024-07-21 11:57:18.355696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.564 [2024-07-21 11:57:18.355785] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:15:19.564 [2024-07-21 11:57:18.355885] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.564 [2024-07-21 11:57:18.359371] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.564 [2024-07-21 11:57:18.359446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:19.564 BaseBdev2 00:15:19.564 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:19.822 [2024-07-21 11:57:18.608145] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.822 [2024-07-21 11:57:18.611389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.822 [2024-07-21 11:57:18.611738] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:15:19.822 [2024-07-21 11:57:18.611765] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:19.822 [2024-07-21 11:57:18.611971] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:19.822 [2024-07-21 11:57:18.612569] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:15:19.822 [2024-07-21 11:57:18.612591] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:15:19.822 [2024-07-21 11:57:18.612902] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.822 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.081 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.081 "name": "raid_bdev1", 00:15:20.081 "uuid": "0060cc5d-f07d-4725-a936-b557b6fd7079", 00:15:20.081 "strip_size_kb": 64, 00:15:20.081 "state": "online", 00:15:20.081 "raid_level": "concat", 00:15:20.081 "superblock": true, 00:15:20.081 "num_base_bdevs": 2, 00:15:20.081 "num_base_bdevs_discovered": 2, 00:15:20.081 "num_base_bdevs_operational": 2, 00:15:20.081 "base_bdevs_list": [ 00:15:20.081 { 00:15:20.081 "name": "BaseBdev1", 00:15:20.081 "uuid": 
"059ee01a-f8ef-5fe7-9bd3-596e389119fe", 00:15:20.081 "is_configured": true, 00:15:20.081 "data_offset": 2048, 00:15:20.081 "data_size": 63488 00:15:20.081 }, 00:15:20.081 { 00:15:20.081 "name": "BaseBdev2", 00:15:20.081 "uuid": "8e6a9a2b-91f8-51f0-a5d2-94725fda69a9", 00:15:20.081 "is_configured": true, 00:15:20.081 "data_offset": 2048, 00:15:20.081 "data_size": 63488 00:15:20.081 } 00:15:20.081 ] 00:15:20.081 }' 00:15:20.081 11:57:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.081 11:57:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.646 11:57:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:20.646 11:57:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:20.904 [2024-07-21 11:57:19.592924] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:21.854 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.111 11:57:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.369 11:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.370 "name": "raid_bdev1", 00:15:22.370 "uuid": "0060cc5d-f07d-4725-a936-b557b6fd7079", 00:15:22.370 "strip_size_kb": 64, 00:15:22.370 "state": "online", 00:15:22.370 "raid_level": "concat", 00:15:22.370 "superblock": true, 00:15:22.370 "num_base_bdevs": 2, 00:15:22.370 "num_base_bdevs_discovered": 2, 00:15:22.370 "num_base_bdevs_operational": 2, 00:15:22.370 "base_bdevs_list": [ 00:15:22.370 { 00:15:22.370 "name": "BaseBdev1", 00:15:22.370 "uuid": 
"059ee01a-f8ef-5fe7-9bd3-596e389119fe", 00:15:22.370 "is_configured": true, 00:15:22.370 "data_offset": 2048, 00:15:22.370 "data_size": 63488 00:15:22.370 }, 00:15:22.370 { 00:15:22.370 "name": "BaseBdev2", 00:15:22.370 "uuid": "8e6a9a2b-91f8-51f0-a5d2-94725fda69a9", 00:15:22.370 "is_configured": true, 00:15:22.370 "data_offset": 2048, 00:15:22.370 "data_size": 63488 00:15:22.370 } 00:15:22.370 ] 00:15:22.370 }' 00:15:22.370 11:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.370 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.936 11:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:23.195 [2024-07-21 11:57:21.877881] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.195 [2024-07-21 11:57:21.877964] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.195 [2024-07-21 11:57:21.881375] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.195 [2024-07-21 11:57:21.881447] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.195 [2024-07-21 11:57:21.881492] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.195 [2024-07-21 11:57:21.881503] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:15:23.195 0 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 133793 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 133793 ']' 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 133793 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133793 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133793' 00:15:23.195 killing process with pid 133793 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 133793 00:15:23.195 [2024-07-21 11:57:21.922279] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.195 11:57:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 133793 00:15:23.195 [2024-07-21 11:57:21.945323] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.761 11:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:23.761 11:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.iLLsp86QkS 00:15:23.761 11:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:23.761 11:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.44 00:15:23.762 11:57:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:23.762 11:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:23.762 11:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:23.762 11:57:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.44 != \0\.\0\0 ]] 00:15:23.762 00:15:23.762 real 0m6.568s 00:15:23.762 user 0m10.338s 00:15:23.762 sys 0m0.977s 00:15:23.762 11:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:23.762 11:57:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.762 ************************************ 00:15:23.762 END TEST raid_read_error_test 00:15:23.762 ************************************ 00:15:23.762 11:57:22 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:15:23.762 11:57:22 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:23.762 11:57:22 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:23.762 11:57:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.762 ************************************ 00:15:23.762 START TEST raid_write_error_test 00:15:23.762 ************************************ 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 2 write 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:23.762 11:57:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.bhKJHnUDTV 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=133972 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 133972 /var/tmp/spdk-raid.sock 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 133972 ']' 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:23.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:23.762 11:57:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.762 [2024-07-21 11:57:22.476058] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:23.762 [2024-07-21 11:57:22.476327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133972 ] 00:15:24.020 [2024-07-21 11:57:22.647533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.020 [2024-07-21 11:57:22.770012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.020 [2024-07-21 11:57:22.860922] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.585 11:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:24.585 11:57:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:15:24.843 11:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:24.843 11:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:25.100 BaseBdev1_malloc 00:15:25.100 11:57:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:25.358 true 00:15:25.358 11:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:25.616 [2024-07-21 11:57:24.271224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:25.616 [2024-07-21 11:57:24.271682] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:25.616 [2024-07-21 11:57:24.271805] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:25.616 [2024-07-21 11:57:24.272175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.616 [2024-07-21 11:57:24.275953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.616 [2024-07-21 11:57:24.276141] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.616 BaseBdev1 00:15:25.616 11:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:25.616 11:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:25.875 BaseBdev2_malloc 00:15:25.875 11:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:26.134 true 00:15:26.134 11:57:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:26.392 [2024-07-21 11:57:25.035878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:26.392 [2024-07-21 11:57:25.036159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.392 [2024-07-21 11:57:25.036277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:15:26.392 [2024-07-21 11:57:25.036597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.392 [2024-07-21 11:57:25.039527] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.392 [2024-07-21 11:57:25.039740] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:26.392 BaseBdev2 00:15:26.392 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:15:26.650 [2024-07-21 11:57:25.328289] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.650 [2024-07-21 11:57:25.331191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.650 [2024-07-21 11:57:25.331628] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:15:26.650 [2024-07-21 11:57:25.331785] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:26.650 [2024-07-21 11:57:25.332065] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:26.650 [2024-07-21 11:57:25.332592] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:15:26.650 [2024-07-21 11:57:25.332735] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:15:26.650 [2024-07-21 11:57:25.333123] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=raid_bdev1 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.650 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.908 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.908 "name": "raid_bdev1", 00:15:26.908 "uuid": "996e5b41-5dde-419b-ad32-de90b2f44666", 00:15:26.908 "strip_size_kb": 64, 00:15:26.908 "state": "online", 00:15:26.908 "raid_level": "concat", 00:15:26.908 "superblock": true, 00:15:26.908 "num_base_bdevs": 2, 00:15:26.908 "num_base_bdevs_discovered": 2, 00:15:26.908 "num_base_bdevs_operational": 2, 00:15:26.908 "base_bdevs_list": [ 00:15:26.908 { 00:15:26.908 "name": "BaseBdev1", 00:15:26.908 "uuid": "c40c57f8-1014-5f4f-a90a-6eac6e684a51", 00:15:26.908 "is_configured": true, 00:15:26.908 "data_offset": 2048, 00:15:26.908 "data_size": 63488 00:15:26.908 }, 00:15:26.908 { 00:15:26.908 "name": "BaseBdev2", 00:15:26.908 "uuid": "d79d7b55-fe92-5de5-9db0-26c4d7cef5f7", 00:15:26.908 "is_configured": true, 00:15:26.908 "data_offset": 2048, 00:15:26.908 "data_size": 63488 00:15:26.908 } 00:15:26.908 ] 00:15:26.908 }' 00:15:26.908 11:57:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.908 11:57:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.509 11:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:27.509 11:57:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:27.509 [2024-07-21 11:57:26.313863] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:28.443 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # 
local raid_bdev_name=raid_bdev1 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.701 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.702 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.960 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.960 "name": "raid_bdev1", 00:15:28.960 "uuid": "996e5b41-5dde-419b-ad32-de90b2f44666", 00:15:28.960 "strip_size_kb": 64, 00:15:28.960 "state": "online", 00:15:28.960 "raid_level": "concat", 00:15:28.960 "superblock": true, 00:15:28.960 "num_base_bdevs": 2, 00:15:28.960 "num_base_bdevs_discovered": 2, 00:15:28.960 "num_base_bdevs_operational": 2, 00:15:28.960 "base_bdevs_list": [ 00:15:28.960 { 00:15:28.960 "name": "BaseBdev1", 00:15:28.960 "uuid": "c40c57f8-1014-5f4f-a90a-6eac6e684a51", 00:15:28.960 "is_configured": true, 00:15:28.960 "data_offset": 2048, 00:15:28.960 "data_size": 63488 00:15:28.960 }, 00:15:28.960 { 00:15:28.960 "name": "BaseBdev2", 00:15:28.960 "uuid": "d79d7b55-fe92-5de5-9db0-26c4d7cef5f7", 00:15:28.960 "is_configured": true, 00:15:28.960 "data_offset": 2048, 00:15:28.960 "data_size": 63488 00:15:28.960 } 00:15:28.960 ] 00:15:28.960 }' 00:15:28.960 11:57:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.960 11:57:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:29.895 [2024-07-21 11:57:28.662555] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:29.895 [2024-07-21 11:57:28.662640] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.895 [2024-07-21 11:57:28.665444] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.895 [2024-07-21 11:57:28.665511] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:29.895 [2024-07-21 11:57:28.665552] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:29.895 [2024-07-21 11:57:28.665564] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:15:29.895 0 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 133972 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- 
# '[' -z 133972 ']' 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 133972 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 133972 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 133972' 00:15:29.895 killing process with pid 133972 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 133972 00:15:29.895 [2024-07-21 11:57:28.702140] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:29.895 11:57:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 133972 00:15:29.895 [2024-07-21 11:57:28.721026] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.459 11:57:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.bhKJHnUDTV 00:15:30.459 11:57:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:30.459 11:57:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:30.459 ************************************ 00:15:30.459 END TEST raid_write_error_test 00:15:30.459 ************************************ 00:15:30.459 11:57:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:15:30.459 11:57:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:30.460 11:57:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:30.460 11:57:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:30.460 11:57:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:15:30.460 00:15:30.460 real 0m6.667s 00:15:30.460 user 0m10.580s 00:15:30.460 sys 0m0.967s 00:15:30.460 11:57:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:30.460 11:57:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.460 11:57:29 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:30.460 11:57:29 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:15:30.460 11:57:29 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:30.460 11:57:29 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:30.460 11:57:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.460 ************************************ 00:15:30.460 START TEST raid_state_function_test 00:15:30.460 ************************************ 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 false 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:30.460 
11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=134162 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 134162' 00:15:30.460 Process raid pid: 134162 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 134162 /var/tmp/spdk-raid.sock 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 134162 ']' 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:30.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
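For reference, the failure-rate check that closed raid_read_error_test and raid_write_error_test above reduces to a short shell pipeline over the bdevperf log. The sketch below is reconstructed from the commands visible in this log rather than taken from the test source, and the log path is a placeholder for whatever mktemp -p /raidtest produced during the run:

#!/usr/bin/env bash
# Sketch of the fail_per_s assertion seen in the error tests above.
# bdevperf_log is a placeholder for the random file from 'mktemp -p /raidtest'.
bdevperf_log=/raidtest/tmp.XXXXXXXXXX

# bdevperf prints one summary row per bdev; drop the per-job rows, keep the
# raid_bdev1 row, and take column 6, which the test reads as failed I/O per second.
fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')

# concat has no redundancy, so an error injected into EE_BaseBdev1_malloc must
# surface as failed I/O on raid_bdev1 (0.44/s and 0.43/s in the runs above).
if [[ "$fail_per_s" != "0.00" ]]; then
    echo "errors propagated to raid_bdev1 as expected: $fail_per_s failures/s"
else
    echo "no failures observed on raid_bdev1" >&2
    exit 1
fi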
00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:30.460 11:57:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.460 [2024-07-21 11:57:29.193848] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:30.460 [2024-07-21 11:57:29.194099] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.717 [2024-07-21 11:57:29.358943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.717 [2024-07-21 11:57:29.474239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.717 [2024-07-21 11:57:29.551341] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:31.650 [2024-07-21 11:57:30.433777] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.650 [2024-07-21 11:57:30.433904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.650 [2024-07-21 11:57:30.433931] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.650 [2024-07-21 11:57:30.433957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.650 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.908 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.908 "name": "Existed_Raid", 00:15:31.908 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:31.908 "strip_size_kb": 0, 00:15:31.908 "state": "configuring", 00:15:31.908 "raid_level": "raid1", 00:15:31.908 "superblock": false, 00:15:31.908 "num_base_bdevs": 2, 00:15:31.908 "num_base_bdevs_discovered": 0, 00:15:31.908 "num_base_bdevs_operational": 2, 00:15:31.908 "base_bdevs_list": [ 00:15:31.908 { 00:15:31.908 "name": "BaseBdev1", 00:15:31.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.908 "is_configured": false, 00:15:31.908 "data_offset": 0, 00:15:31.908 "data_size": 0 00:15:31.908 }, 00:15:31.908 { 00:15:31.908 "name": "BaseBdev2", 00:15:31.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.908 "is_configured": false, 00:15:31.908 "data_offset": 0, 00:15:31.908 "data_size": 0 00:15:31.908 } 00:15:31.908 ] 00:15:31.908 }' 00:15:31.908 11:57:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.908 11:57:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.474 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.732 [2024-07-21 11:57:31.529928] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.732 [2024-07-21 11:57:31.530006] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:32.732 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:32.990 [2024-07-21 11:57:31.761963] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.990 [2024-07-21 11:57:31.762120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.990 [2024-07-21 11:57:31.762145] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.990 [2024-07-21 11:57:31.762180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.990 11:57:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.248 [2024-07-21 11:57:32.048150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.248 BaseBdev1 00:15:33.248 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:33.249 11:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:33.249 11:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:33.249 11:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:33.249 11:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:33.249 11:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:33.249 11:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.507 11:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.765 [ 00:15:33.765 { 00:15:33.765 "name": "BaseBdev1", 00:15:33.765 "aliases": [ 00:15:33.765 "51862a81-d4f3-4cad-90bf-bc1bc615123b" 00:15:33.765 ], 00:15:33.765 "product_name": "Malloc disk", 00:15:33.765 "block_size": 512, 00:15:33.765 "num_blocks": 65536, 00:15:33.765 "uuid": "51862a81-d4f3-4cad-90bf-bc1bc615123b", 00:15:33.765 "assigned_rate_limits": { 00:15:33.765 "rw_ios_per_sec": 0, 00:15:33.765 "rw_mbytes_per_sec": 0, 00:15:33.765 "r_mbytes_per_sec": 0, 00:15:33.765 "w_mbytes_per_sec": 0 00:15:33.765 }, 00:15:33.765 "claimed": true, 00:15:33.765 "claim_type": "exclusive_write", 00:15:33.765 "zoned": false, 00:15:33.765 "supported_io_types": { 00:15:33.765 "read": true, 00:15:33.765 "write": true, 00:15:33.765 "unmap": true, 00:15:33.765 "write_zeroes": true, 00:15:33.765 "flush": true, 00:15:33.765 "reset": true, 00:15:33.765 "compare": false, 00:15:33.765 "compare_and_write": false, 00:15:33.765 "abort": true, 00:15:33.765 "nvme_admin": false, 00:15:33.765 "nvme_io": false 00:15:33.765 }, 00:15:33.765 "memory_domains": [ 00:15:33.765 { 00:15:33.765 "dma_device_id": "system", 00:15:33.765 "dma_device_type": 1 00:15:33.765 }, 00:15:33.765 { 00:15:33.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.765 "dma_device_type": 2 00:15:33.765 } 00:15:33.765 ], 00:15:33.765 "driver_specific": {} 00:15:33.765 } 00:15:33.765 ] 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.765 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.023 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.023 "name": "Existed_Raid", 00:15:34.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.023 "strip_size_kb": 0, 00:15:34.023 "state": "configuring", 00:15:34.023 "raid_level": "raid1", 00:15:34.023 "superblock": false, 00:15:34.023 "num_base_bdevs": 2, 00:15:34.023 "num_base_bdevs_discovered": 1, 00:15:34.023 "num_base_bdevs_operational": 2, 00:15:34.023 "base_bdevs_list": [ 
00:15:34.023 { 00:15:34.023 "name": "BaseBdev1", 00:15:34.023 "uuid": "51862a81-d4f3-4cad-90bf-bc1bc615123b", 00:15:34.023 "is_configured": true, 00:15:34.023 "data_offset": 0, 00:15:34.023 "data_size": 65536 00:15:34.023 }, 00:15:34.023 { 00:15:34.023 "name": "BaseBdev2", 00:15:34.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.023 "is_configured": false, 00:15:34.023 "data_offset": 0, 00:15:34.023 "data_size": 0 00:15:34.023 } 00:15:34.023 ] 00:15:34.023 }' 00:15:34.023 11:57:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.023 11:57:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.589 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:34.846 [2024-07-21 11:57:33.632687] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.846 [2024-07-21 11:57:33.632803] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:34.846 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:35.105 [2024-07-21 11:57:33.856770] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:35.105 [2024-07-21 11:57:33.859111] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:35.105 [2024-07-21 11:57:33.859186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.105 11:57:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.363 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.363 
"name": "Existed_Raid", 00:15:35.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.363 "strip_size_kb": 0, 00:15:35.363 "state": "configuring", 00:15:35.363 "raid_level": "raid1", 00:15:35.363 "superblock": false, 00:15:35.363 "num_base_bdevs": 2, 00:15:35.363 "num_base_bdevs_discovered": 1, 00:15:35.363 "num_base_bdevs_operational": 2, 00:15:35.363 "base_bdevs_list": [ 00:15:35.363 { 00:15:35.363 "name": "BaseBdev1", 00:15:35.363 "uuid": "51862a81-d4f3-4cad-90bf-bc1bc615123b", 00:15:35.363 "is_configured": true, 00:15:35.363 "data_offset": 0, 00:15:35.363 "data_size": 65536 00:15:35.363 }, 00:15:35.363 { 00:15:35.363 "name": "BaseBdev2", 00:15:35.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.363 "is_configured": false, 00:15:35.363 "data_offset": 0, 00:15:35.363 "data_size": 0 00:15:35.363 } 00:15:35.363 ] 00:15:35.363 }' 00:15:35.363 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.363 11:57:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.297 11:57:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:36.297 [2024-07-21 11:57:35.116132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.297 [2024-07-21 11:57:35.116240] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:36.297 [2024-07-21 11:57:35.116259] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:36.297 [2024-07-21 11:57:35.116465] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:36.297 [2024-07-21 11:57:35.117154] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:36.297 [2024-07-21 11:57:35.117186] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:36.297 [2024-07-21 11:57:35.117636] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.297 BaseBdev2 00:15:36.297 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:36.297 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:36.297 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:36.297 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:15:36.297 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:36.297 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:36.297 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.554 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.812 [ 00:15:36.812 { 00:15:36.812 "name": "BaseBdev2", 00:15:36.812 "aliases": [ 00:15:36.812 "f1c83bd3-e67e-4f90-88ef-8a4f62d7a11f" 00:15:36.812 ], 00:15:36.812 "product_name": "Malloc disk", 00:15:36.812 "block_size": 512, 00:15:36.812 "num_blocks": 65536, 00:15:36.812 "uuid": 
"f1c83bd3-e67e-4f90-88ef-8a4f62d7a11f", 00:15:36.812 "assigned_rate_limits": { 00:15:36.812 "rw_ios_per_sec": 0, 00:15:36.812 "rw_mbytes_per_sec": 0, 00:15:36.812 "r_mbytes_per_sec": 0, 00:15:36.812 "w_mbytes_per_sec": 0 00:15:36.812 }, 00:15:36.812 "claimed": true, 00:15:36.812 "claim_type": "exclusive_write", 00:15:36.812 "zoned": false, 00:15:36.812 "supported_io_types": { 00:15:36.812 "read": true, 00:15:36.812 "write": true, 00:15:36.812 "unmap": true, 00:15:36.812 "write_zeroes": true, 00:15:36.812 "flush": true, 00:15:36.812 "reset": true, 00:15:36.812 "compare": false, 00:15:36.812 "compare_and_write": false, 00:15:36.812 "abort": true, 00:15:36.812 "nvme_admin": false, 00:15:36.812 "nvme_io": false 00:15:36.812 }, 00:15:36.812 "memory_domains": [ 00:15:36.812 { 00:15:36.812 "dma_device_id": "system", 00:15:36.812 "dma_device_type": 1 00:15:36.812 }, 00:15:36.812 { 00:15:36.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.812 "dma_device_type": 2 00:15:36.812 } 00:15:36.812 ], 00:15:36.812 "driver_specific": {} 00:15:36.812 } 00:15:36.812 ] 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.812 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.070 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.070 "name": "Existed_Raid", 00:15:37.070 "uuid": "04b11e4c-2795-424f-b894-f0c2a5998854", 00:15:37.070 "strip_size_kb": 0, 00:15:37.070 "state": "online", 00:15:37.070 "raid_level": "raid1", 00:15:37.070 "superblock": false, 00:15:37.070 "num_base_bdevs": 2, 00:15:37.070 "num_base_bdevs_discovered": 2, 00:15:37.070 "num_base_bdevs_operational": 2, 00:15:37.070 "base_bdevs_list": [ 00:15:37.070 { 00:15:37.070 "name": "BaseBdev1", 00:15:37.070 "uuid": "51862a81-d4f3-4cad-90bf-bc1bc615123b", 00:15:37.070 "is_configured": true, 00:15:37.070 "data_offset": 0, 00:15:37.070 "data_size": 65536 
00:15:37.070 }, 00:15:37.070 { 00:15:37.070 "name": "BaseBdev2", 00:15:37.070 "uuid": "f1c83bd3-e67e-4f90-88ef-8a4f62d7a11f", 00:15:37.070 "is_configured": true, 00:15:37.070 "data_offset": 0, 00:15:37.070 "data_size": 65536 00:15:37.070 } 00:15:37.070 ] 00:15:37.070 }' 00:15:37.070 11:57:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.070 11:57:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.635 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:37.635 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:37.635 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:37.635 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:37.635 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:37.635 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:37.635 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:37.635 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:37.892 [2024-07-21 11:57:36.684951] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.892 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:37.892 "name": "Existed_Raid", 00:15:37.892 "aliases": [ 00:15:37.892 "04b11e4c-2795-424f-b894-f0c2a5998854" 00:15:37.892 ], 00:15:37.892 "product_name": "Raid Volume", 00:15:37.892 "block_size": 512, 00:15:37.892 "num_blocks": 65536, 00:15:37.892 "uuid": "04b11e4c-2795-424f-b894-f0c2a5998854", 00:15:37.892 "assigned_rate_limits": { 00:15:37.892 "rw_ios_per_sec": 0, 00:15:37.892 "rw_mbytes_per_sec": 0, 00:15:37.892 "r_mbytes_per_sec": 0, 00:15:37.892 "w_mbytes_per_sec": 0 00:15:37.892 }, 00:15:37.892 "claimed": false, 00:15:37.892 "zoned": false, 00:15:37.892 "supported_io_types": { 00:15:37.892 "read": true, 00:15:37.892 "write": true, 00:15:37.893 "unmap": false, 00:15:37.893 "write_zeroes": true, 00:15:37.893 "flush": false, 00:15:37.893 "reset": true, 00:15:37.893 "compare": false, 00:15:37.893 "compare_and_write": false, 00:15:37.893 "abort": false, 00:15:37.893 "nvme_admin": false, 00:15:37.893 "nvme_io": false 00:15:37.893 }, 00:15:37.893 "memory_domains": [ 00:15:37.893 { 00:15:37.893 "dma_device_id": "system", 00:15:37.893 "dma_device_type": 1 00:15:37.893 }, 00:15:37.893 { 00:15:37.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.893 "dma_device_type": 2 00:15:37.893 }, 00:15:37.893 { 00:15:37.893 "dma_device_id": "system", 00:15:37.893 "dma_device_type": 1 00:15:37.893 }, 00:15:37.893 { 00:15:37.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.893 "dma_device_type": 2 00:15:37.893 } 00:15:37.893 ], 00:15:37.893 "driver_specific": { 00:15:37.893 "raid": { 00:15:37.893 "uuid": "04b11e4c-2795-424f-b894-f0c2a5998854", 00:15:37.893 "strip_size_kb": 0, 00:15:37.893 "state": "online", 00:15:37.893 "raid_level": "raid1", 00:15:37.893 "superblock": false, 00:15:37.893 "num_base_bdevs": 2, 00:15:37.893 "num_base_bdevs_discovered": 2, 00:15:37.893 "num_base_bdevs_operational": 2, 00:15:37.893 "base_bdevs_list": [ 00:15:37.893 { 00:15:37.893 
"name": "BaseBdev1", 00:15:37.893 "uuid": "51862a81-d4f3-4cad-90bf-bc1bc615123b", 00:15:37.893 "is_configured": true, 00:15:37.893 "data_offset": 0, 00:15:37.893 "data_size": 65536 00:15:37.893 }, 00:15:37.893 { 00:15:37.893 "name": "BaseBdev2", 00:15:37.893 "uuid": "f1c83bd3-e67e-4f90-88ef-8a4f62d7a11f", 00:15:37.893 "is_configured": true, 00:15:37.893 "data_offset": 0, 00:15:37.893 "data_size": 65536 00:15:37.893 } 00:15:37.893 ] 00:15:37.893 } 00:15:37.893 } 00:15:37.893 }' 00:15:37.893 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:37.893 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:37.893 BaseBdev2' 00:15:37.893 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:37.893 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:37.893 11:57:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.457 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.457 "name": "BaseBdev1", 00:15:38.457 "aliases": [ 00:15:38.457 "51862a81-d4f3-4cad-90bf-bc1bc615123b" 00:15:38.457 ], 00:15:38.457 "product_name": "Malloc disk", 00:15:38.457 "block_size": 512, 00:15:38.457 "num_blocks": 65536, 00:15:38.457 "uuid": "51862a81-d4f3-4cad-90bf-bc1bc615123b", 00:15:38.457 "assigned_rate_limits": { 00:15:38.457 "rw_ios_per_sec": 0, 00:15:38.457 "rw_mbytes_per_sec": 0, 00:15:38.457 "r_mbytes_per_sec": 0, 00:15:38.457 "w_mbytes_per_sec": 0 00:15:38.457 }, 00:15:38.457 "claimed": true, 00:15:38.457 "claim_type": "exclusive_write", 00:15:38.457 "zoned": false, 00:15:38.457 "supported_io_types": { 00:15:38.457 "read": true, 00:15:38.457 "write": true, 00:15:38.457 "unmap": true, 00:15:38.457 "write_zeroes": true, 00:15:38.457 "flush": true, 00:15:38.457 "reset": true, 00:15:38.457 "compare": false, 00:15:38.457 "compare_and_write": false, 00:15:38.457 "abort": true, 00:15:38.457 "nvme_admin": false, 00:15:38.457 "nvme_io": false 00:15:38.457 }, 00:15:38.457 "memory_domains": [ 00:15:38.457 { 00:15:38.457 "dma_device_id": "system", 00:15:38.457 "dma_device_type": 1 00:15:38.457 }, 00:15:38.457 { 00:15:38.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.457 "dma_device_type": 2 00:15:38.457 } 00:15:38.457 ], 00:15:38.457 "driver_specific": {} 00:15:38.457 }' 00:15:38.457 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.457 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.457 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.457 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.457 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:38.457 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:38.457 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.457 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:38.715 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:38.715 
11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.715 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:38.715 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:38.715 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:38.715 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:38.715 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:38.973 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:38.973 "name": "BaseBdev2", 00:15:38.973 "aliases": [ 00:15:38.973 "f1c83bd3-e67e-4f90-88ef-8a4f62d7a11f" 00:15:38.973 ], 00:15:38.973 "product_name": "Malloc disk", 00:15:38.973 "block_size": 512, 00:15:38.973 "num_blocks": 65536, 00:15:38.973 "uuid": "f1c83bd3-e67e-4f90-88ef-8a4f62d7a11f", 00:15:38.973 "assigned_rate_limits": { 00:15:38.973 "rw_ios_per_sec": 0, 00:15:38.973 "rw_mbytes_per_sec": 0, 00:15:38.973 "r_mbytes_per_sec": 0, 00:15:38.973 "w_mbytes_per_sec": 0 00:15:38.973 }, 00:15:38.973 "claimed": true, 00:15:38.973 "claim_type": "exclusive_write", 00:15:38.973 "zoned": false, 00:15:38.973 "supported_io_types": { 00:15:38.973 "read": true, 00:15:38.973 "write": true, 00:15:38.973 "unmap": true, 00:15:38.973 "write_zeroes": true, 00:15:38.973 "flush": true, 00:15:38.973 "reset": true, 00:15:38.973 "compare": false, 00:15:38.973 "compare_and_write": false, 00:15:38.973 "abort": true, 00:15:38.973 "nvme_admin": false, 00:15:38.973 "nvme_io": false 00:15:38.973 }, 00:15:38.973 "memory_domains": [ 00:15:38.973 { 00:15:38.973 "dma_device_id": "system", 00:15:38.973 "dma_device_type": 1 00:15:38.973 }, 00:15:38.973 { 00:15:38.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.973 "dma_device_type": 2 00:15:38.973 } 00:15:38.973 ], 00:15:38.973 "driver_specific": {} 00:15:38.973 }' 00:15:38.973 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.973 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:38.973 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:38.973 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.230 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:39.230 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:39.230 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.230 11:57:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:39.230 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:39.230 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.230 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:39.487 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:39.487 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:39.744 
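The step that follows, deleting BaseBdev1 while Existed_Raid is online, is the interesting part of this state test: because raid1 has redundancy, the array is expected to stay online with a single operational base bdev rather than dropping offline. A minimal way to drive and verify the same transition by hand, using the socket and names from this run (a sketch, not the test's verify_raid_bdev_state helper):

#!/usr/bin/env bash
set -e
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Pull BaseBdev1 out from under the online raid1 volume.
$rpc bdev_malloc_delete BaseBdev1

# raid1 tolerates losing one of its two base bdevs, so Existed_Raid should still
# be online with one base bdev discovered and one operational.
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r .state      <<< "$info") == online ]]
[[ $(jq -r .raid_level <<< "$info") == raid1 ]]
[[ $(jq .num_base_bdevs_discovered  <<< "$info") == 1 ]]
[[ $(jq .num_base_bdevs_operational <<< "$info") == 1 ]]
echo "Existed_Raid survived removal of BaseBdev1"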
[2024-07-21 11:57:38.369259] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.744 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.010 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.010 "name": "Existed_Raid", 00:15:40.010 "uuid": "04b11e4c-2795-424f-b894-f0c2a5998854", 00:15:40.010 "strip_size_kb": 0, 00:15:40.010 "state": "online", 00:15:40.010 "raid_level": "raid1", 00:15:40.010 "superblock": false, 00:15:40.010 "num_base_bdevs": 2, 00:15:40.010 "num_base_bdevs_discovered": 1, 00:15:40.010 "num_base_bdevs_operational": 1, 00:15:40.010 "base_bdevs_list": [ 00:15:40.010 { 00:15:40.010 "name": null, 00:15:40.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.010 "is_configured": false, 00:15:40.010 "data_offset": 0, 00:15:40.010 "data_size": 65536 00:15:40.010 }, 00:15:40.010 { 00:15:40.010 "name": "BaseBdev2", 00:15:40.010 "uuid": "f1c83bd3-e67e-4f90-88ef-8a4f62d7a11f", 00:15:40.010 "is_configured": true, 00:15:40.010 "data_offset": 0, 00:15:40.010 "data_size": 65536 00:15:40.010 } 00:15:40.010 ] 00:15:40.010 }' 00:15:40.010 11:57:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.010 11:57:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.574 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:40.574 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:40.574 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.574 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:40.832 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:40.832 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.832 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:41.090 [2024-07-21 11:57:39.851707] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.090 [2024-07-21 11:57:39.852178] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.090 [2024-07-21 11:57:39.870915] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.090 [2024-07-21 11:57:39.871323] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.090 [2024-07-21 11:57:39.871479] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:41.090 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:41.090 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:41.090 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.090 11:57:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 134162 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 134162 ']' 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 134162 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 134162 00:15:41.348 killing process with pid 134162 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 134162' 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 134162 00:15:41.348 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 134162 00:15:41.348 [2024-07-21 11:57:40.185095] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.348 [2024-07-21 
11:57:40.185234] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.914 ************************************ 00:15:41.914 END TEST raid_state_function_test 00:15:41.914 ************************************ 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:41.914 00:15:41.914 real 0m11.378s 00:15:41.914 user 0m20.866s 00:15:41.914 sys 0m1.391s 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.914 11:57:40 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:41.914 11:57:40 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:41.914 11:57:40 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:41.914 11:57:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.914 ************************************ 00:15:41.914 START TEST raid_state_function_test_sb 00:15:41.914 ************************************ 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:41.914 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:41.915 11:57:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=134538 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 134538' 00:15:41.915 Process raid pid: 134538 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 134538 /var/tmp/spdk-raid.sock 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 134538 ']' 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:41.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:41.915 11:57:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.915 [2024-07-21 11:57:40.632096] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
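The lines above show the per-test harness pattern: each run_test case launches a fresh bdev_svc app against a private RPC socket, records its pid, and waits for the socket to answer before driving it with rpc.py; killprocess tears it down at the end of the case. A simplified stand-in for that flow, using the paths from the trace; the polling loop approximates the suite's waitforlisten helper rather than reproducing it.

spdk_root=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk-raid.sock

"$spdk_root/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -i 0 -L bdev_raid &
raid_pid=$!

# Poll until the RPC server responds (stand-in for waitforlisten).
until "$spdk_root/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done

# ... create base bdevs, build the raid bdev, assert its state ...

kill "$raid_pid"
wait "$raid_pid" || true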
00:15:41.915 [2024-07-21 11:57:40.632626] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.173 [2024-07-21 11:57:40.800059] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.173 [2024-07-21 11:57:40.923257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.173 [2024-07-21 11:57:40.996765] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.738 11:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:42.738 11:57:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:15:42.738 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:43.005 [2024-07-21 11:57:41.834759] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.005 [2024-07-21 11:57:41.835213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.005 [2024-07-21 11:57:41.835332] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.005 [2024-07-21 11:57:41.835483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.005 11:57:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.276 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:43.276 "name": "Existed_Raid", 00:15:43.276 "uuid": "273e9729-03d1-4c03-ae45-cb9802412f72", 00:15:43.276 "strip_size_kb": 0, 00:15:43.276 "state": "configuring", 00:15:43.276 "raid_level": "raid1", 00:15:43.276 "superblock": true, 00:15:43.276 "num_base_bdevs": 2, 00:15:43.276 "num_base_bdevs_discovered": 0, 00:15:43.276 "num_base_bdevs_operational": 2, 
00:15:43.276 "base_bdevs_list": [ 00:15:43.276 { 00:15:43.276 "name": "BaseBdev1", 00:15:43.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.276 "is_configured": false, 00:15:43.276 "data_offset": 0, 00:15:43.276 "data_size": 0 00:15:43.276 }, 00:15:43.276 { 00:15:43.276 "name": "BaseBdev2", 00:15:43.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.276 "is_configured": false, 00:15:43.276 "data_offset": 0, 00:15:43.276 "data_size": 0 00:15:43.276 } 00:15:43.276 ] 00:15:43.276 }' 00:15:43.276 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.276 11:57:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.207 11:57:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:44.207 [2024-07-21 11:57:43.046940] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:44.207 [2024-07-21 11:57:43.047365] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:15:44.207 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:44.464 [2024-07-21 11:57:43.270978] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:44.464 [2024-07-21 11:57:43.271434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:44.464 [2024-07-21 11:57:43.271568] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:44.464 [2024-07-21 11:57:43.271648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:44.464 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.771 [2024-07-21 11:57:43.493262] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.771 BaseBdev1 00:15:44.771 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:44.771 11:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:15:44.771 11:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:44.771 11:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:44.771 11:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:44.771 11:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:44.771 11:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.027 11:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:45.284 [ 00:15:45.284 { 00:15:45.284 "name": "BaseBdev1", 00:15:45.284 "aliases": [ 00:15:45.284 "63b884ae-dc70-433e-876b-36fd67b35c53" 00:15:45.284 ], 00:15:45.284 "product_name": 
"Malloc disk", 00:15:45.284 "block_size": 512, 00:15:45.284 "num_blocks": 65536, 00:15:45.284 "uuid": "63b884ae-dc70-433e-876b-36fd67b35c53", 00:15:45.284 "assigned_rate_limits": { 00:15:45.284 "rw_ios_per_sec": 0, 00:15:45.284 "rw_mbytes_per_sec": 0, 00:15:45.284 "r_mbytes_per_sec": 0, 00:15:45.284 "w_mbytes_per_sec": 0 00:15:45.285 }, 00:15:45.285 "claimed": true, 00:15:45.285 "claim_type": "exclusive_write", 00:15:45.285 "zoned": false, 00:15:45.285 "supported_io_types": { 00:15:45.285 "read": true, 00:15:45.285 "write": true, 00:15:45.285 "unmap": true, 00:15:45.285 "write_zeroes": true, 00:15:45.285 "flush": true, 00:15:45.285 "reset": true, 00:15:45.285 "compare": false, 00:15:45.285 "compare_and_write": false, 00:15:45.285 "abort": true, 00:15:45.285 "nvme_admin": false, 00:15:45.285 "nvme_io": false 00:15:45.285 }, 00:15:45.285 "memory_domains": [ 00:15:45.285 { 00:15:45.285 "dma_device_id": "system", 00:15:45.285 "dma_device_type": 1 00:15:45.285 }, 00:15:45.285 { 00:15:45.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.285 "dma_device_type": 2 00:15:45.285 } 00:15:45.285 ], 00:15:45.285 "driver_specific": {} 00:15:45.285 } 00:15:45.285 ] 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.285 11:57:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.542 11:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:45.542 "name": "Existed_Raid", 00:15:45.542 "uuid": "ff85c88e-32c9-4965-8393-5f90e42d8697", 00:15:45.542 "strip_size_kb": 0, 00:15:45.542 "state": "configuring", 00:15:45.542 "raid_level": "raid1", 00:15:45.542 "superblock": true, 00:15:45.542 "num_base_bdevs": 2, 00:15:45.542 "num_base_bdevs_discovered": 1, 00:15:45.542 "num_base_bdevs_operational": 2, 00:15:45.542 "base_bdevs_list": [ 00:15:45.542 { 00:15:45.542 "name": "BaseBdev1", 00:15:45.542 "uuid": "63b884ae-dc70-433e-876b-36fd67b35c53", 00:15:45.542 "is_configured": true, 00:15:45.542 "data_offset": 2048, 00:15:45.542 "data_size": 63488 00:15:45.542 }, 00:15:45.542 { 00:15:45.542 "name": 
"BaseBdev2", 00:15:45.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.542 "is_configured": false, 00:15:45.542 "data_offset": 0, 00:15:45.542 "data_size": 0 00:15:45.542 } 00:15:45.542 ] 00:15:45.542 }' 00:15:45.542 11:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:45.542 11:57:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.107 11:57:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:46.365 [2024-07-21 11:57:45.041753] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:46.365 [2024-07-21 11:57:45.042199] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:46.365 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:46.623 [2024-07-21 11:57:45.309882] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:46.623 [2024-07-21 11:57:45.312533] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:46.623 [2024-07-21 11:57:45.312762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.623 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.882 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.882 "name": "Existed_Raid", 00:15:46.882 "uuid": "b3d59bd7-e4df-4615-a065-9b36f7f1defe", 00:15:46.882 "strip_size_kb": 0, 00:15:46.882 "state": "configuring", 00:15:46.882 "raid_level": "raid1", 
00:15:46.882 "superblock": true, 00:15:46.882 "num_base_bdevs": 2, 00:15:46.882 "num_base_bdevs_discovered": 1, 00:15:46.882 "num_base_bdevs_operational": 2, 00:15:46.882 "base_bdevs_list": [ 00:15:46.882 { 00:15:46.882 "name": "BaseBdev1", 00:15:46.882 "uuid": "63b884ae-dc70-433e-876b-36fd67b35c53", 00:15:46.882 "is_configured": true, 00:15:46.882 "data_offset": 2048, 00:15:46.882 "data_size": 63488 00:15:46.882 }, 00:15:46.882 { 00:15:46.882 "name": "BaseBdev2", 00:15:46.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.882 "is_configured": false, 00:15:46.882 "data_offset": 0, 00:15:46.882 "data_size": 0 00:15:46.882 } 00:15:46.882 ] 00:15:46.882 }' 00:15:46.882 11:57:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.882 11:57:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.448 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:47.722 [2024-07-21 11:57:46.507987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:47.722 [2024-07-21 11:57:46.508749] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:15:47.722 [2024-07-21 11:57:46.508931] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:47.722 [2024-07-21 11:57:46.509179] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:47.722 [2024-07-21 11:57:46.509956] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:15:47.722 [2024-07-21 11:57:46.510130] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:15:47.722 [2024-07-21 11:57:46.510445] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.722 BaseBdev2 00:15:47.722 11:57:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:47.722 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:15:47.722 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:47.722 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:15:47.722 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:47.722 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:47.722 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:47.980 11:57:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:48.238 [ 00:15:48.238 { 00:15:48.238 "name": "BaseBdev2", 00:15:48.238 "aliases": [ 00:15:48.238 "3c1bba0b-c8ee-4b3f-adbd-5074ba7c1b0c" 00:15:48.238 ], 00:15:48.238 "product_name": "Malloc disk", 00:15:48.238 "block_size": 512, 00:15:48.238 "num_blocks": 65536, 00:15:48.238 "uuid": "3c1bba0b-c8ee-4b3f-adbd-5074ba7c1b0c", 00:15:48.239 "assigned_rate_limits": { 00:15:48.239 "rw_ios_per_sec": 0, 00:15:48.239 "rw_mbytes_per_sec": 0, 00:15:48.239 
"r_mbytes_per_sec": 0, 00:15:48.239 "w_mbytes_per_sec": 0 00:15:48.239 }, 00:15:48.239 "claimed": true, 00:15:48.239 "claim_type": "exclusive_write", 00:15:48.239 "zoned": false, 00:15:48.239 "supported_io_types": { 00:15:48.239 "read": true, 00:15:48.239 "write": true, 00:15:48.239 "unmap": true, 00:15:48.239 "write_zeroes": true, 00:15:48.239 "flush": true, 00:15:48.239 "reset": true, 00:15:48.239 "compare": false, 00:15:48.239 "compare_and_write": false, 00:15:48.239 "abort": true, 00:15:48.239 "nvme_admin": false, 00:15:48.239 "nvme_io": false 00:15:48.239 }, 00:15:48.239 "memory_domains": [ 00:15:48.239 { 00:15:48.239 "dma_device_id": "system", 00:15:48.239 "dma_device_type": 1 00:15:48.239 }, 00:15:48.239 { 00:15:48.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.239 "dma_device_type": 2 00:15:48.239 } 00:15:48.239 ], 00:15:48.239 "driver_specific": {} 00:15:48.239 } 00:15:48.239 ] 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.239 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:48.496 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:48.496 "name": "Existed_Raid", 00:15:48.496 "uuid": "b3d59bd7-e4df-4615-a065-9b36f7f1defe", 00:15:48.496 "strip_size_kb": 0, 00:15:48.496 "state": "online", 00:15:48.496 "raid_level": "raid1", 00:15:48.496 "superblock": true, 00:15:48.496 "num_base_bdevs": 2, 00:15:48.496 "num_base_bdevs_discovered": 2, 00:15:48.496 "num_base_bdevs_operational": 2, 00:15:48.496 "base_bdevs_list": [ 00:15:48.496 { 00:15:48.496 "name": "BaseBdev1", 00:15:48.496 "uuid": "63b884ae-dc70-433e-876b-36fd67b35c53", 00:15:48.496 "is_configured": true, 00:15:48.496 "data_offset": 2048, 00:15:48.496 "data_size": 63488 00:15:48.496 }, 00:15:48.497 { 00:15:48.497 "name": "BaseBdev2", 00:15:48.497 "uuid": 
"3c1bba0b-c8ee-4b3f-adbd-5074ba7c1b0c", 00:15:48.497 "is_configured": true, 00:15:48.497 "data_offset": 2048, 00:15:48.497 "data_size": 63488 00:15:48.497 } 00:15:48.497 ] 00:15:48.497 }' 00:15:48.497 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:48.497 11:57:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.062 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:49.062 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:49.062 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:49.062 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:49.062 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:49.062 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:49.062 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:49.062 11:57:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:49.321 [2024-07-21 11:57:48.168904] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.579 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:49.579 "name": "Existed_Raid", 00:15:49.579 "aliases": [ 00:15:49.579 "b3d59bd7-e4df-4615-a065-9b36f7f1defe" 00:15:49.579 ], 00:15:49.579 "product_name": "Raid Volume", 00:15:49.579 "block_size": 512, 00:15:49.579 "num_blocks": 63488, 00:15:49.579 "uuid": "b3d59bd7-e4df-4615-a065-9b36f7f1defe", 00:15:49.579 "assigned_rate_limits": { 00:15:49.579 "rw_ios_per_sec": 0, 00:15:49.579 "rw_mbytes_per_sec": 0, 00:15:49.579 "r_mbytes_per_sec": 0, 00:15:49.579 "w_mbytes_per_sec": 0 00:15:49.579 }, 00:15:49.579 "claimed": false, 00:15:49.579 "zoned": false, 00:15:49.579 "supported_io_types": { 00:15:49.579 "read": true, 00:15:49.579 "write": true, 00:15:49.579 "unmap": false, 00:15:49.579 "write_zeroes": true, 00:15:49.579 "flush": false, 00:15:49.579 "reset": true, 00:15:49.579 "compare": false, 00:15:49.579 "compare_and_write": false, 00:15:49.579 "abort": false, 00:15:49.579 "nvme_admin": false, 00:15:49.579 "nvme_io": false 00:15:49.579 }, 00:15:49.579 "memory_domains": [ 00:15:49.579 { 00:15:49.579 "dma_device_id": "system", 00:15:49.579 "dma_device_type": 1 00:15:49.579 }, 00:15:49.579 { 00:15:49.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.579 "dma_device_type": 2 00:15:49.579 }, 00:15:49.579 { 00:15:49.579 "dma_device_id": "system", 00:15:49.579 "dma_device_type": 1 00:15:49.579 }, 00:15:49.579 { 00:15:49.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.579 "dma_device_type": 2 00:15:49.579 } 00:15:49.579 ], 00:15:49.579 "driver_specific": { 00:15:49.579 "raid": { 00:15:49.579 "uuid": "b3d59bd7-e4df-4615-a065-9b36f7f1defe", 00:15:49.579 "strip_size_kb": 0, 00:15:49.579 "state": "online", 00:15:49.579 "raid_level": "raid1", 00:15:49.579 "superblock": true, 00:15:49.579 "num_base_bdevs": 2, 00:15:49.579 "num_base_bdevs_discovered": 2, 00:15:49.579 "num_base_bdevs_operational": 2, 00:15:49.579 "base_bdevs_list": [ 00:15:49.579 { 00:15:49.579 "name": "BaseBdev1", 00:15:49.579 "uuid": 
"63b884ae-dc70-433e-876b-36fd67b35c53", 00:15:49.579 "is_configured": true, 00:15:49.579 "data_offset": 2048, 00:15:49.579 "data_size": 63488 00:15:49.579 }, 00:15:49.579 { 00:15:49.579 "name": "BaseBdev2", 00:15:49.579 "uuid": "3c1bba0b-c8ee-4b3f-adbd-5074ba7c1b0c", 00:15:49.579 "is_configured": true, 00:15:49.579 "data_offset": 2048, 00:15:49.579 "data_size": 63488 00:15:49.579 } 00:15:49.579 ] 00:15:49.579 } 00:15:49.579 } 00:15:49.579 }' 00:15:49.579 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:49.579 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:49.579 BaseBdev2' 00:15:49.579 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:49.579 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:49.579 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:49.837 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:49.837 "name": "BaseBdev1", 00:15:49.837 "aliases": [ 00:15:49.837 "63b884ae-dc70-433e-876b-36fd67b35c53" 00:15:49.837 ], 00:15:49.837 "product_name": "Malloc disk", 00:15:49.837 "block_size": 512, 00:15:49.837 "num_blocks": 65536, 00:15:49.837 "uuid": "63b884ae-dc70-433e-876b-36fd67b35c53", 00:15:49.837 "assigned_rate_limits": { 00:15:49.837 "rw_ios_per_sec": 0, 00:15:49.837 "rw_mbytes_per_sec": 0, 00:15:49.837 "r_mbytes_per_sec": 0, 00:15:49.837 "w_mbytes_per_sec": 0 00:15:49.837 }, 00:15:49.837 "claimed": true, 00:15:49.837 "claim_type": "exclusive_write", 00:15:49.837 "zoned": false, 00:15:49.837 "supported_io_types": { 00:15:49.837 "read": true, 00:15:49.837 "write": true, 00:15:49.837 "unmap": true, 00:15:49.837 "write_zeroes": true, 00:15:49.837 "flush": true, 00:15:49.837 "reset": true, 00:15:49.837 "compare": false, 00:15:49.837 "compare_and_write": false, 00:15:49.837 "abort": true, 00:15:49.837 "nvme_admin": false, 00:15:49.837 "nvme_io": false 00:15:49.837 }, 00:15:49.837 "memory_domains": [ 00:15:49.837 { 00:15:49.837 "dma_device_id": "system", 00:15:49.837 "dma_device_type": 1 00:15:49.837 }, 00:15:49.837 { 00:15:49.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:49.837 "dma_device_type": 2 00:15:49.837 } 00:15:49.837 ], 00:15:49.837 "driver_specific": {} 00:15:49.837 }' 00:15:49.837 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.837 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:49.837 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:49.837 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.837 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:49.837 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:49.837 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.094 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.094 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
00:15:50.094 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.094 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.094 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:50.094 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:50.094 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:50.094 11:57:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:50.353 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:50.353 "name": "BaseBdev2", 00:15:50.353 "aliases": [ 00:15:50.353 "3c1bba0b-c8ee-4b3f-adbd-5074ba7c1b0c" 00:15:50.353 ], 00:15:50.353 "product_name": "Malloc disk", 00:15:50.353 "block_size": 512, 00:15:50.353 "num_blocks": 65536, 00:15:50.353 "uuid": "3c1bba0b-c8ee-4b3f-adbd-5074ba7c1b0c", 00:15:50.353 "assigned_rate_limits": { 00:15:50.353 "rw_ios_per_sec": 0, 00:15:50.353 "rw_mbytes_per_sec": 0, 00:15:50.353 "r_mbytes_per_sec": 0, 00:15:50.353 "w_mbytes_per_sec": 0 00:15:50.353 }, 00:15:50.353 "claimed": true, 00:15:50.353 "claim_type": "exclusive_write", 00:15:50.353 "zoned": false, 00:15:50.353 "supported_io_types": { 00:15:50.353 "read": true, 00:15:50.353 "write": true, 00:15:50.353 "unmap": true, 00:15:50.353 "write_zeroes": true, 00:15:50.353 "flush": true, 00:15:50.353 "reset": true, 00:15:50.353 "compare": false, 00:15:50.353 "compare_and_write": false, 00:15:50.353 "abort": true, 00:15:50.353 "nvme_admin": false, 00:15:50.353 "nvme_io": false 00:15:50.353 }, 00:15:50.353 "memory_domains": [ 00:15:50.353 { 00:15:50.353 "dma_device_id": "system", 00:15:50.353 "dma_device_type": 1 00:15:50.353 }, 00:15:50.353 { 00:15:50.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:50.353 "dma_device_type": 2 00:15:50.353 } 00:15:50.353 ], 00:15:50.353 "driver_specific": {} 00:15:50.353 }' 00:15:50.353 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.353 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:50.353 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:50.353 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.611 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:50.611 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:50.611 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.611 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:50.611 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:50.611 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.611 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:50.611 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:50.611 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:50.867 [2024-07-21 11:57:49.729081] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.125 11:57:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.383 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.383 "name": "Existed_Raid", 00:15:51.383 "uuid": "b3d59bd7-e4df-4615-a065-9b36f7f1defe", 00:15:51.383 "strip_size_kb": 0, 00:15:51.383 "state": "online", 00:15:51.383 "raid_level": "raid1", 00:15:51.383 "superblock": true, 00:15:51.383 "num_base_bdevs": 2, 00:15:51.383 "num_base_bdevs_discovered": 1, 00:15:51.383 "num_base_bdevs_operational": 1, 00:15:51.383 "base_bdevs_list": [ 00:15:51.383 { 00:15:51.383 "name": null, 00:15:51.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.383 "is_configured": false, 00:15:51.383 "data_offset": 2048, 00:15:51.383 "data_size": 63488 00:15:51.383 }, 00:15:51.383 { 00:15:51.383 "name": "BaseBdev2", 00:15:51.383 "uuid": "3c1bba0b-c8ee-4b3f-adbd-5074ba7c1b0c", 00:15:51.383 "is_configured": true, 00:15:51.383 "data_offset": 2048, 00:15:51.383 "data_size": 63488 00:15:51.383 } 00:15:51.383 ] 00:15:51.383 }' 00:15:51.383 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.383 11:57:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.954 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:51.954 11:57:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:51.954 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.954 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:52.212 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:52.212 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:52.212 11:57:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:52.470 [2024-07-21 11:57:51.185685] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:52.470 [2024-07-21 11:57:51.186894] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:52.470 [2024-07-21 11:57:51.199379] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:52.470 [2024-07-21 11:57:51.199713] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:52.470 [2024-07-21 11:57:51.199880] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:15:52.470 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:52.470 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:52.470 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.470 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 134538 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 134538 ']' 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 134538 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 134538 00:15:52.727 killing process with pid 134538 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 134538' 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 134538 00:15:52.727 [2024-07-21 11:57:51.501415] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:52.727 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 134538 00:15:52.727 [2024-07-21 11:57:51.501505] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:52.985 11:57:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:52.985 ************************************ 00:15:52.985 END TEST raid_state_function_test_sb 00:15:52.985 ************************************ 00:15:52.985 00:15:52.985 real 0m11.187s 00:15:52.985 user 0m20.487s 00:15:52.985 sys 0m1.483s 00:15:52.985 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:52.985 11:57:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.985 11:57:51 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:52.985 11:57:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:15:52.985 11:57:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:52.985 11:57:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:52.985 ************************************ 00:15:52.985 START TEST raid_superblock_test 00:15:52.985 ************************************ 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=134913 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 134913 /var/tmp/spdk-raid.sock 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 
134913 ']' 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:52.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:52.985 11:57:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.243 [2024-07-21 11:57:51.870902] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:15:53.243 [2024-07-21 11:57:51.871391] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134913 ] 00:15:53.243 [2024-07-21 11:57:52.033443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.501 [2024-07-21 11:57:52.119385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.501 [2024-07-21 11:57:52.175356] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.066 11:57:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:54.327 malloc1 00:15:54.327 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:54.586 [2024-07-21 11:57:53.274351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.586 [2024-07-21 11:57:53.274768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.586 [2024-07-21 11:57:53.274946] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:15:54.586 [2024-07-21 11:57:53.275116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.586 [2024-07-21 
11:57:53.277801] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.586 [2024-07-21 11:57:53.278006] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.586 pt1 00:15:54.586 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:54.586 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:54.586 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:54.586 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:54.586 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:54.586 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:54.586 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:54.586 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:54.586 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:54.843 malloc2 00:15:54.843 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:55.101 [2024-07-21 11:57:53.741199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:55.101 [2024-07-21 11:57:53.741542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:55.101 [2024-07-21 11:57:53.741739] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:15:55.101 [2024-07-21 11:57:53.741911] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:55.101 [2024-07-21 11:57:53.744697] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:55.101 [2024-07-21 11:57:53.744893] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:55.101 pt2 00:15:55.101 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:55.101 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:55.101 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:55.359 [2024-07-21 11:57:53.973372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:55.359 [2024-07-21 11:57:53.975764] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:55.359 [2024-07-21 11:57:53.976176] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:15:55.359 [2024-07-21 11:57:53.976358] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.359 [2024-07-21 11:57:53.976585] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:15:55.359 [2024-07-21 11:57:53.977153] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:15:55.359 [2024-07-21 11:57:53.977323] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:15:55.359 [2024-07-21 11:57:53.977670] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.359 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:55.359 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.360 11:57:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.617 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.617 "name": "raid_bdev1", 00:15:55.617 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:15:55.617 "strip_size_kb": 0, 00:15:55.617 "state": "online", 00:15:55.617 "raid_level": "raid1", 00:15:55.617 "superblock": true, 00:15:55.617 "num_base_bdevs": 2, 00:15:55.617 "num_base_bdevs_discovered": 2, 00:15:55.617 "num_base_bdevs_operational": 2, 00:15:55.617 "base_bdevs_list": [ 00:15:55.617 { 00:15:55.617 "name": "pt1", 00:15:55.617 "uuid": "1fec61cc-2f19-5eed-9943-80cc6b1061c2", 00:15:55.617 "is_configured": true, 00:15:55.617 "data_offset": 2048, 00:15:55.617 "data_size": 63488 00:15:55.617 }, 00:15:55.617 { 00:15:55.617 "name": "pt2", 00:15:55.617 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:15:55.617 "is_configured": true, 00:15:55.617 "data_offset": 2048, 00:15:55.617 "data_size": 63488 00:15:55.617 } 00:15:55.617 ] 00:15:55.617 }' 00:15:55.617 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.617 11:57:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.182 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:56.182 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:56.182 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:56.182 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:56.182 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:56.182 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:56.182 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b raid_bdev1 00:15:56.182 11:57:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:56.440 [2024-07-21 11:57:55.102082] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.440 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:56.440 "name": "raid_bdev1", 00:15:56.440 "aliases": [ 00:15:56.440 "0bc6002d-448d-4c57-817e-a69b1463c76f" 00:15:56.440 ], 00:15:56.440 "product_name": "Raid Volume", 00:15:56.440 "block_size": 512, 00:15:56.440 "num_blocks": 63488, 00:15:56.440 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:15:56.440 "assigned_rate_limits": { 00:15:56.440 "rw_ios_per_sec": 0, 00:15:56.440 "rw_mbytes_per_sec": 0, 00:15:56.440 "r_mbytes_per_sec": 0, 00:15:56.440 "w_mbytes_per_sec": 0 00:15:56.440 }, 00:15:56.440 "claimed": false, 00:15:56.440 "zoned": false, 00:15:56.440 "supported_io_types": { 00:15:56.440 "read": true, 00:15:56.440 "write": true, 00:15:56.440 "unmap": false, 00:15:56.440 "write_zeroes": true, 00:15:56.440 "flush": false, 00:15:56.440 "reset": true, 00:15:56.440 "compare": false, 00:15:56.440 "compare_and_write": false, 00:15:56.440 "abort": false, 00:15:56.440 "nvme_admin": false, 00:15:56.440 "nvme_io": false 00:15:56.440 }, 00:15:56.440 "memory_domains": [ 00:15:56.440 { 00:15:56.440 "dma_device_id": "system", 00:15:56.440 "dma_device_type": 1 00:15:56.440 }, 00:15:56.440 { 00:15:56.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.440 "dma_device_type": 2 00:15:56.440 }, 00:15:56.440 { 00:15:56.440 "dma_device_id": "system", 00:15:56.440 "dma_device_type": 1 00:15:56.440 }, 00:15:56.440 { 00:15:56.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.440 "dma_device_type": 2 00:15:56.440 } 00:15:56.440 ], 00:15:56.440 "driver_specific": { 00:15:56.440 "raid": { 00:15:56.440 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:15:56.440 "strip_size_kb": 0, 00:15:56.440 "state": "online", 00:15:56.440 "raid_level": "raid1", 00:15:56.440 "superblock": true, 00:15:56.440 "num_base_bdevs": 2, 00:15:56.440 "num_base_bdevs_discovered": 2, 00:15:56.440 "num_base_bdevs_operational": 2, 00:15:56.440 "base_bdevs_list": [ 00:15:56.440 { 00:15:56.440 "name": "pt1", 00:15:56.440 "uuid": "1fec61cc-2f19-5eed-9943-80cc6b1061c2", 00:15:56.440 "is_configured": true, 00:15:56.440 "data_offset": 2048, 00:15:56.440 "data_size": 63488 00:15:56.440 }, 00:15:56.440 { 00:15:56.440 "name": "pt2", 00:15:56.440 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:15:56.440 "is_configured": true, 00:15:56.440 "data_offset": 2048, 00:15:56.440 "data_size": 63488 00:15:56.440 } 00:15:56.440 ] 00:15:56.440 } 00:15:56.440 } 00:15:56.440 }' 00:15:56.440 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.440 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:56.440 pt2' 00:15:56.440 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.440 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:56.440 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.704 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.704 "name": "pt1", 00:15:56.704 "aliases": [ 00:15:56.704 
"1fec61cc-2f19-5eed-9943-80cc6b1061c2" 00:15:56.704 ], 00:15:56.704 "product_name": "passthru", 00:15:56.704 "block_size": 512, 00:15:56.704 "num_blocks": 65536, 00:15:56.704 "uuid": "1fec61cc-2f19-5eed-9943-80cc6b1061c2", 00:15:56.704 "assigned_rate_limits": { 00:15:56.704 "rw_ios_per_sec": 0, 00:15:56.704 "rw_mbytes_per_sec": 0, 00:15:56.704 "r_mbytes_per_sec": 0, 00:15:56.704 "w_mbytes_per_sec": 0 00:15:56.704 }, 00:15:56.704 "claimed": true, 00:15:56.704 "claim_type": "exclusive_write", 00:15:56.704 "zoned": false, 00:15:56.704 "supported_io_types": { 00:15:56.704 "read": true, 00:15:56.704 "write": true, 00:15:56.704 "unmap": true, 00:15:56.704 "write_zeroes": true, 00:15:56.704 "flush": true, 00:15:56.704 "reset": true, 00:15:56.704 "compare": false, 00:15:56.704 "compare_and_write": false, 00:15:56.704 "abort": true, 00:15:56.704 "nvme_admin": false, 00:15:56.704 "nvme_io": false 00:15:56.704 }, 00:15:56.704 "memory_domains": [ 00:15:56.704 { 00:15:56.704 "dma_device_id": "system", 00:15:56.704 "dma_device_type": 1 00:15:56.704 }, 00:15:56.704 { 00:15:56.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.704 "dma_device_type": 2 00:15:56.704 } 00:15:56.704 ], 00:15:56.704 "driver_specific": { 00:15:56.704 "passthru": { 00:15:56.704 "name": "pt1", 00:15:56.704 "base_bdev_name": "malloc1" 00:15:56.704 } 00:15:56.704 } 00:15:56.704 }' 00:15:56.704 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.704 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.704 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:56.704 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.704 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.704 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:56.704 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.961 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.961 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:56.961 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.961 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.961 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.961 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.961 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.961 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:57.219 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.219 "name": "pt2", 00:15:57.219 "aliases": [ 00:15:57.219 "e6237b5a-468a-5e68-8e61-64ec7d6fad09" 00:15:57.219 ], 00:15:57.219 "product_name": "passthru", 00:15:57.219 "block_size": 512, 00:15:57.219 "num_blocks": 65536, 00:15:57.219 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:15:57.219 "assigned_rate_limits": { 00:15:57.219 "rw_ios_per_sec": 0, 00:15:57.219 "rw_mbytes_per_sec": 0, 00:15:57.219 "r_mbytes_per_sec": 0, 00:15:57.219 "w_mbytes_per_sec": 0 00:15:57.219 }, 00:15:57.219 "claimed": true, 
00:15:57.219 "claim_type": "exclusive_write", 00:15:57.219 "zoned": false, 00:15:57.219 "supported_io_types": { 00:15:57.219 "read": true, 00:15:57.219 "write": true, 00:15:57.219 "unmap": true, 00:15:57.219 "write_zeroes": true, 00:15:57.219 "flush": true, 00:15:57.219 "reset": true, 00:15:57.219 "compare": false, 00:15:57.219 "compare_and_write": false, 00:15:57.219 "abort": true, 00:15:57.219 "nvme_admin": false, 00:15:57.219 "nvme_io": false 00:15:57.219 }, 00:15:57.219 "memory_domains": [ 00:15:57.219 { 00:15:57.219 "dma_device_id": "system", 00:15:57.219 "dma_device_type": 1 00:15:57.219 }, 00:15:57.219 { 00:15:57.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.219 "dma_device_type": 2 00:15:57.219 } 00:15:57.219 ], 00:15:57.219 "driver_specific": { 00:15:57.219 "passthru": { 00:15:57.219 "name": "pt2", 00:15:57.219 "base_bdev_name": "malloc2" 00:15:57.219 } 00:15:57.219 } 00:15:57.219 }' 00:15:57.219 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.219 11:57:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.219 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.219 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.219 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.477 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.477 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.477 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.477 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.477 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.477 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.477 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.477 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:57.477 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:57.735 [2024-07-21 11:57:56.518308] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.735 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0bc6002d-448d-4c57-817e-a69b1463c76f 00:15:57.735 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0bc6002d-448d-4c57-817e-a69b1463c76f ']' 00:15:57.735 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:57.992 [2024-07-21 11:57:56.798182] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:57.992 [2024-07-21 11:57:56.798394] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.992 [2024-07-21 11:57:56.798670] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.992 [2024-07-21 11:57:56.798870] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.992 [2024-07-21 11:57:56.798984] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:15:57.992 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.992 11:57:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:58.249 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:58.249 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:58.249 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.249 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:58.506 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:58.506 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:58.764 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:58.764 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:59.023 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:59.281 [2024-07-21 11:57:57.974437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:59.281 
[2024-07-21 11:57:57.977022] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:59.281 [2024-07-21 11:57:57.977264] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:59.281 [2024-07-21 11:57:57.977481] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:59.281 [2024-07-21 11:57:57.977666] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:59.281 [2024-07-21 11:57:57.977786] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:15:59.281 request: 00:15:59.281 { 00:15:59.281 "name": "raid_bdev1", 00:15:59.281 "raid_level": "raid1", 00:15:59.281 "base_bdevs": [ 00:15:59.281 "malloc1", 00:15:59.281 "malloc2" 00:15:59.281 ], 00:15:59.281 "superblock": false, 00:15:59.281 "method": "bdev_raid_create", 00:15:59.281 "req_id": 1 00:15:59.281 } 00:15:59.281 Got JSON-RPC error response 00:15:59.281 response: 00:15:59.281 { 00:15:59.281 "code": -17, 00:15:59.281 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:59.281 } 00:15:59.281 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:59.281 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:59.281 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:59.281 11:57:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:59.281 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.281 11:57:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:59.538 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:59.538 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:59.538 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:59.796 [2024-07-21 11:57:58.426671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:59.796 [2024-07-21 11:57:58.427049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:59.796 [2024-07-21 11:57:58.427218] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:59.796 [2024-07-21 11:57:58.427380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:59.796 [2024-07-21 11:57:58.429931] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:59.796 [2024-07-21 11:57:58.430127] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:59.796 [2024-07-21 11:57:58.430360] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:59.796 [2024-07-21 11:57:58.430544] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:59.796 pt1 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.796 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.054 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:00.054 "name": "raid_bdev1", 00:16:00.054 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:16:00.054 "strip_size_kb": 0, 00:16:00.054 "state": "configuring", 00:16:00.054 "raid_level": "raid1", 00:16:00.054 "superblock": true, 00:16:00.054 "num_base_bdevs": 2, 00:16:00.054 "num_base_bdevs_discovered": 1, 00:16:00.054 "num_base_bdevs_operational": 2, 00:16:00.054 "base_bdevs_list": [ 00:16:00.054 { 00:16:00.054 "name": "pt1", 00:16:00.054 "uuid": "1fec61cc-2f19-5eed-9943-80cc6b1061c2", 00:16:00.054 "is_configured": true, 00:16:00.054 "data_offset": 2048, 00:16:00.054 "data_size": 63488 00:16:00.054 }, 00:16:00.054 { 00:16:00.054 "name": null, 00:16:00.054 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:16:00.054 "is_configured": false, 00:16:00.054 "data_offset": 2048, 00:16:00.054 "data_size": 63488 00:16:00.054 } 00:16:00.054 ] 00:16:00.054 }' 00:16:00.054 11:57:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:00.054 11:57:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.620 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:16:00.620 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:00.620 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:00.620 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:00.879 [2024-07-21 11:57:59.559189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:00.879 [2024-07-21 11:57:59.559515] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:00.879 [2024-07-21 11:57:59.559704] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:00.879 [2024-07-21 11:57:59.559887] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:00.879 [2024-07-21 11:57:59.560463] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:00.879 [2024-07-21 11:57:59.560639] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:00.879 [2024-07-21 11:57:59.560843] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:00.879 [2024-07-21 11:57:59.560915] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:00.879 [2024-07-21 11:57:59.561188] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:16:00.879 [2024-07-21 11:57:59.561314] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:00.879 [2024-07-21 11:57:59.561432] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:16:00.879 [2024-07-21 11:57:59.561978] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:16:00.879 [2024-07-21 11:57:59.562122] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:16:00.879 [2024-07-21 11:57:59.562344] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.879 pt2 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.879 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.138 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.138 "name": "raid_bdev1", 00:16:01.138 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:16:01.138 "strip_size_kb": 0, 00:16:01.138 "state": "online", 00:16:01.138 "raid_level": "raid1", 00:16:01.138 "superblock": true, 00:16:01.138 "num_base_bdevs": 2, 00:16:01.138 "num_base_bdevs_discovered": 2, 00:16:01.138 "num_base_bdevs_operational": 2, 00:16:01.138 "base_bdevs_list": [ 00:16:01.138 { 00:16:01.138 "name": "pt1", 00:16:01.138 "uuid": "1fec61cc-2f19-5eed-9943-80cc6b1061c2", 00:16:01.138 "is_configured": true, 00:16:01.138 "data_offset": 2048, 00:16:01.138 "data_size": 63488 00:16:01.138 }, 00:16:01.138 { 00:16:01.138 "name": "pt2", 00:16:01.138 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:16:01.138 
"is_configured": true, 00:16:01.138 "data_offset": 2048, 00:16:01.138 "data_size": 63488 00:16:01.138 } 00:16:01.138 ] 00:16:01.138 }' 00:16:01.138 11:57:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.138 11:57:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.705 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:01.705 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:01.705 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:01.705 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:01.705 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:01.705 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:01.705 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:01.705 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:01.973 [2024-07-21 11:58:00.751639] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:01.973 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:01.973 "name": "raid_bdev1", 00:16:01.973 "aliases": [ 00:16:01.973 "0bc6002d-448d-4c57-817e-a69b1463c76f" 00:16:01.973 ], 00:16:01.973 "product_name": "Raid Volume", 00:16:01.973 "block_size": 512, 00:16:01.973 "num_blocks": 63488, 00:16:01.973 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:16:01.973 "assigned_rate_limits": { 00:16:01.973 "rw_ios_per_sec": 0, 00:16:01.973 "rw_mbytes_per_sec": 0, 00:16:01.973 "r_mbytes_per_sec": 0, 00:16:01.973 "w_mbytes_per_sec": 0 00:16:01.973 }, 00:16:01.973 "claimed": false, 00:16:01.973 "zoned": false, 00:16:01.973 "supported_io_types": { 00:16:01.973 "read": true, 00:16:01.973 "write": true, 00:16:01.973 "unmap": false, 00:16:01.973 "write_zeroes": true, 00:16:01.973 "flush": false, 00:16:01.973 "reset": true, 00:16:01.973 "compare": false, 00:16:01.973 "compare_and_write": false, 00:16:01.973 "abort": false, 00:16:01.973 "nvme_admin": false, 00:16:01.973 "nvme_io": false 00:16:01.973 }, 00:16:01.973 "memory_domains": [ 00:16:01.973 { 00:16:01.973 "dma_device_id": "system", 00:16:01.973 "dma_device_type": 1 00:16:01.973 }, 00:16:01.973 { 00:16:01.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.973 "dma_device_type": 2 00:16:01.973 }, 00:16:01.973 { 00:16:01.973 "dma_device_id": "system", 00:16:01.973 "dma_device_type": 1 00:16:01.973 }, 00:16:01.973 { 00:16:01.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.973 "dma_device_type": 2 00:16:01.973 } 00:16:01.973 ], 00:16:01.973 "driver_specific": { 00:16:01.973 "raid": { 00:16:01.973 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:16:01.973 "strip_size_kb": 0, 00:16:01.973 "state": "online", 00:16:01.973 "raid_level": "raid1", 00:16:01.973 "superblock": true, 00:16:01.973 "num_base_bdevs": 2, 00:16:01.973 "num_base_bdevs_discovered": 2, 00:16:01.973 "num_base_bdevs_operational": 2, 00:16:01.973 "base_bdevs_list": [ 00:16:01.973 { 00:16:01.973 "name": "pt1", 00:16:01.973 "uuid": "1fec61cc-2f19-5eed-9943-80cc6b1061c2", 00:16:01.973 "is_configured": true, 00:16:01.973 "data_offset": 2048, 00:16:01.973 "data_size": 63488 00:16:01.973 }, 
00:16:01.973 { 00:16:01.973 "name": "pt2", 00:16:01.973 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:16:01.974 "is_configured": true, 00:16:01.974 "data_offset": 2048, 00:16:01.974 "data_size": 63488 00:16:01.974 } 00:16:01.974 ] 00:16:01.974 } 00:16:01.974 } 00:16:01.974 }' 00:16:01.974 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:01.974 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:01.974 pt2' 00:16:01.974 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:01.974 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:01.974 11:58:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:02.249 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:02.249 "name": "pt1", 00:16:02.249 "aliases": [ 00:16:02.249 "1fec61cc-2f19-5eed-9943-80cc6b1061c2" 00:16:02.249 ], 00:16:02.249 "product_name": "passthru", 00:16:02.249 "block_size": 512, 00:16:02.249 "num_blocks": 65536, 00:16:02.249 "uuid": "1fec61cc-2f19-5eed-9943-80cc6b1061c2", 00:16:02.249 "assigned_rate_limits": { 00:16:02.249 "rw_ios_per_sec": 0, 00:16:02.249 "rw_mbytes_per_sec": 0, 00:16:02.249 "r_mbytes_per_sec": 0, 00:16:02.249 "w_mbytes_per_sec": 0 00:16:02.249 }, 00:16:02.249 "claimed": true, 00:16:02.249 "claim_type": "exclusive_write", 00:16:02.249 "zoned": false, 00:16:02.249 "supported_io_types": { 00:16:02.249 "read": true, 00:16:02.249 "write": true, 00:16:02.249 "unmap": true, 00:16:02.249 "write_zeroes": true, 00:16:02.249 "flush": true, 00:16:02.249 "reset": true, 00:16:02.249 "compare": false, 00:16:02.249 "compare_and_write": false, 00:16:02.249 "abort": true, 00:16:02.249 "nvme_admin": false, 00:16:02.249 "nvme_io": false 00:16:02.249 }, 00:16:02.249 "memory_domains": [ 00:16:02.249 { 00:16:02.249 "dma_device_id": "system", 00:16:02.249 "dma_device_type": 1 00:16:02.249 }, 00:16:02.249 { 00:16:02.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.249 "dma_device_type": 2 00:16:02.249 } 00:16:02.249 ], 00:16:02.249 "driver_specific": { 00:16:02.249 "passthru": { 00:16:02.249 "name": "pt1", 00:16:02.249 "base_bdev_name": "malloc1" 00:16:02.249 } 00:16:02.249 } 00:16:02.249 }' 00:16:02.249 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.507 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:02.507 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:02.507 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.507 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:02.507 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:02.507 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.507 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:02.765 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:02.765 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:02.765 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:16:02.765 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:02.765 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:02.765 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:02.765 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:03.022 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:03.022 "name": "pt2", 00:16:03.022 "aliases": [ 00:16:03.022 "e6237b5a-468a-5e68-8e61-64ec7d6fad09" 00:16:03.022 ], 00:16:03.022 "product_name": "passthru", 00:16:03.022 "block_size": 512, 00:16:03.022 "num_blocks": 65536, 00:16:03.022 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:16:03.022 "assigned_rate_limits": { 00:16:03.022 "rw_ios_per_sec": 0, 00:16:03.022 "rw_mbytes_per_sec": 0, 00:16:03.022 "r_mbytes_per_sec": 0, 00:16:03.022 "w_mbytes_per_sec": 0 00:16:03.022 }, 00:16:03.022 "claimed": true, 00:16:03.022 "claim_type": "exclusive_write", 00:16:03.022 "zoned": false, 00:16:03.022 "supported_io_types": { 00:16:03.022 "read": true, 00:16:03.022 "write": true, 00:16:03.022 "unmap": true, 00:16:03.022 "write_zeroes": true, 00:16:03.022 "flush": true, 00:16:03.022 "reset": true, 00:16:03.022 "compare": false, 00:16:03.022 "compare_and_write": false, 00:16:03.022 "abort": true, 00:16:03.022 "nvme_admin": false, 00:16:03.022 "nvme_io": false 00:16:03.022 }, 00:16:03.022 "memory_domains": [ 00:16:03.022 { 00:16:03.022 "dma_device_id": "system", 00:16:03.022 "dma_device_type": 1 00:16:03.022 }, 00:16:03.022 { 00:16:03.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.022 "dma_device_type": 2 00:16:03.022 } 00:16:03.022 ], 00:16:03.022 "driver_specific": { 00:16:03.022 "passthru": { 00:16:03.022 "name": "pt2", 00:16:03.022 "base_bdev_name": "malloc2" 00:16:03.022 } 00:16:03.022 } 00:16:03.022 }' 00:16:03.022 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.022 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:03.022 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:03.022 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.022 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:03.279 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:03.279 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.279 11:58:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:03.279 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:03.279 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.279 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:03.279 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:03.279 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:03.279 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:03.537 [2024-07-21 11:58:02.320116] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.537 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0bc6002d-448d-4c57-817e-a69b1463c76f '!=' 0bc6002d-448d-4c57-817e-a69b1463c76f ']' 00:16:03.537 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:03.537 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:03.537 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:03.537 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:03.794 [2024-07-21 11:58:02.591966] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.794 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.052 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.052 "name": "raid_bdev1", 00:16:04.052 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:16:04.052 "strip_size_kb": 0, 00:16:04.052 "state": "online", 00:16:04.052 "raid_level": "raid1", 00:16:04.052 "superblock": true, 00:16:04.052 "num_base_bdevs": 2, 00:16:04.052 "num_base_bdevs_discovered": 1, 00:16:04.052 "num_base_bdevs_operational": 1, 00:16:04.052 "base_bdevs_list": [ 00:16:04.052 { 00:16:04.052 "name": null, 00:16:04.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.052 "is_configured": false, 00:16:04.052 "data_offset": 2048, 00:16:04.052 "data_size": 63488 00:16:04.052 }, 00:16:04.052 { 00:16:04.052 "name": "pt2", 00:16:04.052 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:16:04.052 "is_configured": true, 00:16:04.052 "data_offset": 2048, 00:16:04.052 "data_size": 63488 00:16:04.052 } 00:16:04.052 ] 00:16:04.052 }' 00:16:04.052 11:58:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.052 11:58:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.984 11:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete raid_bdev1 00:16:04.984 [2024-07-21 11:58:03.735021] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:04.984 [2024-07-21 11:58:03.735269] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.984 [2024-07-21 11:58:03.735490] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.984 [2024-07-21 11:58:03.735657] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:04.984 [2024-07-21 11:58:03.735768] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:16:04.984 11:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.984 11:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:05.242 11:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:05.242 11:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:05.242 11:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:05.242 11:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:05.242 11:58:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:05.499 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:05.499 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:05.499 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:05.499 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:05.499 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:16:05.499 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:05.756 [2024-07-21 11:58:04.451166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:05.756 [2024-07-21 11:58:04.451585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.756 [2024-07-21 11:58:04.451770] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:05.756 [2024-07-21 11:58:04.451918] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.756 [2024-07-21 11:58:04.454423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.756 [2024-07-21 11:58:04.454646] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:05.756 [2024-07-21 11:58:04.454903] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:05.756 [2024-07-21 11:58:04.455066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.756 [2024-07-21 11:58:04.455322] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:16:05.756 [2024-07-21 11:58:04.455444] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:05.756 [2024-07-21 11:58:04.455554] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:16:05.756 [2024-07-21 11:58:04.456044] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:16:05.756 [2024-07-21 11:58:04.456225] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:16:05.756 [2024-07-21 11:58:04.456492] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.756 pt2 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.756 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.013 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.013 "name": "raid_bdev1", 00:16:06.013 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:16:06.013 "strip_size_kb": 0, 00:16:06.013 "state": "online", 00:16:06.013 "raid_level": "raid1", 00:16:06.013 "superblock": true, 00:16:06.013 "num_base_bdevs": 2, 00:16:06.013 "num_base_bdevs_discovered": 1, 00:16:06.013 "num_base_bdevs_operational": 1, 00:16:06.013 "base_bdevs_list": [ 00:16:06.013 { 00:16:06.013 "name": null, 00:16:06.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.013 "is_configured": false, 00:16:06.013 "data_offset": 2048, 00:16:06.013 "data_size": 63488 00:16:06.014 }, 00:16:06.014 { 00:16:06.014 "name": "pt2", 00:16:06.014 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:16:06.014 "is_configured": true, 00:16:06.014 "data_offset": 2048, 00:16:06.014 "data_size": 63488 00:16:06.014 } 00:16:06.014 ] 00:16:06.014 }' 00:16:06.014 11:58:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.014 11:58:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.579 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:06.837 [2024-07-21 11:58:05.551634] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.837 [2024-07-21 11:58:05.551959] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.837 [2024-07-21 11:58:05.552157] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.837 [2024-07-21 11:58:05.552313] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.837 [2024-07-21 11:58:05.552418] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:16:06.837 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.837 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:07.094 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:07.094 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:07.094 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:16:07.094 11:58:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:07.352 [2024-07-21 11:58:06.047745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:07.352 [2024-07-21 11:58:06.048060] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:07.352 [2024-07-21 11:58:06.048244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:07.352 [2024-07-21 11:58:06.048379] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:07.352 [2024-07-21 11:58:06.051055] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:07.352 [2024-07-21 11:58:06.051235] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:07.352 [2024-07-21 11:58:06.051490] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:07.352 [2024-07-21 11:58:06.051626] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:07.352 [2024-07-21 11:58:06.051942] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:07.352 [2024-07-21 11:58:06.052100] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:07.352 [2024-07-21 11:58:06.052247] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:16:07.352 [2024-07-21 11:58:06.052416] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:07.352 [2024-07-21 11:58:06.052649] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:16:07.352 [2024-07-21 11:58:06.052754] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:07.352 [2024-07-21 11:58:06.052879] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:07.352 [2024-07-21 11:58:06.053321] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:16:07.352 [2024-07-21 11:58:06.053448] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:16:07.352 [2024-07-21 11:58:06.053649] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.352 pt1 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.352 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.610 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.610 "name": "raid_bdev1", 00:16:07.610 "uuid": "0bc6002d-448d-4c57-817e-a69b1463c76f", 00:16:07.610 "strip_size_kb": 0, 00:16:07.610 "state": "online", 00:16:07.610 "raid_level": "raid1", 00:16:07.610 "superblock": true, 00:16:07.610 "num_base_bdevs": 2, 00:16:07.610 "num_base_bdevs_discovered": 1, 00:16:07.610 "num_base_bdevs_operational": 1, 00:16:07.610 "base_bdevs_list": [ 00:16:07.610 { 00:16:07.610 "name": null, 00:16:07.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.610 "is_configured": false, 00:16:07.610 "data_offset": 2048, 00:16:07.610 "data_size": 63488 00:16:07.610 }, 00:16:07.610 { 00:16:07.610 "name": "pt2", 00:16:07.610 "uuid": "e6237b5a-468a-5e68-8e61-64ec7d6fad09", 00:16:07.610 "is_configured": true, 00:16:07.610 "data_offset": 2048, 00:16:07.610 "data_size": 63488 00:16:07.610 } 00:16:07.610 ] 00:16:07.610 }' 00:16:07.610 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.610 11:58:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.176 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:08.176 11:58:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:08.434 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:08.434 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:08.435 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:08.693 [2024-07-21 11:58:07.420675] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 
0bc6002d-448d-4c57-817e-a69b1463c76f '!=' 0bc6002d-448d-4c57-817e-a69b1463c76f ']' 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 134913 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 134913 ']' 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 134913 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 134913 00:16:08.693 killing process with pid 134913 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 134913' 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 134913 00:16:08.693 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 134913 00:16:08.693 [2024-07-21 11:58:07.466505] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:08.693 [2024-07-21 11:58:07.466627] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:08.693 [2024-07-21 11:58:07.466688] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:08.693 [2024-07-21 11:58:07.466716] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:16:08.693 [2024-07-21 11:58:07.489141] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.952 ************************************ 00:16:08.952 END TEST raid_superblock_test 00:16:08.952 ************************************ 00:16:08.952 11:58:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:08.952 00:16:08.952 real 0m15.921s 00:16:08.952 user 0m29.878s 00:16:08.952 sys 0m1.957s 00:16:08.952 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:08.952 11:58:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.952 11:58:07 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:16:08.952 11:58:07 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:08.952 11:58:07 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:08.952 11:58:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.952 ************************************ 00:16:08.952 START TEST raid_read_error_test 00:16:08.952 ************************************ 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 2 read 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # 
(( i = 1 )) 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.c7iDbWEpHr 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=135446 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 135446 /var/tmp/spdk-raid.sock 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:08.952 11:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 135446 ']' 00:16:08.953 11:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:08.953 11:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:08.953 11:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:08.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:08.953 11:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:08.953 11:58:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.254 [2024-07-21 11:58:07.863168] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
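The read-error test above does not run inside the regular autotest app: it launches a dedicated bdevperf process that owns a private RPC socket, and every rpc.py call that follows is pointed at that socket with -s. A minimal sketch of that launch step, assuming the same paths as in this run and that output is redirected into the mktemp'd log file (the exact redirection is a detail of bdev_raid.sh not shown in the trace):

  # flags copied from the invocation in the trace; -z makes bdevperf wait for the
  # perform_tests RPC that bdevperf.py issues later in this log
  ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 128k -q 1 -z -f -L bdev_raid > /raidtest/tmp.c7iDbWEpHr 2>&1 &
  raid_pid=$!
  # waitforlisten (autotest_common.sh helper, as used in the trace) blocks until the socket accepts RPCs
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock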
00:16:09.254 [2024-07-21 11:58:07.863663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135446 ] 00:16:09.254 [2024-07-21 11:58:08.034226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.512 [2024-07-21 11:58:08.119510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.512 [2024-07-21 11:58:08.173720] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:10.087 11:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:10.087 11:58:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:16:10.087 11:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:10.087 11:58:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:10.345 BaseBdev1_malloc 00:16:10.345 11:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:10.604 true 00:16:10.604 11:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:10.863 [2024-07-21 11:58:09.546493] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:10.863 [2024-07-21 11:58:09.546883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.863 [2024-07-21 11:58:09.547108] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:10.863 [2024-07-21 11:58:09.547312] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.863 [2024-07-21 11:58:09.550465] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.863 [2024-07-21 11:58:09.550690] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:10.863 BaseBdev1 00:16:10.863 11:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:10.863 11:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:11.121 BaseBdev2_malloc 00:16:11.121 11:58:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:11.379 true 00:16:11.379 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:11.638 [2024-07-21 11:58:10.306698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:11.638 [2024-07-21 11:58:10.307145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.638 [2024-07-21 11:58:10.307334] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:16:11.638 [2024-07-21 11:58:10.307496] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.638 [2024-07-21 11:58:10.310072] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.638 [2024-07-21 11:58:10.310265] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:11.638 BaseBdev2 00:16:11.638 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:11.896 [2024-07-21 11:58:10.530877] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.896 [2024-07-21 11:58:10.533583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:11.896 [2024-07-21 11:58:10.534060] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:16:11.896 [2024-07-21 11:58:10.534225] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:11.896 [2024-07-21 11:58:10.534491] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:11.896 [2024-07-21 11:58:10.535197] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:16:11.896 [2024-07-21 11:58:10.535358] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:16:11.896 [2024-07-21 11:58:10.535703] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.896 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.167 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:12.167 "name": "raid_bdev1", 00:16:12.167 "uuid": "f7e30f57-9c37-4935-98e1-362816590ffc", 00:16:12.167 "strip_size_kb": 0, 00:16:12.167 "state": "online", 00:16:12.167 "raid_level": "raid1", 00:16:12.167 "superblock": true, 00:16:12.167 "num_base_bdevs": 2, 00:16:12.167 "num_base_bdevs_discovered": 2, 00:16:12.167 "num_base_bdevs_operational": 2, 00:16:12.167 "base_bdevs_list": [ 00:16:12.167 { 00:16:12.167 "name": "BaseBdev1", 00:16:12.167 "uuid": 
"326e5ca9-9dd6-57e0-b634-25bfdc0689fa", 00:16:12.167 "is_configured": true, 00:16:12.167 "data_offset": 2048, 00:16:12.167 "data_size": 63488 00:16:12.167 }, 00:16:12.167 { 00:16:12.167 "name": "BaseBdev2", 00:16:12.167 "uuid": "f109c733-42ef-5318-9519-7084d98b4a68", 00:16:12.167 "is_configured": true, 00:16:12.167 "data_offset": 2048, 00:16:12.167 "data_size": 63488 00:16:12.167 } 00:16:12.167 ] 00:16:12.167 }' 00:16:12.167 11:58:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:12.167 11:58:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.734 11:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:12.734 11:58:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:12.734 [2024-07-21 11:58:11.484420] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:13.668 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.926 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.184 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.184 "name": "raid_bdev1", 00:16:14.184 "uuid": "f7e30f57-9c37-4935-98e1-362816590ffc", 00:16:14.184 "strip_size_kb": 0, 00:16:14.184 "state": "online", 00:16:14.184 "raid_level": "raid1", 00:16:14.184 "superblock": true, 00:16:14.184 "num_base_bdevs": 2, 00:16:14.184 "num_base_bdevs_discovered": 2, 00:16:14.184 "num_base_bdevs_operational": 2, 00:16:14.184 
"base_bdevs_list": [ 00:16:14.184 { 00:16:14.184 "name": "BaseBdev1", 00:16:14.184 "uuid": "326e5ca9-9dd6-57e0-b634-25bfdc0689fa", 00:16:14.184 "is_configured": true, 00:16:14.184 "data_offset": 2048, 00:16:14.184 "data_size": 63488 00:16:14.184 }, 00:16:14.184 { 00:16:14.184 "name": "BaseBdev2", 00:16:14.184 "uuid": "f109c733-42ef-5318-9519-7084d98b4a68", 00:16:14.184 "is_configured": true, 00:16:14.184 "data_offset": 2048, 00:16:14.184 "data_size": 63488 00:16:14.184 } 00:16:14.184 ] 00:16:14.184 }' 00:16:14.184 11:58:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.184 11:58:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.749 11:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:15.007 [2024-07-21 11:58:13.856933] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:15.007 [2024-07-21 11:58:13.856982] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.007 [2024-07-21 11:58:13.859813] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.007 [2024-07-21 11:58:13.859926] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.007 [2024-07-21 11:58:13.860014] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:15.007 [2024-07-21 11:58:13.860027] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:16:15.007 0 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 135446 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 135446 ']' 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 135446 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135446 00:16:15.264 killing process with pid 135446 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135446' 00:16:15.264 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 135446 00:16:15.265 11:58:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 135446 00:16:15.265 [2024-07-21 11:58:13.910076] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:15.265 [2024-07-21 11:58:13.926002] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.522 11:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.c7iDbWEpHr 00:16:15.522 11:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:15.522 11:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:15.522 11:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- 
# fail_per_s=0.00 00:16:15.522 11:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:15.522 11:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:15.522 11:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:15.523 11:58:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:15.523 00:16:15.523 real 0m6.401s 00:16:15.523 user 0m10.273s 00:16:15.523 sys 0m0.843s 00:16:15.523 11:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:15.523 ************************************ 00:16:15.523 END TEST raid_read_error_test 00:16:15.523 ************************************ 00:16:15.523 11:58:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.523 11:58:14 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:16:15.523 11:58:14 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:15.523 11:58:14 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:15.523 11:58:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.523 ************************************ 00:16:15.523 START TEST raid_write_error_test 00:16:15.523 ************************************ 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 2 write 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8ZGuPiRLK9 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=135633 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 135633 /var/tmp/spdk-raid.sock 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 135633 ']' 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:15.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:15.523 11:58:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.523 [2024-07-21 11:58:14.314129] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:15.523 [2024-07-21 11:58:14.314931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135633 ] 00:16:15.781 [2024-07-21 11:58:14.482172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.781 [2024-07-21 11:58:14.575988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.781 [2024-07-21 11:58:14.631016] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.713 11:58:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:16.713 11:58:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:16:16.713 11:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:16.713 11:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:16.713 BaseBdev1_malloc 00:16:16.713 11:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:16.970 true 00:16:16.970 11:58:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:17.228 [2024-07-21 11:58:16.004269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:17.228 [2024-07-21 11:58:16.004442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.228 
[2024-07-21 11:58:16.004504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:17.228 [2024-07-21 11:58:16.004558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.228 [2024-07-21 11:58:16.007427] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.228 [2024-07-21 11:58:16.007498] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:17.228 BaseBdev1 00:16:17.228 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:17.228 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:17.486 BaseBdev2_malloc 00:16:17.486 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:17.744 true 00:16:17.744 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:18.002 [2024-07-21 11:58:16.732315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:18.002 [2024-07-21 11:58:16.732475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:18.002 [2024-07-21 11:58:16.732543] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:16:18.002 [2024-07-21 11:58:16.732584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:18.002 [2024-07-21 11:58:16.735099] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:18.002 [2024-07-21 11:58:16.735175] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:18.002 BaseBdev2 00:16:18.002 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:16:18.260 [2024-07-21 11:58:16.960427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.260 [2024-07-21 11:58:16.962656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:18.260 [2024-07-21 11:58:16.962998] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008180 00:16:18.260 [2024-07-21 11:58:16.963015] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:18.260 [2024-07-21 11:58:16.963184] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:16:18.260 [2024-07-21 11:58:16.963648] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008180 00:16:18.260 [2024-07-21 11:58:16.963674] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008180 00:16:18.260 [2024-07-21 11:58:16.963867] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:18.260 
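Each "base bdev" in these error tests is really a three-layer stack, so failures can be injected underneath an otherwise ordinary member of the array: a malloc bdev at the bottom, an error bdev wrapped around it (bdev_error_create prefixes the name with EE_), and a passthru bdev on top that the raid consumes. A condensed sketch of the RPC sequence the trace above just walked through, not the verbatim bdev_raid.sh helpers:

  rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"      # same private socket as the bdevperf launch
  for i in 1 2; do
      $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"     # 32 MiB backing store, 512-byte blocks
      $rpc bdev_error_create "BaseBdev${i}_malloc"                # exposes EE_BaseBdev<i>_malloc
      $rpc bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s   # -s writes the superblock

The error layer is what the later bdev_error_inject_error EE_BaseBdev1_malloc write failure call targets, which is why the raid sees the failure as coming from BaseBdev1 and drops that slot from the array.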
11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.260 11:58:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.518 11:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.519 "name": "raid_bdev1", 00:16:18.519 "uuid": "82cfb8b4-57af-4851-9340-93fd24cf09b7", 00:16:18.519 "strip_size_kb": 0, 00:16:18.519 "state": "online", 00:16:18.519 "raid_level": "raid1", 00:16:18.519 "superblock": true, 00:16:18.519 "num_base_bdevs": 2, 00:16:18.519 "num_base_bdevs_discovered": 2, 00:16:18.519 "num_base_bdevs_operational": 2, 00:16:18.519 "base_bdevs_list": [ 00:16:18.519 { 00:16:18.519 "name": "BaseBdev1", 00:16:18.519 "uuid": "6eb7666d-0189-54ca-b77a-08c6560f1994", 00:16:18.519 "is_configured": true, 00:16:18.519 "data_offset": 2048, 00:16:18.519 "data_size": 63488 00:16:18.519 }, 00:16:18.519 { 00:16:18.519 "name": "BaseBdev2", 00:16:18.519 "uuid": "e88617bd-9632-5fee-8744-d75ad7a39678", 00:16:18.519 "is_configured": true, 00:16:18.519 "data_offset": 2048, 00:16:18.519 "data_size": 63488 00:16:18.519 } 00:16:18.519 ] 00:16:18.519 }' 00:16:18.519 11:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.519 11:58:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.082 11:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:19.082 11:58:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:19.082 [2024-07-21 11:58:17.925072] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:16:20.013 11:58:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:20.270 [2024-07-21 11:58:19.071675] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:20.270 [2024-07-21 11:58:19.071798] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.270 [2024-07-21 11:58:19.072076] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # 
[[ raid1 = \r\a\i\d\1 ]] 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.270 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.527 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.527 "name": "raid_bdev1", 00:16:20.527 "uuid": "82cfb8b4-57af-4851-9340-93fd24cf09b7", 00:16:20.527 "strip_size_kb": 0, 00:16:20.527 "state": "online", 00:16:20.527 "raid_level": "raid1", 00:16:20.527 "superblock": true, 00:16:20.527 "num_base_bdevs": 2, 00:16:20.527 "num_base_bdevs_discovered": 1, 00:16:20.527 "num_base_bdevs_operational": 1, 00:16:20.527 "base_bdevs_list": [ 00:16:20.527 { 00:16:20.527 "name": null, 00:16:20.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.527 "is_configured": false, 00:16:20.527 "data_offset": 2048, 00:16:20.527 "data_size": 63488 00:16:20.527 }, 00:16:20.527 { 00:16:20.527 "name": "BaseBdev2", 00:16:20.527 "uuid": "e88617bd-9632-5fee-8744-d75ad7a39678", 00:16:20.527 "is_configured": true, 00:16:20.527 "data_offset": 2048, 00:16:20.527 "data_size": 63488 00:16:20.527 } 00:16:20.527 ] 00:16:20.527 }' 00:16:20.527 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.527 11:58:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.460 11:58:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:21.460 [2024-07-21 11:58:20.195376] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.460 [2024-07-21 11:58:20.195420] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.460 [2024-07-21 11:58:20.198237] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.460 [2024-07-21 11:58:20.198311] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.460 [2024-07-21 11:58:20.198376] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.460 [2024-07-21 11:58:20.198389] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state offline 00:16:21.460 0 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 135633 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 135633 ']' 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 135633 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135633 00:16:21.460 killing process with pid 135633 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135633' 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 135633 00:16:21.460 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 135633 00:16:21.460 [2024-07-21 11:58:20.237719] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:21.460 [2024-07-21 11:58:20.253445] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8ZGuPiRLK9 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:21.718 00:16:21.718 real 0m6.267s 00:16:21.718 user 0m10.096s 00:16:21.718 sys 0m0.799s 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:21.718 11:58:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.718 ************************************ 00:16:21.718 END TEST raid_write_error_test 00:16:21.718 ************************************ 00:16:21.718 11:58:20 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:16:21.718 11:58:20 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:16:21.718 11:58:20 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:16:21.718 11:58:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:21.718 11:58:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:21.718 11:58:20 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.718 ************************************ 00:16:21.718 START TEST raid_state_function_test 00:16:21.718 ************************************ 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 false 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:21.718 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:21.975 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:21.975 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=135811 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 
135811' 00:16:21.976 Process raid pid: 135811 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 135811 /var/tmp/spdk-raid.sock 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 135811 ']' 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:21.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:21.976 11:58:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.976 [2024-07-21 11:58:20.641195] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:16:21.976 [2024-07-21 11:58:20.641454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.976 [2024-07-21 11:58:20.813144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.233 [2024-07-21 11:58:20.907686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.233 [2024-07-21 11:58:20.964130] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.798 11:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:22.798 11:58:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:16:22.798 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:23.056 [2024-07-21 11:58:21.876248] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:23.056 [2024-07-21 11:58:21.876381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:23.056 [2024-07-21 11:58:21.876414] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:23.056 [2024-07-21 11:58:21.876437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:23.056 [2024-07-21 11:58:21.876445] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:23.056 [2024-07-21 11:58:21.876486] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:23.056 11:58:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.056 11:58:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.314 11:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:23.314 "name": "Existed_Raid", 00:16:23.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.314 "strip_size_kb": 64, 00:16:23.314 "state": "configuring", 00:16:23.314 "raid_level": "raid0", 00:16:23.314 "superblock": false, 00:16:23.314 "num_base_bdevs": 3, 00:16:23.314 "num_base_bdevs_discovered": 0, 00:16:23.314 "num_base_bdevs_operational": 3, 00:16:23.314 "base_bdevs_list": [ 00:16:23.314 { 00:16:23.314 "name": "BaseBdev1", 00:16:23.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.314 "is_configured": false, 00:16:23.314 "data_offset": 0, 00:16:23.314 "data_size": 0 00:16:23.314 }, 00:16:23.314 { 00:16:23.314 "name": "BaseBdev2", 00:16:23.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.314 "is_configured": false, 00:16:23.314 "data_offset": 0, 00:16:23.314 "data_size": 0 00:16:23.314 }, 00:16:23.314 { 00:16:23.314 "name": "BaseBdev3", 00:16:23.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.314 "is_configured": false, 00:16:23.314 "data_offset": 0, 00:16:23.314 "data_size": 0 00:16:23.314 } 00:16:23.314 ] 00:16:23.314 }' 00:16:23.314 11:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:23.314 11:58:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.247 11:58:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:24.247 [2024-07-21 11:58:23.060342] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.247 [2024-07-21 11:58:23.060410] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:24.247 11:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:24.505 [2024-07-21 11:58:23.344457] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:24.505 [2024-07-21 11:58:23.344586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:24.505 [2024-07-21 11:58:23.344617] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:24.505 [2024-07-21 11:58:23.344638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:16:24.505 [2024-07-21 11:58:23.344646] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:24.505 [2024-07-21 11:58:23.344671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:24.505 11:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:24.769 [2024-07-21 11:58:23.627748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:24.769 BaseBdev1 00:16:25.027 11:58:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:25.027 11:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:25.027 11:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:25.027 11:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:25.027 11:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:25.027 11:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:25.027 11:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:25.027 11:58:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:25.285 [ 00:16:25.285 { 00:16:25.285 "name": "BaseBdev1", 00:16:25.285 "aliases": [ 00:16:25.285 "e8892337-a1fb-418f-96e5-25ad6e4101c9" 00:16:25.285 ], 00:16:25.285 "product_name": "Malloc disk", 00:16:25.285 "block_size": 512, 00:16:25.285 "num_blocks": 65536, 00:16:25.285 "uuid": "e8892337-a1fb-418f-96e5-25ad6e4101c9", 00:16:25.285 "assigned_rate_limits": { 00:16:25.285 "rw_ios_per_sec": 0, 00:16:25.285 "rw_mbytes_per_sec": 0, 00:16:25.285 "r_mbytes_per_sec": 0, 00:16:25.285 "w_mbytes_per_sec": 0 00:16:25.285 }, 00:16:25.285 "claimed": true, 00:16:25.285 "claim_type": "exclusive_write", 00:16:25.285 "zoned": false, 00:16:25.285 "supported_io_types": { 00:16:25.285 "read": true, 00:16:25.285 "write": true, 00:16:25.285 "unmap": true, 00:16:25.285 "write_zeroes": true, 00:16:25.285 "flush": true, 00:16:25.285 "reset": true, 00:16:25.285 "compare": false, 00:16:25.286 "compare_and_write": false, 00:16:25.286 "abort": true, 00:16:25.286 "nvme_admin": false, 00:16:25.286 "nvme_io": false 00:16:25.286 }, 00:16:25.286 "memory_domains": [ 00:16:25.286 { 00:16:25.286 "dma_device_id": "system", 00:16:25.286 "dma_device_type": 1 00:16:25.286 }, 00:16:25.286 { 00:16:25.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.286 "dma_device_type": 2 00:16:25.286 } 00:16:25.286 ], 00:16:25.286 "driver_specific": {} 00:16:25.286 } 00:16:25.286 ] 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
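raid_state_function_test exercises the opposite ordering from the error tests: running against a bdev_svc app on the same socket, it creates the raid bdev first, while none of its members exist, and then checks that the array stays in the configuring state and counts discovered members as each base bdev appears. Restricted to the RPC calls visible in the trace, the flow looks roughly like:

  rpc="./scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # raid0 with a 64 KB strip size and three members that do not exist yet -> state stays "configuring"
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # first member shows up; num_base_bdevs_discovered goes from 0 to 1, state is still "configuring"
  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'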
00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.286 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:25.543 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:25.543 "name": "Existed_Raid", 00:16:25.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.543 "strip_size_kb": 64, 00:16:25.543 "state": "configuring", 00:16:25.543 "raid_level": "raid0", 00:16:25.543 "superblock": false, 00:16:25.543 "num_base_bdevs": 3, 00:16:25.543 "num_base_bdevs_discovered": 1, 00:16:25.543 "num_base_bdevs_operational": 3, 00:16:25.543 "base_bdevs_list": [ 00:16:25.543 { 00:16:25.543 "name": "BaseBdev1", 00:16:25.543 "uuid": "e8892337-a1fb-418f-96e5-25ad6e4101c9", 00:16:25.543 "is_configured": true, 00:16:25.543 "data_offset": 0, 00:16:25.543 "data_size": 65536 00:16:25.543 }, 00:16:25.543 { 00:16:25.543 "name": "BaseBdev2", 00:16:25.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.543 "is_configured": false, 00:16:25.543 "data_offset": 0, 00:16:25.543 "data_size": 0 00:16:25.543 }, 00:16:25.543 { 00:16:25.543 "name": "BaseBdev3", 00:16:25.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.543 "is_configured": false, 00:16:25.543 "data_offset": 0, 00:16:25.543 "data_size": 0 00:16:25.543 } 00:16:25.543 ] 00:16:25.543 }' 00:16:25.543 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:25.543 11:58:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.108 11:58:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:26.674 [2024-07-21 11:58:25.244183] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:26.674 [2024-07-21 11:58:25.244274] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:26.674 [2024-07-21 11:58:25.464277] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.674 [2024-07-21 11:58:25.466470] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:26.674 [2024-07-21 11:58:25.466553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:16:26.674 [2024-07-21 11:58:25.466612] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:26.674 [2024-07-21 11:58:25.466663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.674 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:26.932 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.932 "name": "Existed_Raid", 00:16:26.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.932 "strip_size_kb": 64, 00:16:26.932 "state": "configuring", 00:16:26.932 "raid_level": "raid0", 00:16:26.932 "superblock": false, 00:16:26.932 "num_base_bdevs": 3, 00:16:26.932 "num_base_bdevs_discovered": 1, 00:16:26.932 "num_base_bdevs_operational": 3, 00:16:26.932 "base_bdevs_list": [ 00:16:26.932 { 00:16:26.932 "name": "BaseBdev1", 00:16:26.932 "uuid": "e8892337-a1fb-418f-96e5-25ad6e4101c9", 00:16:26.932 "is_configured": true, 00:16:26.932 "data_offset": 0, 00:16:26.932 "data_size": 65536 00:16:26.932 }, 00:16:26.932 { 00:16:26.932 "name": "BaseBdev2", 00:16:26.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.932 "is_configured": false, 00:16:26.932 "data_offset": 0, 00:16:26.932 "data_size": 0 00:16:26.932 }, 00:16:26.932 { 00:16:26.932 "name": "BaseBdev3", 00:16:26.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.932 "is_configured": false, 00:16:26.932 "data_offset": 0, 00:16:26.932 "data_size": 0 00:16:26.932 } 00:16:26.932 ] 00:16:26.932 }' 00:16:26.932 11:58:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.932 11:58:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.496 11:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2 00:16:27.754 [2024-07-21 11:58:26.578749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:27.754 BaseBdev2 00:16:27.754 11:58:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:27.754 11:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:27.754 11:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:27.754 11:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:27.754 11:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:27.754 11:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:27.754 11:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:28.012 11:58:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:28.270 [ 00:16:28.270 { 00:16:28.270 "name": "BaseBdev2", 00:16:28.270 "aliases": [ 00:16:28.270 "bfae3cb5-c20e-426b-b496-20da7801ccdf" 00:16:28.270 ], 00:16:28.270 "product_name": "Malloc disk", 00:16:28.270 "block_size": 512, 00:16:28.270 "num_blocks": 65536, 00:16:28.270 "uuid": "bfae3cb5-c20e-426b-b496-20da7801ccdf", 00:16:28.270 "assigned_rate_limits": { 00:16:28.270 "rw_ios_per_sec": 0, 00:16:28.270 "rw_mbytes_per_sec": 0, 00:16:28.270 "r_mbytes_per_sec": 0, 00:16:28.270 "w_mbytes_per_sec": 0 00:16:28.270 }, 00:16:28.270 "claimed": true, 00:16:28.270 "claim_type": "exclusive_write", 00:16:28.270 "zoned": false, 00:16:28.270 "supported_io_types": { 00:16:28.270 "read": true, 00:16:28.270 "write": true, 00:16:28.270 "unmap": true, 00:16:28.270 "write_zeroes": true, 00:16:28.270 "flush": true, 00:16:28.270 "reset": true, 00:16:28.270 "compare": false, 00:16:28.270 "compare_and_write": false, 00:16:28.270 "abort": true, 00:16:28.270 "nvme_admin": false, 00:16:28.270 "nvme_io": false 00:16:28.270 }, 00:16:28.270 "memory_domains": [ 00:16:28.270 { 00:16:28.270 "dma_device_id": "system", 00:16:28.270 "dma_device_type": 1 00:16:28.270 }, 00:16:28.270 { 00:16:28.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.270 "dma_device_type": 2 00:16:28.270 } 00:16:28.270 ], 00:16:28.270 "driver_specific": {} 00:16:28.270 } 00:16:28.270 ] 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:28.270 11:58:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.270 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.528 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.528 "name": "Existed_Raid", 00:16:28.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.528 "strip_size_kb": 64, 00:16:28.528 "state": "configuring", 00:16:28.528 "raid_level": "raid0", 00:16:28.528 "superblock": false, 00:16:28.528 "num_base_bdevs": 3, 00:16:28.528 "num_base_bdevs_discovered": 2, 00:16:28.528 "num_base_bdevs_operational": 3, 00:16:28.528 "base_bdevs_list": [ 00:16:28.528 { 00:16:28.528 "name": "BaseBdev1", 00:16:28.528 "uuid": "e8892337-a1fb-418f-96e5-25ad6e4101c9", 00:16:28.528 "is_configured": true, 00:16:28.528 "data_offset": 0, 00:16:28.528 "data_size": 65536 00:16:28.528 }, 00:16:28.528 { 00:16:28.528 "name": "BaseBdev2", 00:16:28.528 "uuid": "bfae3cb5-c20e-426b-b496-20da7801ccdf", 00:16:28.528 "is_configured": true, 00:16:28.528 "data_offset": 0, 00:16:28.528 "data_size": 65536 00:16:28.528 }, 00:16:28.528 { 00:16:28.528 "name": "BaseBdev3", 00:16:28.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.528 "is_configured": false, 00:16:28.528 "data_offset": 0, 00:16:28.528 "data_size": 0 00:16:28.528 } 00:16:28.528 ] 00:16:28.528 }' 00:16:28.528 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.528 11:58:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.463 11:58:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:29.463 [2024-07-21 11:58:28.211468] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:29.463 [2024-07-21 11:58:28.211534] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:16:29.463 [2024-07-21 11:58:28.211544] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:29.463 [2024-07-21 11:58:28.211705] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:29.463 [2024-07-21 11:58:28.212224] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:16:29.463 [2024-07-21 11:58:28.212263] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:16:29.463 [2024-07-21 11:58:28.212575] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.463 BaseBdev3 00:16:29.463 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:29.463 11:58:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:29.463 11:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:29.463 11:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:29.463 11:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:29.463 11:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:29.463 11:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.721 11:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:29.978 [ 00:16:29.978 { 00:16:29.978 "name": "BaseBdev3", 00:16:29.978 "aliases": [ 00:16:29.978 "048845ff-2257-4e84-a0c6-e1671af4c2ef" 00:16:29.978 ], 00:16:29.978 "product_name": "Malloc disk", 00:16:29.978 "block_size": 512, 00:16:29.978 "num_blocks": 65536, 00:16:29.978 "uuid": "048845ff-2257-4e84-a0c6-e1671af4c2ef", 00:16:29.978 "assigned_rate_limits": { 00:16:29.978 "rw_ios_per_sec": 0, 00:16:29.978 "rw_mbytes_per_sec": 0, 00:16:29.978 "r_mbytes_per_sec": 0, 00:16:29.978 "w_mbytes_per_sec": 0 00:16:29.978 }, 00:16:29.978 "claimed": true, 00:16:29.978 "claim_type": "exclusive_write", 00:16:29.978 "zoned": false, 00:16:29.978 "supported_io_types": { 00:16:29.978 "read": true, 00:16:29.978 "write": true, 00:16:29.978 "unmap": true, 00:16:29.978 "write_zeroes": true, 00:16:29.978 "flush": true, 00:16:29.978 "reset": true, 00:16:29.978 "compare": false, 00:16:29.978 "compare_and_write": false, 00:16:29.978 "abort": true, 00:16:29.978 "nvme_admin": false, 00:16:29.978 "nvme_io": false 00:16:29.978 }, 00:16:29.978 "memory_domains": [ 00:16:29.978 { 00:16:29.978 "dma_device_id": "system", 00:16:29.979 "dma_device_type": 1 00:16:29.979 }, 00:16:29.979 { 00:16:29.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.979 "dma_device_type": 2 00:16:29.979 } 00:16:29.979 ], 00:16:29.979 "driver_specific": {} 00:16:29.979 } 00:16:29.979 ] 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.979 11:58:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.236 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:30.236 "name": "Existed_Raid", 00:16:30.236 "uuid": "f78d29bd-0c37-40c5-9af5-87ff706163e7", 00:16:30.236 "strip_size_kb": 64, 00:16:30.236 "state": "online", 00:16:30.236 "raid_level": "raid0", 00:16:30.236 "superblock": false, 00:16:30.236 "num_base_bdevs": 3, 00:16:30.236 "num_base_bdevs_discovered": 3, 00:16:30.236 "num_base_bdevs_operational": 3, 00:16:30.236 "base_bdevs_list": [ 00:16:30.236 { 00:16:30.236 "name": "BaseBdev1", 00:16:30.236 "uuid": "e8892337-a1fb-418f-96e5-25ad6e4101c9", 00:16:30.236 "is_configured": true, 00:16:30.236 "data_offset": 0, 00:16:30.236 "data_size": 65536 00:16:30.236 }, 00:16:30.236 { 00:16:30.236 "name": "BaseBdev2", 00:16:30.236 "uuid": "bfae3cb5-c20e-426b-b496-20da7801ccdf", 00:16:30.236 "is_configured": true, 00:16:30.236 "data_offset": 0, 00:16:30.236 "data_size": 65536 00:16:30.236 }, 00:16:30.236 { 00:16:30.236 "name": "BaseBdev3", 00:16:30.236 "uuid": "048845ff-2257-4e84-a0c6-e1671af4c2ef", 00:16:30.236 "is_configured": true, 00:16:30.236 "data_offset": 0, 00:16:30.236 "data_size": 65536 00:16:30.236 } 00:16:30.236 ] 00:16:30.236 }' 00:16:30.236 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.236 11:58:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:31.197 [2024-07-21 11:58:29.904379] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:31.197 "name": "Existed_Raid", 00:16:31.197 "aliases": [ 00:16:31.197 "f78d29bd-0c37-40c5-9af5-87ff706163e7" 00:16:31.197 ], 00:16:31.197 "product_name": "Raid Volume", 00:16:31.197 "block_size": 512, 00:16:31.197 "num_blocks": 196608, 00:16:31.197 "uuid": "f78d29bd-0c37-40c5-9af5-87ff706163e7", 00:16:31.197 "assigned_rate_limits": { 00:16:31.197 "rw_ios_per_sec": 0, 00:16:31.197 "rw_mbytes_per_sec": 0, 00:16:31.197 "r_mbytes_per_sec": 0, 00:16:31.197 "w_mbytes_per_sec": 0 
00:16:31.197 }, 00:16:31.197 "claimed": false, 00:16:31.197 "zoned": false, 00:16:31.197 "supported_io_types": { 00:16:31.197 "read": true, 00:16:31.197 "write": true, 00:16:31.197 "unmap": true, 00:16:31.197 "write_zeroes": true, 00:16:31.197 "flush": true, 00:16:31.197 "reset": true, 00:16:31.197 "compare": false, 00:16:31.197 "compare_and_write": false, 00:16:31.197 "abort": false, 00:16:31.197 "nvme_admin": false, 00:16:31.197 "nvme_io": false 00:16:31.197 }, 00:16:31.197 "memory_domains": [ 00:16:31.197 { 00:16:31.197 "dma_device_id": "system", 00:16:31.197 "dma_device_type": 1 00:16:31.197 }, 00:16:31.197 { 00:16:31.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.197 "dma_device_type": 2 00:16:31.197 }, 00:16:31.197 { 00:16:31.197 "dma_device_id": "system", 00:16:31.197 "dma_device_type": 1 00:16:31.197 }, 00:16:31.197 { 00:16:31.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.197 "dma_device_type": 2 00:16:31.197 }, 00:16:31.197 { 00:16:31.197 "dma_device_id": "system", 00:16:31.197 "dma_device_type": 1 00:16:31.197 }, 00:16:31.197 { 00:16:31.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.197 "dma_device_type": 2 00:16:31.197 } 00:16:31.197 ], 00:16:31.197 "driver_specific": { 00:16:31.197 "raid": { 00:16:31.197 "uuid": "f78d29bd-0c37-40c5-9af5-87ff706163e7", 00:16:31.197 "strip_size_kb": 64, 00:16:31.197 "state": "online", 00:16:31.197 "raid_level": "raid0", 00:16:31.197 "superblock": false, 00:16:31.197 "num_base_bdevs": 3, 00:16:31.197 "num_base_bdevs_discovered": 3, 00:16:31.197 "num_base_bdevs_operational": 3, 00:16:31.197 "base_bdevs_list": [ 00:16:31.197 { 00:16:31.197 "name": "BaseBdev1", 00:16:31.197 "uuid": "e8892337-a1fb-418f-96e5-25ad6e4101c9", 00:16:31.197 "is_configured": true, 00:16:31.197 "data_offset": 0, 00:16:31.197 "data_size": 65536 00:16:31.197 }, 00:16:31.197 { 00:16:31.197 "name": "BaseBdev2", 00:16:31.197 "uuid": "bfae3cb5-c20e-426b-b496-20da7801ccdf", 00:16:31.197 "is_configured": true, 00:16:31.197 "data_offset": 0, 00:16:31.197 "data_size": 65536 00:16:31.197 }, 00:16:31.197 { 00:16:31.197 "name": "BaseBdev3", 00:16:31.197 "uuid": "048845ff-2257-4e84-a0c6-e1671af4c2ef", 00:16:31.197 "is_configured": true, 00:16:31.197 "data_offset": 0, 00:16:31.197 "data_size": 65536 00:16:31.197 } 00:16:31.197 ] 00:16:31.197 } 00:16:31.197 } 00:16:31.197 }' 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:31.197 BaseBdev2 00:16:31.197 BaseBdev3' 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:31.197 11:58:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.455 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.455 "name": "BaseBdev1", 00:16:31.455 "aliases": [ 00:16:31.455 "e8892337-a1fb-418f-96e5-25ad6e4101c9" 00:16:31.455 ], 00:16:31.455 "product_name": "Malloc disk", 00:16:31.455 "block_size": 512, 00:16:31.455 "num_blocks": 65536, 00:16:31.455 "uuid": "e8892337-a1fb-418f-96e5-25ad6e4101c9", 00:16:31.455 "assigned_rate_limits": { 00:16:31.455 "rw_ios_per_sec": 0, 
00:16:31.455 "rw_mbytes_per_sec": 0, 00:16:31.455 "r_mbytes_per_sec": 0, 00:16:31.455 "w_mbytes_per_sec": 0 00:16:31.455 }, 00:16:31.455 "claimed": true, 00:16:31.455 "claim_type": "exclusive_write", 00:16:31.455 "zoned": false, 00:16:31.455 "supported_io_types": { 00:16:31.455 "read": true, 00:16:31.455 "write": true, 00:16:31.455 "unmap": true, 00:16:31.455 "write_zeroes": true, 00:16:31.455 "flush": true, 00:16:31.455 "reset": true, 00:16:31.455 "compare": false, 00:16:31.455 "compare_and_write": false, 00:16:31.455 "abort": true, 00:16:31.455 "nvme_admin": false, 00:16:31.455 "nvme_io": false 00:16:31.455 }, 00:16:31.455 "memory_domains": [ 00:16:31.455 { 00:16:31.455 "dma_device_id": "system", 00:16:31.455 "dma_device_type": 1 00:16:31.455 }, 00:16:31.455 { 00:16:31.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.455 "dma_device_type": 2 00:16:31.455 } 00:16:31.455 ], 00:16:31.455 "driver_specific": {} 00:16:31.455 }' 00:16:31.455 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.455 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:31.455 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:31.455 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.712 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:31.712 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:31.712 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.712 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:31.712 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:31.712 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.712 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:31.970 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:31.970 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:31.970 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:31.970 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:31.970 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:31.970 "name": "BaseBdev2", 00:16:31.970 "aliases": [ 00:16:31.970 "bfae3cb5-c20e-426b-b496-20da7801ccdf" 00:16:31.970 ], 00:16:31.970 "product_name": "Malloc disk", 00:16:31.970 "block_size": 512, 00:16:31.970 "num_blocks": 65536, 00:16:31.970 "uuid": "bfae3cb5-c20e-426b-b496-20da7801ccdf", 00:16:31.970 "assigned_rate_limits": { 00:16:31.970 "rw_ios_per_sec": 0, 00:16:31.970 "rw_mbytes_per_sec": 0, 00:16:31.970 "r_mbytes_per_sec": 0, 00:16:31.970 "w_mbytes_per_sec": 0 00:16:31.970 }, 00:16:31.970 "claimed": true, 00:16:31.970 "claim_type": "exclusive_write", 00:16:31.970 "zoned": false, 00:16:31.970 "supported_io_types": { 00:16:31.970 "read": true, 00:16:31.970 "write": true, 00:16:31.970 "unmap": true, 00:16:31.970 "write_zeroes": true, 00:16:31.970 "flush": true, 00:16:31.970 "reset": true, 00:16:31.970 "compare": false, 00:16:31.970 
"compare_and_write": false, 00:16:31.970 "abort": true, 00:16:31.970 "nvme_admin": false, 00:16:31.970 "nvme_io": false 00:16:31.970 }, 00:16:31.970 "memory_domains": [ 00:16:31.970 { 00:16:31.970 "dma_device_id": "system", 00:16:31.970 "dma_device_type": 1 00:16:31.970 }, 00:16:31.970 { 00:16:31.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.970 "dma_device_type": 2 00:16:31.970 } 00:16:31.970 ], 00:16:31.970 "driver_specific": {} 00:16:31.970 }' 00:16:31.970 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.228 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.228 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.228 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.228 11:58:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:32.228 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:32.228 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.228 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:32.486 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:32.486 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.486 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:32.486 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:32.486 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:32.486 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:32.486 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:32.743 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:32.743 "name": "BaseBdev3", 00:16:32.743 "aliases": [ 00:16:32.743 "048845ff-2257-4e84-a0c6-e1671af4c2ef" 00:16:32.743 ], 00:16:32.743 "product_name": "Malloc disk", 00:16:32.743 "block_size": 512, 00:16:32.743 "num_blocks": 65536, 00:16:32.743 "uuid": "048845ff-2257-4e84-a0c6-e1671af4c2ef", 00:16:32.743 "assigned_rate_limits": { 00:16:32.743 "rw_ios_per_sec": 0, 00:16:32.743 "rw_mbytes_per_sec": 0, 00:16:32.743 "r_mbytes_per_sec": 0, 00:16:32.743 "w_mbytes_per_sec": 0 00:16:32.743 }, 00:16:32.743 "claimed": true, 00:16:32.743 "claim_type": "exclusive_write", 00:16:32.743 "zoned": false, 00:16:32.743 "supported_io_types": { 00:16:32.743 "read": true, 00:16:32.743 "write": true, 00:16:32.743 "unmap": true, 00:16:32.743 "write_zeroes": true, 00:16:32.743 "flush": true, 00:16:32.743 "reset": true, 00:16:32.743 "compare": false, 00:16:32.743 "compare_and_write": false, 00:16:32.743 "abort": true, 00:16:32.743 "nvme_admin": false, 00:16:32.743 "nvme_io": false 00:16:32.743 }, 00:16:32.743 "memory_domains": [ 00:16:32.743 { 00:16:32.743 "dma_device_id": "system", 00:16:32.743 "dma_device_type": 1 00:16:32.743 }, 00:16:32.743 { 00:16:32.743 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.743 "dma_device_type": 2 00:16:32.743 } 00:16:32.743 ], 00:16:32.743 "driver_specific": {} 00:16:32.743 }' 00:16:32.743 11:58:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.743 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:32.743 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:32.743 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.000 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:33.000 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:33.000 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.000 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:33.000 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:33.000 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.000 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:33.257 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:33.257 11:58:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:33.515 [2024-07-21 11:58:32.141002] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.516 [2024-07-21 11:58:32.141068] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:33.516 [2024-07-21 11:58:32.141152] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.516 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.773 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.773 "name": "Existed_Raid", 00:16:33.773 "uuid": "f78d29bd-0c37-40c5-9af5-87ff706163e7", 00:16:33.773 "strip_size_kb": 64, 00:16:33.773 "state": "offline", 00:16:33.773 "raid_level": "raid0", 00:16:33.773 "superblock": false, 00:16:33.773 "num_base_bdevs": 3, 00:16:33.773 "num_base_bdevs_discovered": 2, 00:16:33.773 "num_base_bdevs_operational": 2, 00:16:33.773 "base_bdevs_list": [ 00:16:33.773 { 00:16:33.773 "name": null, 00:16:33.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.773 "is_configured": false, 00:16:33.773 "data_offset": 0, 00:16:33.773 "data_size": 65536 00:16:33.773 }, 00:16:33.773 { 00:16:33.773 "name": "BaseBdev2", 00:16:33.773 "uuid": "bfae3cb5-c20e-426b-b496-20da7801ccdf", 00:16:33.773 "is_configured": true, 00:16:33.773 "data_offset": 0, 00:16:33.773 "data_size": 65536 00:16:33.773 }, 00:16:33.773 { 00:16:33.773 "name": "BaseBdev3", 00:16:33.773 "uuid": "048845ff-2257-4e84-a0c6-e1671af4c2ef", 00:16:33.773 "is_configured": true, 00:16:33.773 "data_offset": 0, 00:16:33.773 "data_size": 65536 00:16:33.773 } 00:16:33.773 ] 00:16:33.773 }' 00:16:33.773 11:58:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.773 11:58:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.337 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:34.337 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.337 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.337 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:34.595 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:34.595 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:34.595 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:34.853 [2024-07-21 11:58:33.603081] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:34.853 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:34.853 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:34.853 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.853 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:35.110 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:35.110 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:35.110 11:58:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:35.368 
[2024-07-21 11:58:34.117840] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:35.368 [2024-07-21 11:58:34.117929] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:16:35.368 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:35.368 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:35.368 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.368 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:35.626 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:35.626 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:35.626 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:16:35.626 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:35.626 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:35.626 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:35.884 BaseBdev2 00:16:35.884 11:58:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:35.884 11:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:35.884 11:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:35.884 11:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:35.884 11:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:35.884 11:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:35.884 11:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.142 11:58:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:36.400 [ 00:16:36.400 { 00:16:36.400 "name": "BaseBdev2", 00:16:36.400 "aliases": [ 00:16:36.400 "6e92e01f-78ad-49d1-b538-57cf6f874f38" 00:16:36.400 ], 00:16:36.400 "product_name": "Malloc disk", 00:16:36.400 "block_size": 512, 00:16:36.401 "num_blocks": 65536, 00:16:36.401 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:36.401 "assigned_rate_limits": { 00:16:36.401 "rw_ios_per_sec": 0, 00:16:36.401 "rw_mbytes_per_sec": 0, 00:16:36.401 "r_mbytes_per_sec": 0, 00:16:36.401 "w_mbytes_per_sec": 0 00:16:36.401 }, 00:16:36.401 "claimed": false, 00:16:36.401 "zoned": false, 00:16:36.401 "supported_io_types": { 00:16:36.401 "read": true, 00:16:36.401 "write": true, 00:16:36.401 "unmap": true, 00:16:36.401 "write_zeroes": true, 00:16:36.401 "flush": true, 00:16:36.401 "reset": true, 00:16:36.401 "compare": false, 00:16:36.401 "compare_and_write": false, 00:16:36.401 "abort": true, 00:16:36.401 "nvme_admin": false, 00:16:36.401 "nvme_io": false 00:16:36.401 }, 00:16:36.401 
"memory_domains": [ 00:16:36.401 { 00:16:36.401 "dma_device_id": "system", 00:16:36.401 "dma_device_type": 1 00:16:36.401 }, 00:16:36.401 { 00:16:36.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.401 "dma_device_type": 2 00:16:36.401 } 00:16:36.401 ], 00:16:36.401 "driver_specific": {} 00:16:36.401 } 00:16:36.401 ] 00:16:36.401 11:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:36.401 11:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:36.401 11:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:36.401 11:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:36.659 BaseBdev3 00:16:36.659 11:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:36.659 11:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:36.659 11:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:36.659 11:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:36.659 11:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:36.659 11:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:36.659 11:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:36.917 11:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:36.917 [ 00:16:36.917 { 00:16:36.917 "name": "BaseBdev3", 00:16:36.917 "aliases": [ 00:16:36.917 "1ab2831b-1e5f-418b-a8a4-5777dedaef5c" 00:16:36.917 ], 00:16:36.917 "product_name": "Malloc disk", 00:16:36.917 "block_size": 512, 00:16:36.917 "num_blocks": 65536, 00:16:36.917 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:36.917 "assigned_rate_limits": { 00:16:36.917 "rw_ios_per_sec": 0, 00:16:36.917 "rw_mbytes_per_sec": 0, 00:16:36.917 "r_mbytes_per_sec": 0, 00:16:36.917 "w_mbytes_per_sec": 0 00:16:36.917 }, 00:16:36.917 "claimed": false, 00:16:36.917 "zoned": false, 00:16:36.917 "supported_io_types": { 00:16:36.917 "read": true, 00:16:36.917 "write": true, 00:16:36.917 "unmap": true, 00:16:36.917 "write_zeroes": true, 00:16:36.917 "flush": true, 00:16:36.917 "reset": true, 00:16:36.917 "compare": false, 00:16:36.917 "compare_and_write": false, 00:16:36.917 "abort": true, 00:16:36.917 "nvme_admin": false, 00:16:36.917 "nvme_io": false 00:16:36.917 }, 00:16:36.917 "memory_domains": [ 00:16:36.917 { 00:16:36.917 "dma_device_id": "system", 00:16:36.917 "dma_device_type": 1 00:16:36.917 }, 00:16:36.917 { 00:16:36.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.917 "dma_device_type": 2 00:16:36.917 } 00:16:36.917 ], 00:16:36.917 "driver_specific": {} 00:16:36.917 } 00:16:36.917 ] 00:16:37.175 11:58:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:37.175 11:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:37.175 11:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 
00:16:37.175 11:58:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:37.175 [2024-07-21 11:58:35.994350] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:37.175 [2024-07-21 11:58:35.994521] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:37.175 [2024-07-21 11:58:35.994646] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.175 [2024-07-21 11:58:35.997083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:37.175 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.741 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.741 "name": "Existed_Raid", 00:16:37.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.741 "strip_size_kb": 64, 00:16:37.741 "state": "configuring", 00:16:37.741 "raid_level": "raid0", 00:16:37.741 "superblock": false, 00:16:37.741 "num_base_bdevs": 3, 00:16:37.741 "num_base_bdevs_discovered": 2, 00:16:37.741 "num_base_bdevs_operational": 3, 00:16:37.741 "base_bdevs_list": [ 00:16:37.741 { 00:16:37.741 "name": "BaseBdev1", 00:16:37.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.741 "is_configured": false, 00:16:37.741 "data_offset": 0, 00:16:37.741 "data_size": 0 00:16:37.741 }, 00:16:37.741 { 00:16:37.741 "name": "BaseBdev2", 00:16:37.741 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:37.741 "is_configured": true, 00:16:37.741 "data_offset": 0, 00:16:37.741 "data_size": 65536 00:16:37.741 }, 00:16:37.741 { 00:16:37.741 "name": "BaseBdev3", 00:16:37.741 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:37.741 "is_configured": true, 00:16:37.741 "data_offset": 0, 00:16:37.741 "data_size": 65536 00:16:37.741 } 00:16:37.741 ] 00:16:37.741 }' 00:16:37.741 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.741 11:58:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.307 11:58:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:38.565 [2024-07-21 11:58:37.179350] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.565 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.566 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.825 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.825 "name": "Existed_Raid", 00:16:38.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.825 "strip_size_kb": 64, 00:16:38.825 "state": "configuring", 00:16:38.825 "raid_level": "raid0", 00:16:38.825 "superblock": false, 00:16:38.825 "num_base_bdevs": 3, 00:16:38.825 "num_base_bdevs_discovered": 1, 00:16:38.825 "num_base_bdevs_operational": 3, 00:16:38.825 "base_bdevs_list": [ 00:16:38.825 { 00:16:38.825 "name": "BaseBdev1", 00:16:38.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.825 "is_configured": false, 00:16:38.825 "data_offset": 0, 00:16:38.825 "data_size": 0 00:16:38.825 }, 00:16:38.825 { 00:16:38.825 "name": null, 00:16:38.825 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:38.825 "is_configured": false, 00:16:38.825 "data_offset": 0, 00:16:38.825 "data_size": 65536 00:16:38.825 }, 00:16:38.825 { 00:16:38.825 "name": "BaseBdev3", 00:16:38.825 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:38.825 "is_configured": true, 00:16:38.825 "data_offset": 0, 00:16:38.825 "data_size": 65536 00:16:38.825 } 00:16:38.825 ] 00:16:38.825 }' 00:16:38.825 11:58:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.825 11:58:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.393 11:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.393 11:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:16:39.651 11:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:39.651 11:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:39.910 [2024-07-21 11:58:38.677619] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.910 BaseBdev1 00:16:39.910 11:58:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:39.910 11:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:39.910 11:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:39.910 11:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:39.910 11:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:39.910 11:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:39.910 11:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.169 11:58:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.427 [ 00:16:40.427 { 00:16:40.427 "name": "BaseBdev1", 00:16:40.427 "aliases": [ 00:16:40.427 "f92b3a09-046a-41e4-8732-8a24be6d31d3" 00:16:40.427 ], 00:16:40.427 "product_name": "Malloc disk", 00:16:40.427 "block_size": 512, 00:16:40.427 "num_blocks": 65536, 00:16:40.427 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:40.427 "assigned_rate_limits": { 00:16:40.427 "rw_ios_per_sec": 0, 00:16:40.427 "rw_mbytes_per_sec": 0, 00:16:40.427 "r_mbytes_per_sec": 0, 00:16:40.427 "w_mbytes_per_sec": 0 00:16:40.427 }, 00:16:40.427 "claimed": true, 00:16:40.427 "claim_type": "exclusive_write", 00:16:40.427 "zoned": false, 00:16:40.427 "supported_io_types": { 00:16:40.427 "read": true, 00:16:40.427 "write": true, 00:16:40.427 "unmap": true, 00:16:40.427 "write_zeroes": true, 00:16:40.427 "flush": true, 00:16:40.427 "reset": true, 00:16:40.427 "compare": false, 00:16:40.427 "compare_and_write": false, 00:16:40.427 "abort": true, 00:16:40.427 "nvme_admin": false, 00:16:40.427 "nvme_io": false 00:16:40.427 }, 00:16:40.427 "memory_domains": [ 00:16:40.427 { 00:16:40.427 "dma_device_id": "system", 00:16:40.427 "dma_device_type": 1 00:16:40.427 }, 00:16:40.427 { 00:16:40.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.427 "dma_device_type": 2 00:16:40.427 } 00:16:40.427 ], 00:16:40.427 "driver_specific": {} 00:16:40.427 } 00:16:40.427 ] 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:40.427 11:58:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.427 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.687 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.687 "name": "Existed_Raid", 00:16:40.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.687 "strip_size_kb": 64, 00:16:40.687 "state": "configuring", 00:16:40.687 "raid_level": "raid0", 00:16:40.687 "superblock": false, 00:16:40.687 "num_base_bdevs": 3, 00:16:40.687 "num_base_bdevs_discovered": 2, 00:16:40.687 "num_base_bdevs_operational": 3, 00:16:40.687 "base_bdevs_list": [ 00:16:40.687 { 00:16:40.687 "name": "BaseBdev1", 00:16:40.687 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:40.687 "is_configured": true, 00:16:40.687 "data_offset": 0, 00:16:40.687 "data_size": 65536 00:16:40.687 }, 00:16:40.687 { 00:16:40.687 "name": null, 00:16:40.687 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:40.687 "is_configured": false, 00:16:40.687 "data_offset": 0, 00:16:40.687 "data_size": 65536 00:16:40.687 }, 00:16:40.687 { 00:16:40.687 "name": "BaseBdev3", 00:16:40.687 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:40.687 "is_configured": true, 00:16:40.687 "data_offset": 0, 00:16:40.687 "data_size": 65536 00:16:40.687 } 00:16:40.687 ] 00:16:40.687 }' 00:16:40.687 11:58:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.687 11:58:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.253 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.253 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:41.511 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:41.511 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:41.769 [2024-07-21 11:58:40.562354] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- 
# local raid_level=raid0 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.769 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.027 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:42.027 "name": "Existed_Raid", 00:16:42.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.027 "strip_size_kb": 64, 00:16:42.027 "state": "configuring", 00:16:42.027 "raid_level": "raid0", 00:16:42.027 "superblock": false, 00:16:42.027 "num_base_bdevs": 3, 00:16:42.027 "num_base_bdevs_discovered": 1, 00:16:42.027 "num_base_bdevs_operational": 3, 00:16:42.027 "base_bdevs_list": [ 00:16:42.027 { 00:16:42.027 "name": "BaseBdev1", 00:16:42.027 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:42.027 "is_configured": true, 00:16:42.027 "data_offset": 0, 00:16:42.027 "data_size": 65536 00:16:42.027 }, 00:16:42.027 { 00:16:42.027 "name": null, 00:16:42.027 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:42.027 "is_configured": false, 00:16:42.027 "data_offset": 0, 00:16:42.027 "data_size": 65536 00:16:42.027 }, 00:16:42.027 { 00:16:42.027 "name": null, 00:16:42.027 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:42.027 "is_configured": false, 00:16:42.027 "data_offset": 0, 00:16:42.027 "data_size": 65536 00:16:42.027 } 00:16:42.027 ] 00:16:42.027 }' 00:16:42.027 11:58:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.027 11:58:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.961 11:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.961 11:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:42.961 11:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:42.961 11:58:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:43.219 [2024-07-21 11:58:42.022707] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.219 
11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.219 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.477 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:43.477 "name": "Existed_Raid", 00:16:43.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.477 "strip_size_kb": 64, 00:16:43.477 "state": "configuring", 00:16:43.477 "raid_level": "raid0", 00:16:43.477 "superblock": false, 00:16:43.477 "num_base_bdevs": 3, 00:16:43.477 "num_base_bdevs_discovered": 2, 00:16:43.477 "num_base_bdevs_operational": 3, 00:16:43.477 "base_bdevs_list": [ 00:16:43.477 { 00:16:43.477 "name": "BaseBdev1", 00:16:43.477 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:43.477 "is_configured": true, 00:16:43.477 "data_offset": 0, 00:16:43.477 "data_size": 65536 00:16:43.477 }, 00:16:43.477 { 00:16:43.477 "name": null, 00:16:43.477 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:43.477 "is_configured": false, 00:16:43.477 "data_offset": 0, 00:16:43.477 "data_size": 65536 00:16:43.477 }, 00:16:43.477 { 00:16:43.477 "name": "BaseBdev3", 00:16:43.477 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:43.477 "is_configured": true, 00:16:43.477 "data_offset": 0, 00:16:43.477 "data_size": 65536 00:16:43.477 } 00:16:43.477 ] 00:16:43.477 }' 00:16:43.477 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:43.477 11:58:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.042 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:44.042 11:58:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:44.608 [2024-07-21 11:58:43.427151] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.608 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.865 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.865 "name": "Existed_Raid", 00:16:44.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.865 "strip_size_kb": 64, 00:16:44.865 "state": "configuring", 00:16:44.865 "raid_level": "raid0", 00:16:44.865 "superblock": false, 00:16:44.865 "num_base_bdevs": 3, 00:16:44.865 "num_base_bdevs_discovered": 1, 00:16:44.865 "num_base_bdevs_operational": 3, 00:16:44.865 "base_bdevs_list": [ 00:16:44.865 { 00:16:44.865 "name": null, 00:16:44.865 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:44.865 "is_configured": false, 00:16:44.865 "data_offset": 0, 00:16:44.865 "data_size": 65536 00:16:44.865 }, 00:16:44.865 { 00:16:44.865 "name": null, 00:16:44.865 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:44.865 "is_configured": false, 00:16:44.865 "data_offset": 0, 00:16:44.865 "data_size": 65536 00:16:44.865 }, 00:16:44.865 { 00:16:44.865 "name": "BaseBdev3", 00:16:44.865 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:44.865 "is_configured": true, 00:16:44.865 "data_offset": 0, 00:16:44.865 "data_size": 65536 00:16:44.865 } 00:16:44.865 ] 00:16:44.865 }' 00:16:44.865 11:58:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.865 11:58:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.822 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.822 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:46.079 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:46.080 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:46.080 [2024-07-21 11:58:44.933961] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:46.336 
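At this point the test degrades the still-configuring array on purpose: BaseBdev1 was deleted with bdev_malloc_delete, slot 0 of base_bdevs_list is expected to read is_configured == false, and a replacement is attached with bdev_raid_add_base_bdev. A minimal sketch of that remove-and-reattach step with the same RPCs (error handling reduced to a bare exit):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Remove one member and confirm its slot is reported as unconfigured.
    $rpc_py bdev_malloc_delete BaseBdev1
    slot=$($rpc_py bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured')
    [[ $slot == "false" ]] || exit 1

    # Attach another malloc bdev into the still-configuring raid.
    $rpc_py bdev_raid_add_base_bdev Existed_Raid BaseBdev2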
11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.336 11:58:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.594 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:46.594 "name": "Existed_Raid", 00:16:46.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.594 "strip_size_kb": 64, 00:16:46.594 "state": "configuring", 00:16:46.594 "raid_level": "raid0", 00:16:46.594 "superblock": false, 00:16:46.594 "num_base_bdevs": 3, 00:16:46.594 "num_base_bdevs_discovered": 2, 00:16:46.594 "num_base_bdevs_operational": 3, 00:16:46.594 "base_bdevs_list": [ 00:16:46.594 { 00:16:46.594 "name": null, 00:16:46.594 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:46.594 "is_configured": false, 00:16:46.594 "data_offset": 0, 00:16:46.594 "data_size": 65536 00:16:46.594 }, 00:16:46.594 { 00:16:46.594 "name": "BaseBdev2", 00:16:46.594 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:46.594 "is_configured": true, 00:16:46.594 "data_offset": 0, 00:16:46.594 "data_size": 65536 00:16:46.594 }, 00:16:46.594 { 00:16:46.594 "name": "BaseBdev3", 00:16:46.594 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:46.594 "is_configured": true, 00:16:46.594 "data_offset": 0, 00:16:46.594 "data_size": 65536 00:16:46.594 } 00:16:46.594 ] 00:16:46.594 }' 00:16:46.594 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:46.594 11:58:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.159 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.159 11:58:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:47.417 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:47.417 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.417 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:47.675 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 
f92b3a09-046a-41e4-8732-8a24be6d31d3 00:16:47.932 [2024-07-21 11:58:46.603786] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:47.932 [2024-07-21 11:58:46.604152] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:16:47.932 [2024-07-21 11:58:46.604206] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:47.932 [2024-07-21 11:58:46.604442] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:16:47.932 [2024-07-21 11:58:46.604961] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:16:47.932 [2024-07-21 11:58:46.605101] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008780 00:16:47.932 [2024-07-21 11:58:46.605484] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.932 NewBaseBdev 00:16:47.933 11:58:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:47.933 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:16:47.933 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:47.933 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:16:47.933 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:47.933 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:47.933 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:48.190 11:58:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:48.448 [ 00:16:48.448 { 00:16:48.448 "name": "NewBaseBdev", 00:16:48.448 "aliases": [ 00:16:48.448 "f92b3a09-046a-41e4-8732-8a24be6d31d3" 00:16:48.448 ], 00:16:48.448 "product_name": "Malloc disk", 00:16:48.448 "block_size": 512, 00:16:48.448 "num_blocks": 65536, 00:16:48.448 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:48.448 "assigned_rate_limits": { 00:16:48.448 "rw_ios_per_sec": 0, 00:16:48.448 "rw_mbytes_per_sec": 0, 00:16:48.448 "r_mbytes_per_sec": 0, 00:16:48.448 "w_mbytes_per_sec": 0 00:16:48.448 }, 00:16:48.448 "claimed": true, 00:16:48.448 "claim_type": "exclusive_write", 00:16:48.448 "zoned": false, 00:16:48.448 "supported_io_types": { 00:16:48.448 "read": true, 00:16:48.448 "write": true, 00:16:48.448 "unmap": true, 00:16:48.448 "write_zeroes": true, 00:16:48.448 "flush": true, 00:16:48.448 "reset": true, 00:16:48.448 "compare": false, 00:16:48.448 "compare_and_write": false, 00:16:48.448 "abort": true, 00:16:48.448 "nvme_admin": false, 00:16:48.448 "nvme_io": false 00:16:48.448 }, 00:16:48.448 "memory_domains": [ 00:16:48.448 { 00:16:48.448 "dma_device_id": "system", 00:16:48.448 "dma_device_type": 1 00:16:48.448 }, 00:16:48.448 { 00:16:48.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.448 "dma_device_type": 2 00:16:48.448 } 00:16:48.448 ], 00:16:48.448 "driver_specific": {} 00:16:48.448 } 00:16:48.448 ] 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.448 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.706 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:48.706 "name": "Existed_Raid", 00:16:48.706 "uuid": "1b983e05-89e9-448f-bf69-2bc9043b25cd", 00:16:48.706 "strip_size_kb": 64, 00:16:48.706 "state": "online", 00:16:48.706 "raid_level": "raid0", 00:16:48.706 "superblock": false, 00:16:48.706 "num_base_bdevs": 3, 00:16:48.706 "num_base_bdevs_discovered": 3, 00:16:48.706 "num_base_bdevs_operational": 3, 00:16:48.706 "base_bdevs_list": [ 00:16:48.706 { 00:16:48.706 "name": "NewBaseBdev", 00:16:48.706 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:48.706 "is_configured": true, 00:16:48.706 "data_offset": 0, 00:16:48.706 "data_size": 65536 00:16:48.706 }, 00:16:48.706 { 00:16:48.706 "name": "BaseBdev2", 00:16:48.706 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:48.706 "is_configured": true, 00:16:48.706 "data_offset": 0, 00:16:48.706 "data_size": 65536 00:16:48.706 }, 00:16:48.706 { 00:16:48.706 "name": "BaseBdev3", 00:16:48.706 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:48.706 "is_configured": true, 00:16:48.706 "data_offset": 0, 00:16:48.706 "data_size": 65536 00:16:48.706 } 00:16:48.706 ] 00:16:48.706 }' 00:16:48.706 11:58:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:48.706 11:58:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.272 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:49.272 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:49.272 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:49.272 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:49.272 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:49.272 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:49.272 11:58:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:49.272 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:49.540 [2024-07-21 11:58:48.297093] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.540 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:49.540 "name": "Existed_Raid", 00:16:49.540 "aliases": [ 00:16:49.540 "1b983e05-89e9-448f-bf69-2bc9043b25cd" 00:16:49.540 ], 00:16:49.540 "product_name": "Raid Volume", 00:16:49.540 "block_size": 512, 00:16:49.540 "num_blocks": 196608, 00:16:49.540 "uuid": "1b983e05-89e9-448f-bf69-2bc9043b25cd", 00:16:49.540 "assigned_rate_limits": { 00:16:49.540 "rw_ios_per_sec": 0, 00:16:49.540 "rw_mbytes_per_sec": 0, 00:16:49.540 "r_mbytes_per_sec": 0, 00:16:49.540 "w_mbytes_per_sec": 0 00:16:49.540 }, 00:16:49.540 "claimed": false, 00:16:49.540 "zoned": false, 00:16:49.540 "supported_io_types": { 00:16:49.540 "read": true, 00:16:49.540 "write": true, 00:16:49.540 "unmap": true, 00:16:49.540 "write_zeroes": true, 00:16:49.540 "flush": true, 00:16:49.540 "reset": true, 00:16:49.540 "compare": false, 00:16:49.540 "compare_and_write": false, 00:16:49.540 "abort": false, 00:16:49.540 "nvme_admin": false, 00:16:49.540 "nvme_io": false 00:16:49.540 }, 00:16:49.540 "memory_domains": [ 00:16:49.540 { 00:16:49.540 "dma_device_id": "system", 00:16:49.540 "dma_device_type": 1 00:16:49.540 }, 00:16:49.540 { 00:16:49.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.540 "dma_device_type": 2 00:16:49.540 }, 00:16:49.540 { 00:16:49.540 "dma_device_id": "system", 00:16:49.540 "dma_device_type": 1 00:16:49.540 }, 00:16:49.540 { 00:16:49.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.540 "dma_device_type": 2 00:16:49.540 }, 00:16:49.540 { 00:16:49.540 "dma_device_id": "system", 00:16:49.540 "dma_device_type": 1 00:16:49.540 }, 00:16:49.540 { 00:16:49.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.540 "dma_device_type": 2 00:16:49.540 } 00:16:49.540 ], 00:16:49.540 "driver_specific": { 00:16:49.540 "raid": { 00:16:49.540 "uuid": "1b983e05-89e9-448f-bf69-2bc9043b25cd", 00:16:49.540 "strip_size_kb": 64, 00:16:49.540 "state": "online", 00:16:49.540 "raid_level": "raid0", 00:16:49.540 "superblock": false, 00:16:49.540 "num_base_bdevs": 3, 00:16:49.540 "num_base_bdevs_discovered": 3, 00:16:49.540 "num_base_bdevs_operational": 3, 00:16:49.540 "base_bdevs_list": [ 00:16:49.540 { 00:16:49.540 "name": "NewBaseBdev", 00:16:49.540 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:49.540 "is_configured": true, 00:16:49.540 "data_offset": 0, 00:16:49.540 "data_size": 65536 00:16:49.540 }, 00:16:49.540 { 00:16:49.540 "name": "BaseBdev2", 00:16:49.540 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:49.540 "is_configured": true, 00:16:49.540 "data_offset": 0, 00:16:49.540 "data_size": 65536 00:16:49.540 }, 00:16:49.540 { 00:16:49.540 "name": "BaseBdev3", 00:16:49.540 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:49.540 "is_configured": true, 00:16:49.540 "data_offset": 0, 00:16:49.540 "data_size": 65536 00:16:49.540 } 00:16:49.540 ] 00:16:49.540 } 00:16:49.540 } 00:16:49.540 }' 00:16:49.540 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:49.540 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # 
base_bdev_names='NewBaseBdev 00:16:49.540 BaseBdev2 00:16:49.540 BaseBdev3' 00:16:49.540 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:49.540 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:49.540 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:49.798 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:49.798 "name": "NewBaseBdev", 00:16:49.798 "aliases": [ 00:16:49.798 "f92b3a09-046a-41e4-8732-8a24be6d31d3" 00:16:49.798 ], 00:16:49.798 "product_name": "Malloc disk", 00:16:49.798 "block_size": 512, 00:16:49.798 "num_blocks": 65536, 00:16:49.798 "uuid": "f92b3a09-046a-41e4-8732-8a24be6d31d3", 00:16:49.798 "assigned_rate_limits": { 00:16:49.798 "rw_ios_per_sec": 0, 00:16:49.798 "rw_mbytes_per_sec": 0, 00:16:49.798 "r_mbytes_per_sec": 0, 00:16:49.798 "w_mbytes_per_sec": 0 00:16:49.798 }, 00:16:49.798 "claimed": true, 00:16:49.798 "claim_type": "exclusive_write", 00:16:49.798 "zoned": false, 00:16:49.798 "supported_io_types": { 00:16:49.798 "read": true, 00:16:49.798 "write": true, 00:16:49.798 "unmap": true, 00:16:49.798 "write_zeroes": true, 00:16:49.798 "flush": true, 00:16:49.798 "reset": true, 00:16:49.798 "compare": false, 00:16:49.798 "compare_and_write": false, 00:16:49.798 "abort": true, 00:16:49.798 "nvme_admin": false, 00:16:49.798 "nvme_io": false 00:16:49.798 }, 00:16:49.798 "memory_domains": [ 00:16:49.798 { 00:16:49.798 "dma_device_id": "system", 00:16:49.798 "dma_device_type": 1 00:16:49.798 }, 00:16:49.798 { 00:16:49.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.798 "dma_device_type": 2 00:16:49.798 } 00:16:49.798 ], 00:16:49.798 "driver_specific": {} 00:16:49.798 }' 00:16:49.798 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.055 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.055 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:50.055 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.055 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.055 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:50.055 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.055 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.055 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.055 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.312 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.312 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.312 11:58:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.312 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:50.312 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:50.569 11:58:49 
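verify_raid_bdev_properties (bdev_raid.sh@194-208) reads the Raid Volume descriptor once, extracts the configured member names with jq, and then checks that every member reports the same block_size, md_size, md_interleave and dif_type as the volume. Roughly, and reusing the RPC and jq filters visible in the trace:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    raid_info=$($rpc_py bdev_get_bdevs -b Existed_Raid | jq '.[]')
    base_bdev_names=$(echo "$raid_info" | \
        jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')

    # Every configured member should expose the same geometry as the raid volume itself.
    for name in $base_bdev_names; do
        base_info=$($rpc_py bdev_get_bdevs -b "$name" | jq '.[]')
        for field in .block_size .md_size .md_interleave .dif_type; do
            [[ $(echo "$base_info" | jq "$field") == $(echo "$raid_info" | jq "$field") ]] || exit 1
        done
    done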
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:50.569 "name": "BaseBdev2", 00:16:50.569 "aliases": [ 00:16:50.569 "6e92e01f-78ad-49d1-b538-57cf6f874f38" 00:16:50.569 ], 00:16:50.569 "product_name": "Malloc disk", 00:16:50.569 "block_size": 512, 00:16:50.569 "num_blocks": 65536, 00:16:50.569 "uuid": "6e92e01f-78ad-49d1-b538-57cf6f874f38", 00:16:50.569 "assigned_rate_limits": { 00:16:50.569 "rw_ios_per_sec": 0, 00:16:50.569 "rw_mbytes_per_sec": 0, 00:16:50.569 "r_mbytes_per_sec": 0, 00:16:50.569 "w_mbytes_per_sec": 0 00:16:50.569 }, 00:16:50.569 "claimed": true, 00:16:50.569 "claim_type": "exclusive_write", 00:16:50.569 "zoned": false, 00:16:50.569 "supported_io_types": { 00:16:50.569 "read": true, 00:16:50.569 "write": true, 00:16:50.569 "unmap": true, 00:16:50.569 "write_zeroes": true, 00:16:50.569 "flush": true, 00:16:50.569 "reset": true, 00:16:50.569 "compare": false, 00:16:50.569 "compare_and_write": false, 00:16:50.569 "abort": true, 00:16:50.569 "nvme_admin": false, 00:16:50.569 "nvme_io": false 00:16:50.569 }, 00:16:50.569 "memory_domains": [ 00:16:50.569 { 00:16:50.569 "dma_device_id": "system", 00:16:50.569 "dma_device_type": 1 00:16:50.569 }, 00:16:50.569 { 00:16:50.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.569 "dma_device_type": 2 00:16:50.569 } 00:16:50.569 ], 00:16:50.569 "driver_specific": {} 00:16:50.569 }' 00:16:50.569 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.569 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:50.569 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:50.569 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.569 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:50.569 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:50.569 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.827 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:50.827 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:50.827 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.827 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:50.827 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:50.827 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:50.827 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:50.827 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:51.085 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:51.085 "name": "BaseBdev3", 00:16:51.085 "aliases": [ 00:16:51.085 "1ab2831b-1e5f-418b-a8a4-5777dedaef5c" 00:16:51.085 ], 00:16:51.085 "product_name": "Malloc disk", 00:16:51.085 "block_size": 512, 00:16:51.085 "num_blocks": 65536, 00:16:51.085 "uuid": "1ab2831b-1e5f-418b-a8a4-5777dedaef5c", 00:16:51.085 "assigned_rate_limits": { 00:16:51.085 "rw_ios_per_sec": 0, 00:16:51.085 "rw_mbytes_per_sec": 0, 
00:16:51.085 "r_mbytes_per_sec": 0, 00:16:51.085 "w_mbytes_per_sec": 0 00:16:51.085 }, 00:16:51.085 "claimed": true, 00:16:51.085 "claim_type": "exclusive_write", 00:16:51.085 "zoned": false, 00:16:51.085 "supported_io_types": { 00:16:51.085 "read": true, 00:16:51.085 "write": true, 00:16:51.085 "unmap": true, 00:16:51.085 "write_zeroes": true, 00:16:51.085 "flush": true, 00:16:51.085 "reset": true, 00:16:51.085 "compare": false, 00:16:51.085 "compare_and_write": false, 00:16:51.085 "abort": true, 00:16:51.085 "nvme_admin": false, 00:16:51.085 "nvme_io": false 00:16:51.085 }, 00:16:51.085 "memory_domains": [ 00:16:51.085 { 00:16:51.085 "dma_device_id": "system", 00:16:51.085 "dma_device_type": 1 00:16:51.085 }, 00:16:51.085 { 00:16:51.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.085 "dma_device_type": 2 00:16:51.085 } 00:16:51.085 ], 00:16:51.085 "driver_specific": {} 00:16:51.085 }' 00:16:51.085 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.085 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:51.085 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:51.085 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.342 11:58:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:51.342 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:51.342 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.342 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:51.342 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:51.342 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.343 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:51.600 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:51.600 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:51.858 [2024-07-21 11:58:50.493323] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.858 [2024-07-21 11:58:50.493672] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.858 [2024-07-21 11:58:50.493888] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.858 [2024-07-21 11:58:50.494109] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.858 [2024-07-21 11:58:50.494267] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name Existed_Raid, state offline 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 135811 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 135811 ']' 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 135811 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 135811 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 135811' 00:16:51.858 killing process with pid 135811 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 135811 00:16:51.858 [2024-07-21 11:58:50.536838] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.858 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 135811 00:16:51.858 [2024-07-21 11:58:50.582689] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:52.115 ************************************ 00:16:52.116 END TEST raid_state_function_test 00:16:52.116 ************************************ 00:16:52.116 11:58:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:16:52.116 00:16:52.116 real 0m30.358s 00:16:52.116 user 0m57.692s 00:16:52.116 sys 0m3.603s 00:16:52.116 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:52.116 11:58:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.116 11:58:50 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:16:52.116 11:58:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:16:52.116 11:58:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:52.116 11:58:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:52.373 ************************************ 00:16:52.373 START TEST raid_state_function_test_sb 00:16:52.373 ************************************ 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 3 true 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:16:52.373 11:58:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.373 11:58:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=136807 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 136807' 00:16:52.373 Process raid pid: 136807 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 136807 /var/tmp/spdk-raid.sock 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 136807 ']' 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:52.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:52.373 11:58:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.373 [2024-07-21 11:58:51.066325] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
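The superblock variant starts from a fresh SPDK application: bdev_svc is launched with bdev_raid debug logging, the test waits for its RPC socket, and every later rpc.py call targets that socket. Sketched with the same paths and arguments as in the trace (waitforlisten is the autotest_common.sh helper; backgrounding with & and $! stands in for the script's actual pid bookkeeping):

    # Launch a bare bdev service with raid debug logging and wait for its RPC socket.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

    # All further calls go through that socket; -s requests an on-disk superblock.
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc_py bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid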
00:16:52.373 [2024-07-21 11:58:51.066615] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.373 [2024-07-21 11:58:51.227118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.630 [2024-07-21 11:58:51.316547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.630 [2024-07-21 11:58:51.372499] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.193 11:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:53.193 11:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:16:53.193 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:53.451 [2024-07-21 11:58:52.279309] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.451 [2024-07-21 11:58:52.279414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.451 [2024-07-21 11:58:52.279429] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:53.451 [2024-07-21 11:58:52.279452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:53.451 [2024-07-21 11:58:52.279463] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:53.451 [2024-07-21 11:58:52.279503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.451 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.014 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.014 "name": "Existed_Raid", 00:16:54.014 "uuid": 
"0758f528-d3c2-4ba6-97cd-a33666e7e639", 00:16:54.015 "strip_size_kb": 64, 00:16:54.015 "state": "configuring", 00:16:54.015 "raid_level": "raid0", 00:16:54.015 "superblock": true, 00:16:54.015 "num_base_bdevs": 3, 00:16:54.015 "num_base_bdevs_discovered": 0, 00:16:54.015 "num_base_bdevs_operational": 3, 00:16:54.015 "base_bdevs_list": [ 00:16:54.015 { 00:16:54.015 "name": "BaseBdev1", 00:16:54.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.015 "is_configured": false, 00:16:54.015 "data_offset": 0, 00:16:54.015 "data_size": 0 00:16:54.015 }, 00:16:54.015 { 00:16:54.015 "name": "BaseBdev2", 00:16:54.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.015 "is_configured": false, 00:16:54.015 "data_offset": 0, 00:16:54.015 "data_size": 0 00:16:54.015 }, 00:16:54.015 { 00:16:54.015 "name": "BaseBdev3", 00:16:54.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.015 "is_configured": false, 00:16:54.015 "data_offset": 0, 00:16:54.015 "data_size": 0 00:16:54.015 } 00:16:54.015 ] 00:16:54.015 }' 00:16:54.015 11:58:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.015 11:58:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.579 11:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:54.579 [2024-07-21 11:58:53.419387] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:54.579 [2024-07-21 11:58:53.419449] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:16:54.579 11:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:54.836 [2024-07-21 11:58:53.699455] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:54.836 [2024-07-21 11:58:53.699571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:54.836 [2024-07-21 11:58:53.699595] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.836 [2024-07-21 11:58:53.699624] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.836 [2024-07-21 11:58:53.699633] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.836 [2024-07-21 11:58:53.699659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.094 11:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:55.353 [2024-07-21 11:58:53.986332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:55.353 BaseBdev1 00:16:55.353 11:58:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:55.353 11:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:16:55.353 11:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:55.353 11:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 
00:16:55.353 11:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:55.353 11:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:55.353 11:58:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:55.611 11:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:55.868 [ 00:16:55.868 { 00:16:55.868 "name": "BaseBdev1", 00:16:55.868 "aliases": [ 00:16:55.868 "c3e9f048-28e5-4fdd-98fe-87b45a98d125" 00:16:55.868 ], 00:16:55.868 "product_name": "Malloc disk", 00:16:55.868 "block_size": 512, 00:16:55.868 "num_blocks": 65536, 00:16:55.868 "uuid": "c3e9f048-28e5-4fdd-98fe-87b45a98d125", 00:16:55.868 "assigned_rate_limits": { 00:16:55.868 "rw_ios_per_sec": 0, 00:16:55.868 "rw_mbytes_per_sec": 0, 00:16:55.868 "r_mbytes_per_sec": 0, 00:16:55.868 "w_mbytes_per_sec": 0 00:16:55.868 }, 00:16:55.868 "claimed": true, 00:16:55.868 "claim_type": "exclusive_write", 00:16:55.868 "zoned": false, 00:16:55.868 "supported_io_types": { 00:16:55.868 "read": true, 00:16:55.868 "write": true, 00:16:55.868 "unmap": true, 00:16:55.868 "write_zeroes": true, 00:16:55.868 "flush": true, 00:16:55.868 "reset": true, 00:16:55.868 "compare": false, 00:16:55.868 "compare_and_write": false, 00:16:55.868 "abort": true, 00:16:55.868 "nvme_admin": false, 00:16:55.868 "nvme_io": false 00:16:55.868 }, 00:16:55.868 "memory_domains": [ 00:16:55.868 { 00:16:55.868 "dma_device_id": "system", 00:16:55.868 "dma_device_type": 1 00:16:55.868 }, 00:16:55.868 { 00:16:55.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:55.868 "dma_device_type": 2 00:16:55.868 } 00:16:55.868 ], 00:16:55.868 "driver_specific": {} 00:16:55.868 } 00:16:55.868 ] 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.868 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:16:56.126 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:56.126 "name": "Existed_Raid", 00:16:56.126 "uuid": "b9ac862f-5630-4bca-8da6-4bf2ab2d1956", 00:16:56.126 "strip_size_kb": 64, 00:16:56.126 "state": "configuring", 00:16:56.126 "raid_level": "raid0", 00:16:56.126 "superblock": true, 00:16:56.126 "num_base_bdevs": 3, 00:16:56.126 "num_base_bdevs_discovered": 1, 00:16:56.126 "num_base_bdevs_operational": 3, 00:16:56.126 "base_bdevs_list": [ 00:16:56.126 { 00:16:56.126 "name": "BaseBdev1", 00:16:56.126 "uuid": "c3e9f048-28e5-4fdd-98fe-87b45a98d125", 00:16:56.126 "is_configured": true, 00:16:56.126 "data_offset": 2048, 00:16:56.126 "data_size": 63488 00:16:56.126 }, 00:16:56.126 { 00:16:56.126 "name": "BaseBdev2", 00:16:56.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.126 "is_configured": false, 00:16:56.126 "data_offset": 0, 00:16:56.126 "data_size": 0 00:16:56.126 }, 00:16:56.126 { 00:16:56.126 "name": "BaseBdev3", 00:16:56.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.126 "is_configured": false, 00:16:56.126 "data_offset": 0, 00:16:56.126 "data_size": 0 00:16:56.126 } 00:16:56.126 ] 00:16:56.126 }' 00:16:56.126 11:58:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:56.126 11:58:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.692 11:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:56.951 [2024-07-21 11:58:55.730858] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.951 [2024-07-21 11:58:55.730964] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:56.951 11:58:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:57.209 [2024-07-21 11:58:56.015158] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:57.210 [2024-07-21 11:58:56.017488] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:57.210 [2024-07-21 11:58:56.017614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:57.210 [2024-07-21 11:58:56.017633] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:57.210 [2024-07-21 11:58:56.017687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
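Because this run passes -s, the raid carries an on-disk superblock, and the descriptors above reflect that: configured members report data_offset 2048 and data_size 63488 instead of the 0 / 65536 seen in the non-superblock test, i.e. 2048 blocks (1 MiB at the 512-byte block size) are reserved at the front of each member. One way to assert that from the shell, reusing the RPC and jq style of the trace:

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # With a superblock, configured members start their data region past the reserved area.
    offset=$($rpc_py bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "Existed_Raid") | .base_bdevs_list[0].data_offset')
    [[ $offset -eq 2048 ]] || exit 1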
00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.210 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.467 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:57.467 "name": "Existed_Raid", 00:16:57.467 "uuid": "e237fca1-6c05-465e-8238-41df96854701", 00:16:57.467 "strip_size_kb": 64, 00:16:57.467 "state": "configuring", 00:16:57.467 "raid_level": "raid0", 00:16:57.467 "superblock": true, 00:16:57.467 "num_base_bdevs": 3, 00:16:57.467 "num_base_bdevs_discovered": 1, 00:16:57.467 "num_base_bdevs_operational": 3, 00:16:57.467 "base_bdevs_list": [ 00:16:57.467 { 00:16:57.467 "name": "BaseBdev1", 00:16:57.467 "uuid": "c3e9f048-28e5-4fdd-98fe-87b45a98d125", 00:16:57.467 "is_configured": true, 00:16:57.467 "data_offset": 2048, 00:16:57.467 "data_size": 63488 00:16:57.467 }, 00:16:57.467 { 00:16:57.467 "name": "BaseBdev2", 00:16:57.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.467 "is_configured": false, 00:16:57.467 "data_offset": 0, 00:16:57.467 "data_size": 0 00:16:57.467 }, 00:16:57.467 { 00:16:57.467 "name": "BaseBdev3", 00:16:57.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.467 "is_configured": false, 00:16:57.467 "data_offset": 0, 00:16:57.467 "data_size": 0 00:16:57.467 } 00:16:57.467 ] 00:16:57.467 }' 00:16:57.467 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:57.467 11:58:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.400 11:58:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:58.400 [2024-07-21 11:58:57.214564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:58.400 BaseBdev2 00:16:58.400 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:58.400 11:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:16:58.400 11:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:58.400 11:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:58.400 11:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:58.400 11:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:58.400 11:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:58.657 11:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:58.915 [ 00:16:58.915 { 00:16:58.915 "name": "BaseBdev2", 00:16:58.915 "aliases": [ 00:16:58.915 "4dd1a83a-e929-433f-9512-d743135b2530" 00:16:58.915 ], 00:16:58.915 "product_name": "Malloc disk", 00:16:58.915 "block_size": 512, 00:16:58.915 "num_blocks": 65536, 00:16:58.915 "uuid": "4dd1a83a-e929-433f-9512-d743135b2530", 00:16:58.915 "assigned_rate_limits": { 00:16:58.915 "rw_ios_per_sec": 0, 00:16:58.915 "rw_mbytes_per_sec": 0, 00:16:58.915 "r_mbytes_per_sec": 0, 00:16:58.915 "w_mbytes_per_sec": 0 00:16:58.915 }, 00:16:58.915 "claimed": true, 00:16:58.915 "claim_type": "exclusive_write", 00:16:58.915 "zoned": false, 00:16:58.915 "supported_io_types": { 00:16:58.915 "read": true, 00:16:58.915 "write": true, 00:16:58.915 "unmap": true, 00:16:58.915 "write_zeroes": true, 00:16:58.915 "flush": true, 00:16:58.915 "reset": true, 00:16:58.915 "compare": false, 00:16:58.915 "compare_and_write": false, 00:16:58.915 "abort": true, 00:16:58.915 "nvme_admin": false, 00:16:58.915 "nvme_io": false 00:16:58.915 }, 00:16:58.915 "memory_domains": [ 00:16:58.915 { 00:16:58.915 "dma_device_id": "system", 00:16:58.915 "dma_device_type": 1 00:16:58.915 }, 00:16:58.915 { 00:16:58.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.915 "dma_device_type": 2 00:16:58.915 } 00:16:58.915 ], 00:16:58.915 "driver_specific": {} 00:16:58.915 } 00:16:58.915 ] 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.915 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.173 11:58:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.173 "name": "Existed_Raid", 00:16:59.173 "uuid": "e237fca1-6c05-465e-8238-41df96854701", 00:16:59.173 "strip_size_kb": 64, 00:16:59.173 "state": "configuring", 00:16:59.173 "raid_level": "raid0", 00:16:59.173 "superblock": true, 00:16:59.173 "num_base_bdevs": 3, 00:16:59.173 "num_base_bdevs_discovered": 2, 00:16:59.173 "num_base_bdevs_operational": 3, 00:16:59.173 "base_bdevs_list": [ 00:16:59.173 { 00:16:59.173 "name": "BaseBdev1", 00:16:59.173 "uuid": "c3e9f048-28e5-4fdd-98fe-87b45a98d125", 00:16:59.173 "is_configured": true, 00:16:59.173 "data_offset": 2048, 00:16:59.173 "data_size": 63488 00:16:59.173 }, 00:16:59.173 { 00:16:59.173 "name": "BaseBdev2", 00:16:59.173 "uuid": "4dd1a83a-e929-433f-9512-d743135b2530", 00:16:59.173 "is_configured": true, 00:16:59.173 "data_offset": 2048, 00:16:59.173 "data_size": 63488 00:16:59.173 }, 00:16:59.173 { 00:16:59.173 "name": "BaseBdev3", 00:16:59.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.173 "is_configured": false, 00:16:59.173 "data_offset": 0, 00:16:59.173 "data_size": 0 00:16:59.173 } 00:16:59.173 ] 00:16:59.173 }' 00:16:59.173 11:58:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.173 11:58:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.739 11:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:59.997 [2024-07-21 11:58:58.835980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.997 [2024-07-21 11:58:58.836256] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:16:59.997 [2024-07-21 11:58:58.836272] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:59.997 [2024-07-21 11:58:58.836440] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:59.997 [2024-07-21 11:58:58.836922] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:16:59.997 [2024-07-21 11:58:58.836949] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:16:59.997 BaseBdev3 00:16:59.997 [2024-07-21 11:58:58.837192] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.997 11:58:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:59.997 11:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:16:59.997 11:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:16:59.997 11:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:16:59.997 11:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:16:59.997 11:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:16:59.997 11:58:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:00.254 11:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:17:00.512 [ 00:17:00.512 { 00:17:00.512 "name": "BaseBdev3", 00:17:00.512 "aliases": [ 00:17:00.512 "b568fcf0-258b-4c89-86ab-ea1fed381d39" 00:17:00.512 ], 00:17:00.512 "product_name": "Malloc disk", 00:17:00.512 "block_size": 512, 00:17:00.512 "num_blocks": 65536, 00:17:00.512 "uuid": "b568fcf0-258b-4c89-86ab-ea1fed381d39", 00:17:00.512 "assigned_rate_limits": { 00:17:00.512 "rw_ios_per_sec": 0, 00:17:00.512 "rw_mbytes_per_sec": 0, 00:17:00.512 "r_mbytes_per_sec": 0, 00:17:00.512 "w_mbytes_per_sec": 0 00:17:00.512 }, 00:17:00.512 "claimed": true, 00:17:00.512 "claim_type": "exclusive_write", 00:17:00.512 "zoned": false, 00:17:00.512 "supported_io_types": { 00:17:00.512 "read": true, 00:17:00.512 "write": true, 00:17:00.512 "unmap": true, 00:17:00.512 "write_zeroes": true, 00:17:00.512 "flush": true, 00:17:00.512 "reset": true, 00:17:00.512 "compare": false, 00:17:00.512 "compare_and_write": false, 00:17:00.512 "abort": true, 00:17:00.512 "nvme_admin": false, 00:17:00.512 "nvme_io": false 00:17:00.512 }, 00:17:00.512 "memory_domains": [ 00:17:00.512 { 00:17:00.512 "dma_device_id": "system", 00:17:00.512 "dma_device_type": 1 00:17:00.512 }, 00:17:00.512 { 00:17:00.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.512 "dma_device_type": 2 00:17:00.512 } 00:17:00.512 ], 00:17:00.512 "driver_specific": {} 00:17:00.512 } 00:17:00.512 ] 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.512 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.769 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:00.769 "name": "Existed_Raid", 00:17:00.769 "uuid": "e237fca1-6c05-465e-8238-41df96854701", 00:17:00.769 "strip_size_kb": 64, 00:17:00.769 "state": "online", 00:17:00.769 "raid_level": "raid0", 00:17:00.769 "superblock": true, 00:17:00.769 
"num_base_bdevs": 3, 00:17:00.769 "num_base_bdevs_discovered": 3, 00:17:00.769 "num_base_bdevs_operational": 3, 00:17:00.769 "base_bdevs_list": [ 00:17:00.769 { 00:17:00.769 "name": "BaseBdev1", 00:17:00.769 "uuid": "c3e9f048-28e5-4fdd-98fe-87b45a98d125", 00:17:00.769 "is_configured": true, 00:17:00.769 "data_offset": 2048, 00:17:00.769 "data_size": 63488 00:17:00.769 }, 00:17:00.769 { 00:17:00.769 "name": "BaseBdev2", 00:17:00.769 "uuid": "4dd1a83a-e929-433f-9512-d743135b2530", 00:17:00.769 "is_configured": true, 00:17:00.769 "data_offset": 2048, 00:17:00.769 "data_size": 63488 00:17:00.769 }, 00:17:00.769 { 00:17:00.769 "name": "BaseBdev3", 00:17:00.769 "uuid": "b568fcf0-258b-4c89-86ab-ea1fed381d39", 00:17:00.769 "is_configured": true, 00:17:00.769 "data_offset": 2048, 00:17:00.769 "data_size": 63488 00:17:00.769 } 00:17:00.769 ] 00:17:00.769 }' 00:17:00.769 11:58:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:00.769 11:58:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.346 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:01.346 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:01.604 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:01.605 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:01.605 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:01.605 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:01.605 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:01.605 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:01.605 [2024-07-21 11:59:00.424731] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:01.605 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:01.605 "name": "Existed_Raid", 00:17:01.605 "aliases": [ 00:17:01.605 "e237fca1-6c05-465e-8238-41df96854701" 00:17:01.605 ], 00:17:01.605 "product_name": "Raid Volume", 00:17:01.605 "block_size": 512, 00:17:01.605 "num_blocks": 190464, 00:17:01.605 "uuid": "e237fca1-6c05-465e-8238-41df96854701", 00:17:01.605 "assigned_rate_limits": { 00:17:01.605 "rw_ios_per_sec": 0, 00:17:01.605 "rw_mbytes_per_sec": 0, 00:17:01.605 "r_mbytes_per_sec": 0, 00:17:01.605 "w_mbytes_per_sec": 0 00:17:01.605 }, 00:17:01.605 "claimed": false, 00:17:01.605 "zoned": false, 00:17:01.605 "supported_io_types": { 00:17:01.605 "read": true, 00:17:01.605 "write": true, 00:17:01.605 "unmap": true, 00:17:01.605 "write_zeroes": true, 00:17:01.605 "flush": true, 00:17:01.605 "reset": true, 00:17:01.605 "compare": false, 00:17:01.605 "compare_and_write": false, 00:17:01.605 "abort": false, 00:17:01.605 "nvme_admin": false, 00:17:01.605 "nvme_io": false 00:17:01.605 }, 00:17:01.605 "memory_domains": [ 00:17:01.605 { 00:17:01.605 "dma_device_id": "system", 00:17:01.605 "dma_device_type": 1 00:17:01.605 }, 00:17:01.605 { 00:17:01.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.605 "dma_device_type": 2 00:17:01.605 }, 00:17:01.605 { 00:17:01.605 "dma_device_id": "system", 
00:17:01.605 "dma_device_type": 1 00:17:01.605 }, 00:17:01.605 { 00:17:01.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.605 "dma_device_type": 2 00:17:01.605 }, 00:17:01.605 { 00:17:01.605 "dma_device_id": "system", 00:17:01.605 "dma_device_type": 1 00:17:01.605 }, 00:17:01.605 { 00:17:01.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.605 "dma_device_type": 2 00:17:01.605 } 00:17:01.605 ], 00:17:01.605 "driver_specific": { 00:17:01.605 "raid": { 00:17:01.605 "uuid": "e237fca1-6c05-465e-8238-41df96854701", 00:17:01.605 "strip_size_kb": 64, 00:17:01.605 "state": "online", 00:17:01.605 "raid_level": "raid0", 00:17:01.605 "superblock": true, 00:17:01.605 "num_base_bdevs": 3, 00:17:01.605 "num_base_bdevs_discovered": 3, 00:17:01.605 "num_base_bdevs_operational": 3, 00:17:01.605 "base_bdevs_list": [ 00:17:01.605 { 00:17:01.605 "name": "BaseBdev1", 00:17:01.605 "uuid": "c3e9f048-28e5-4fdd-98fe-87b45a98d125", 00:17:01.605 "is_configured": true, 00:17:01.605 "data_offset": 2048, 00:17:01.605 "data_size": 63488 00:17:01.605 }, 00:17:01.605 { 00:17:01.605 "name": "BaseBdev2", 00:17:01.605 "uuid": "4dd1a83a-e929-433f-9512-d743135b2530", 00:17:01.605 "is_configured": true, 00:17:01.605 "data_offset": 2048, 00:17:01.605 "data_size": 63488 00:17:01.605 }, 00:17:01.605 { 00:17:01.605 "name": "BaseBdev3", 00:17:01.605 "uuid": "b568fcf0-258b-4c89-86ab-ea1fed381d39", 00:17:01.605 "is_configured": true, 00:17:01.605 "data_offset": 2048, 00:17:01.605 "data_size": 63488 00:17:01.605 } 00:17:01.605 ] 00:17:01.605 } 00:17:01.605 } 00:17:01.605 }' 00:17:01.605 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:01.863 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:01.863 BaseBdev2 00:17:01.863 BaseBdev3' 00:17:01.863 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:01.863 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:01.863 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:02.120 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:02.120 "name": "BaseBdev1", 00:17:02.120 "aliases": [ 00:17:02.120 "c3e9f048-28e5-4fdd-98fe-87b45a98d125" 00:17:02.120 ], 00:17:02.120 "product_name": "Malloc disk", 00:17:02.120 "block_size": 512, 00:17:02.120 "num_blocks": 65536, 00:17:02.120 "uuid": "c3e9f048-28e5-4fdd-98fe-87b45a98d125", 00:17:02.120 "assigned_rate_limits": { 00:17:02.120 "rw_ios_per_sec": 0, 00:17:02.120 "rw_mbytes_per_sec": 0, 00:17:02.120 "r_mbytes_per_sec": 0, 00:17:02.120 "w_mbytes_per_sec": 0 00:17:02.120 }, 00:17:02.120 "claimed": true, 00:17:02.120 "claim_type": "exclusive_write", 00:17:02.120 "zoned": false, 00:17:02.120 "supported_io_types": { 00:17:02.120 "read": true, 00:17:02.120 "write": true, 00:17:02.120 "unmap": true, 00:17:02.120 "write_zeroes": true, 00:17:02.120 "flush": true, 00:17:02.120 "reset": true, 00:17:02.120 "compare": false, 00:17:02.120 "compare_and_write": false, 00:17:02.120 "abort": true, 00:17:02.120 "nvme_admin": false, 00:17:02.120 "nvme_io": false 00:17:02.120 }, 00:17:02.120 "memory_domains": [ 00:17:02.120 { 00:17:02.120 "dma_device_id": "system", 00:17:02.120 "dma_device_type": 1 00:17:02.120 }, 
00:17:02.120 { 00:17:02.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.120 "dma_device_type": 2 00:17:02.120 } 00:17:02.120 ], 00:17:02.120 "driver_specific": {} 00:17:02.120 }' 00:17:02.120 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.120 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.120 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:02.120 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.120 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.120 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:02.120 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.378 11:59:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.378 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.378 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.378 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.378 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:02.378 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:02.378 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:02.378 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:02.636 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:02.636 "name": "BaseBdev2", 00:17:02.636 "aliases": [ 00:17:02.636 "4dd1a83a-e929-433f-9512-d743135b2530" 00:17:02.636 ], 00:17:02.636 "product_name": "Malloc disk", 00:17:02.636 "block_size": 512, 00:17:02.636 "num_blocks": 65536, 00:17:02.636 "uuid": "4dd1a83a-e929-433f-9512-d743135b2530", 00:17:02.636 "assigned_rate_limits": { 00:17:02.636 "rw_ios_per_sec": 0, 00:17:02.636 "rw_mbytes_per_sec": 0, 00:17:02.636 "r_mbytes_per_sec": 0, 00:17:02.636 "w_mbytes_per_sec": 0 00:17:02.636 }, 00:17:02.636 "claimed": true, 00:17:02.636 "claim_type": "exclusive_write", 00:17:02.636 "zoned": false, 00:17:02.636 "supported_io_types": { 00:17:02.636 "read": true, 00:17:02.636 "write": true, 00:17:02.636 "unmap": true, 00:17:02.636 "write_zeroes": true, 00:17:02.636 "flush": true, 00:17:02.636 "reset": true, 00:17:02.636 "compare": false, 00:17:02.636 "compare_and_write": false, 00:17:02.636 "abort": true, 00:17:02.636 "nvme_admin": false, 00:17:02.636 "nvme_io": false 00:17:02.636 }, 00:17:02.636 "memory_domains": [ 00:17:02.636 { 00:17:02.636 "dma_device_id": "system", 00:17:02.636 "dma_device_type": 1 00:17:02.636 }, 00:17:02.636 { 00:17:02.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.636 "dma_device_type": 2 00:17:02.636 } 00:17:02.636 ], 00:17:02.636 "driver_specific": {} 00:17:02.636 }' 00:17:02.636 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.636 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:02.894 11:59:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:02.894 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.894 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:02.894 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:02.894 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.894 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:02.894 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:02.894 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:02.894 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.153 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.153 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:03.153 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:03.153 11:59:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:03.411 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:03.411 "name": "BaseBdev3", 00:17:03.411 "aliases": [ 00:17:03.411 "b568fcf0-258b-4c89-86ab-ea1fed381d39" 00:17:03.411 ], 00:17:03.411 "product_name": "Malloc disk", 00:17:03.411 "block_size": 512, 00:17:03.411 "num_blocks": 65536, 00:17:03.411 "uuid": "b568fcf0-258b-4c89-86ab-ea1fed381d39", 00:17:03.411 "assigned_rate_limits": { 00:17:03.411 "rw_ios_per_sec": 0, 00:17:03.411 "rw_mbytes_per_sec": 0, 00:17:03.411 "r_mbytes_per_sec": 0, 00:17:03.411 "w_mbytes_per_sec": 0 00:17:03.411 }, 00:17:03.411 "claimed": true, 00:17:03.411 "claim_type": "exclusive_write", 00:17:03.411 "zoned": false, 00:17:03.411 "supported_io_types": { 00:17:03.411 "read": true, 00:17:03.411 "write": true, 00:17:03.411 "unmap": true, 00:17:03.411 "write_zeroes": true, 00:17:03.411 "flush": true, 00:17:03.411 "reset": true, 00:17:03.411 "compare": false, 00:17:03.411 "compare_and_write": false, 00:17:03.411 "abort": true, 00:17:03.411 "nvme_admin": false, 00:17:03.411 "nvme_io": false 00:17:03.411 }, 00:17:03.411 "memory_domains": [ 00:17:03.411 { 00:17:03.411 "dma_device_id": "system", 00:17:03.411 "dma_device_type": 1 00:17:03.411 }, 00:17:03.411 { 00:17:03.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.411 "dma_device_type": 2 00:17:03.411 } 00:17:03.411 ], 00:17:03.411 "driver_specific": {} 00:17:03.411 }' 00:17:03.411 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.411 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:03.411 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:03.411 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.411 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:03.411 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:03.411 11:59:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.670 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:03.670 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:03.670 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.670 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:03.670 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:03.670 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:03.928 [2024-07-21 11:59:02.721046] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.928 [2024-07-21 11:59:02.721089] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.928 [2024-07-21 11:59:02.721194] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.928 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:03.928 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:17:03.928 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.929 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.187 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:04.187 "name": "Existed_Raid", 00:17:04.187 "uuid": "e237fca1-6c05-465e-8238-41df96854701", 00:17:04.187 "strip_size_kb": 64, 00:17:04.187 "state": "offline", 00:17:04.187 "raid_level": "raid0", 00:17:04.187 "superblock": true, 00:17:04.187 
"num_base_bdevs": 3, 00:17:04.187 "num_base_bdevs_discovered": 2, 00:17:04.187 "num_base_bdevs_operational": 2, 00:17:04.187 "base_bdevs_list": [ 00:17:04.187 { 00:17:04.187 "name": null, 00:17:04.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.187 "is_configured": false, 00:17:04.187 "data_offset": 2048, 00:17:04.187 "data_size": 63488 00:17:04.187 }, 00:17:04.187 { 00:17:04.187 "name": "BaseBdev2", 00:17:04.187 "uuid": "4dd1a83a-e929-433f-9512-d743135b2530", 00:17:04.187 "is_configured": true, 00:17:04.187 "data_offset": 2048, 00:17:04.187 "data_size": 63488 00:17:04.187 }, 00:17:04.187 { 00:17:04.187 "name": "BaseBdev3", 00:17:04.187 "uuid": "b568fcf0-258b-4c89-86ab-ea1fed381d39", 00:17:04.187 "is_configured": true, 00:17:04.187 "data_offset": 2048, 00:17:04.187 "data_size": 63488 00:17:04.187 } 00:17:04.187 ] 00:17:04.187 }' 00:17:04.187 11:59:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:04.187 11:59:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.120 11:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:05.120 11:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:05.120 11:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.120 11:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:05.120 11:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:05.120 11:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.121 11:59:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:05.378 [2024-07-21 11:59:04.178995] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:05.378 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:05.378 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:05.379 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.379 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:05.636 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:05.636 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.636 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:05.893 [2024-07-21 11:59:04.677623] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:05.893 [2024-07-21 11:59:04.677714] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:17:05.893 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:05.893 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:05.893 
11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:05.893 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.151 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:06.151 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:06.151 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:17:06.151 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:17:06.151 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:06.151 11:59:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:06.407 BaseBdev2 00:17:06.407 11:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:17:06.407 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:06.407 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:06.407 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:06.407 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:06.407 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:06.407 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:06.665 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:06.923 [ 00:17:06.923 { 00:17:06.923 "name": "BaseBdev2", 00:17:06.923 "aliases": [ 00:17:06.923 "88104ecb-05f2-4514-942f-3a61e01b9ae6" 00:17:06.923 ], 00:17:06.923 "product_name": "Malloc disk", 00:17:06.923 "block_size": 512, 00:17:06.923 "num_blocks": 65536, 00:17:06.923 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:06.923 "assigned_rate_limits": { 00:17:06.923 "rw_ios_per_sec": 0, 00:17:06.923 "rw_mbytes_per_sec": 0, 00:17:06.923 "r_mbytes_per_sec": 0, 00:17:06.923 "w_mbytes_per_sec": 0 00:17:06.923 }, 00:17:06.923 "claimed": false, 00:17:06.923 "zoned": false, 00:17:06.923 "supported_io_types": { 00:17:06.923 "read": true, 00:17:06.923 "write": true, 00:17:06.923 "unmap": true, 00:17:06.923 "write_zeroes": true, 00:17:06.923 "flush": true, 00:17:06.923 "reset": true, 00:17:06.923 "compare": false, 00:17:06.923 "compare_and_write": false, 00:17:06.923 "abort": true, 00:17:06.923 "nvme_admin": false, 00:17:06.923 "nvme_io": false 00:17:06.923 }, 00:17:06.923 "memory_domains": [ 00:17:06.923 { 00:17:06.923 "dma_device_id": "system", 00:17:06.923 "dma_device_type": 1 00:17:06.923 }, 00:17:06.923 { 00:17:06.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.923 "dma_device_type": 2 00:17:06.923 } 00:17:06.923 ], 00:17:06.923 "driver_specific": {} 00:17:06.923 } 00:17:06.923 ] 00:17:06.923 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 
00:17:06.923 11:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:06.923 11:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:06.923 11:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:07.181 BaseBdev3 00:17:07.181 11:59:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:17:07.181 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:07.181 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:07.181 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:07.181 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:07.181 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:07.181 11:59:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:07.440 11:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:07.697 [ 00:17:07.697 { 00:17:07.697 "name": "BaseBdev3", 00:17:07.697 "aliases": [ 00:17:07.697 "9153586d-3fa0-4a22-9e50-3a173bb23e35" 00:17:07.697 ], 00:17:07.697 "product_name": "Malloc disk", 00:17:07.697 "block_size": 512, 00:17:07.697 "num_blocks": 65536, 00:17:07.697 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:07.697 "assigned_rate_limits": { 00:17:07.697 "rw_ios_per_sec": 0, 00:17:07.697 "rw_mbytes_per_sec": 0, 00:17:07.697 "r_mbytes_per_sec": 0, 00:17:07.697 "w_mbytes_per_sec": 0 00:17:07.697 }, 00:17:07.697 "claimed": false, 00:17:07.697 "zoned": false, 00:17:07.697 "supported_io_types": { 00:17:07.697 "read": true, 00:17:07.697 "write": true, 00:17:07.697 "unmap": true, 00:17:07.697 "write_zeroes": true, 00:17:07.697 "flush": true, 00:17:07.697 "reset": true, 00:17:07.697 "compare": false, 00:17:07.697 "compare_and_write": false, 00:17:07.697 "abort": true, 00:17:07.697 "nvme_admin": false, 00:17:07.697 "nvme_io": false 00:17:07.697 }, 00:17:07.697 "memory_domains": [ 00:17:07.697 { 00:17:07.697 "dma_device_id": "system", 00:17:07.697 "dma_device_type": 1 00:17:07.697 }, 00:17:07.697 { 00:17:07.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.697 "dma_device_type": 2 00:17:07.697 } 00:17:07.697 ], 00:17:07.697 "driver_specific": {} 00:17:07.697 } 00:17:07.697 ] 00:17:07.697 11:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:07.697 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:17:07.697 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:17:07.697 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:07.955 [2024-07-21 11:59:06.614942] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.955 
[2024-07-21 11:59:06.615068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.955 [2024-07-21 11:59:06.615154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.955 [2024-07-21 11:59:06.617421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.955 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.212 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.212 "name": "Existed_Raid", 00:17:08.212 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:08.212 "strip_size_kb": 64, 00:17:08.212 "state": "configuring", 00:17:08.212 "raid_level": "raid0", 00:17:08.212 "superblock": true, 00:17:08.212 "num_base_bdevs": 3, 00:17:08.212 "num_base_bdevs_discovered": 2, 00:17:08.212 "num_base_bdevs_operational": 3, 00:17:08.212 "base_bdevs_list": [ 00:17:08.212 { 00:17:08.212 "name": "BaseBdev1", 00:17:08.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.212 "is_configured": false, 00:17:08.212 "data_offset": 0, 00:17:08.212 "data_size": 0 00:17:08.212 }, 00:17:08.212 { 00:17:08.212 "name": "BaseBdev2", 00:17:08.212 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:08.212 "is_configured": true, 00:17:08.212 "data_offset": 2048, 00:17:08.212 "data_size": 63488 00:17:08.212 }, 00:17:08.212 { 00:17:08.212 "name": "BaseBdev3", 00:17:08.212 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:08.212 "is_configured": true, 00:17:08.212 "data_offset": 2048, 00:17:08.212 "data_size": 63488 00:17:08.212 } 00:17:08.212 ] 00:17:08.212 }' 00:17:08.212 11:59:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.212 11:59:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.777 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:17:09.034 [2024-07-21 11:59:07.707161] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.034 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.292 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.292 "name": "Existed_Raid", 00:17:09.292 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:09.292 "strip_size_kb": 64, 00:17:09.292 "state": "configuring", 00:17:09.292 "raid_level": "raid0", 00:17:09.292 "superblock": true, 00:17:09.292 "num_base_bdevs": 3, 00:17:09.292 "num_base_bdevs_discovered": 1, 00:17:09.292 "num_base_bdevs_operational": 3, 00:17:09.292 "base_bdevs_list": [ 00:17:09.292 { 00:17:09.292 "name": "BaseBdev1", 00:17:09.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.292 "is_configured": false, 00:17:09.292 "data_offset": 0, 00:17:09.292 "data_size": 0 00:17:09.292 }, 00:17:09.292 { 00:17:09.292 "name": null, 00:17:09.292 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:09.292 "is_configured": false, 00:17:09.292 "data_offset": 2048, 00:17:09.292 "data_size": 63488 00:17:09.292 }, 00:17:09.292 { 00:17:09.292 "name": "BaseBdev3", 00:17:09.292 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:09.292 "is_configured": true, 00:17:09.292 "data_offset": 2048, 00:17:09.292 "data_size": 63488 00:17:09.292 } 00:17:09.292 ] 00:17:09.292 }' 00:17:09.292 11:59:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.292 11:59:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.855 11:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.855 11:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:10.111 11:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:17:10.111 11:59:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:10.368 [2024-07-21 11:59:09.036155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:10.368 BaseBdev1 00:17:10.368 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:17:10.368 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:10.368 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:10.368 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:10.368 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:10.368 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:10.368 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.625 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:10.625 [ 00:17:10.625 { 00:17:10.625 "name": "BaseBdev1", 00:17:10.625 "aliases": [ 00:17:10.625 "37fd4cfa-b234-438b-b109-89e718be7444" 00:17:10.625 ], 00:17:10.625 "product_name": "Malloc disk", 00:17:10.626 "block_size": 512, 00:17:10.626 "num_blocks": 65536, 00:17:10.626 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:10.626 "assigned_rate_limits": { 00:17:10.626 "rw_ios_per_sec": 0, 00:17:10.626 "rw_mbytes_per_sec": 0, 00:17:10.626 "r_mbytes_per_sec": 0, 00:17:10.626 "w_mbytes_per_sec": 0 00:17:10.626 }, 00:17:10.626 "claimed": true, 00:17:10.626 "claim_type": "exclusive_write", 00:17:10.626 "zoned": false, 00:17:10.626 "supported_io_types": { 00:17:10.626 "read": true, 00:17:10.626 "write": true, 00:17:10.626 "unmap": true, 00:17:10.626 "write_zeroes": true, 00:17:10.626 "flush": true, 00:17:10.626 "reset": true, 00:17:10.626 "compare": false, 00:17:10.626 "compare_and_write": false, 00:17:10.626 "abort": true, 00:17:10.626 "nvme_admin": false, 00:17:10.626 "nvme_io": false 00:17:10.626 }, 00:17:10.626 "memory_domains": [ 00:17:10.626 { 00:17:10.626 "dma_device_id": "system", 00:17:10.626 "dma_device_type": 1 00:17:10.626 }, 00:17:10.626 { 00:17:10.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.626 "dma_device_type": 2 00:17:10.626 } 00:17:10.626 ], 00:17:10.626 "driver_specific": {} 00:17:10.626 } 00:17:10.626 ] 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.883 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.141 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.141 "name": "Existed_Raid", 00:17:11.141 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:11.141 "strip_size_kb": 64, 00:17:11.141 "state": "configuring", 00:17:11.141 "raid_level": "raid0", 00:17:11.141 "superblock": true, 00:17:11.141 "num_base_bdevs": 3, 00:17:11.141 "num_base_bdevs_discovered": 2, 00:17:11.141 "num_base_bdevs_operational": 3, 00:17:11.141 "base_bdevs_list": [ 00:17:11.141 { 00:17:11.141 "name": "BaseBdev1", 00:17:11.141 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:11.141 "is_configured": true, 00:17:11.141 "data_offset": 2048, 00:17:11.141 "data_size": 63488 00:17:11.141 }, 00:17:11.141 { 00:17:11.141 "name": null, 00:17:11.141 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:11.141 "is_configured": false, 00:17:11.141 "data_offset": 2048, 00:17:11.141 "data_size": 63488 00:17:11.141 }, 00:17:11.141 { 00:17:11.141 "name": "BaseBdev3", 00:17:11.141 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:11.141 "is_configured": true, 00:17:11.141 "data_offset": 2048, 00:17:11.142 "data_size": 63488 00:17:11.142 } 00:17:11.142 ] 00:17:11.142 }' 00:17:11.142 11:59:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.142 11:59:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.708 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.708 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:11.973 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:17:11.973 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:17:12.245 [2024-07-21 11:59:10.892680] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:12.245 
11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.245 11:59:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.503 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:12.503 "name": "Existed_Raid", 00:17:12.503 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:12.503 "strip_size_kb": 64, 00:17:12.503 "state": "configuring", 00:17:12.503 "raid_level": "raid0", 00:17:12.503 "superblock": true, 00:17:12.503 "num_base_bdevs": 3, 00:17:12.503 "num_base_bdevs_discovered": 1, 00:17:12.503 "num_base_bdevs_operational": 3, 00:17:12.503 "base_bdevs_list": [ 00:17:12.503 { 00:17:12.503 "name": "BaseBdev1", 00:17:12.503 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:12.503 "is_configured": true, 00:17:12.503 "data_offset": 2048, 00:17:12.503 "data_size": 63488 00:17:12.503 }, 00:17:12.503 { 00:17:12.503 "name": null, 00:17:12.503 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:12.503 "is_configured": false, 00:17:12.503 "data_offset": 2048, 00:17:12.503 "data_size": 63488 00:17:12.503 }, 00:17:12.503 { 00:17:12.503 "name": null, 00:17:12.503 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:12.503 "is_configured": false, 00:17:12.503 "data_offset": 2048, 00:17:12.503 "data_size": 63488 00:17:12.503 } 00:17:12.503 ] 00:17:12.503 }' 00:17:12.503 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:12.503 11:59:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.070 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:13.070 11:59:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.327 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:17:13.327 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:13.584 [2024-07-21 11:59:12.345033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid0 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.584 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.585 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.842 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.842 "name": "Existed_Raid", 00:17:13.842 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:13.842 "strip_size_kb": 64, 00:17:13.842 "state": "configuring", 00:17:13.842 "raid_level": "raid0", 00:17:13.842 "superblock": true, 00:17:13.842 "num_base_bdevs": 3, 00:17:13.842 "num_base_bdevs_discovered": 2, 00:17:13.842 "num_base_bdevs_operational": 3, 00:17:13.842 "base_bdevs_list": [ 00:17:13.842 { 00:17:13.842 "name": "BaseBdev1", 00:17:13.842 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:13.842 "is_configured": true, 00:17:13.842 "data_offset": 2048, 00:17:13.842 "data_size": 63488 00:17:13.842 }, 00:17:13.842 { 00:17:13.842 "name": null, 00:17:13.842 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:13.842 "is_configured": false, 00:17:13.842 "data_offset": 2048, 00:17:13.842 "data_size": 63488 00:17:13.842 }, 00:17:13.842 { 00:17:13.842 "name": "BaseBdev3", 00:17:13.842 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:13.842 "is_configured": true, 00:17:13.842 "data_offset": 2048, 00:17:13.842 "data_size": 63488 00:17:13.842 } 00:17:13.842 ] 00:17:13.842 }' 00:17:13.842 11:59:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.842 11:59:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.775 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.775 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:14.775 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:17:14.775 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:15.033 [2024-07-21 11:59:13.807558] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.033 11:59:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.291 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.291 "name": "Existed_Raid", 00:17:15.291 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:15.291 "strip_size_kb": 64, 00:17:15.291 "state": "configuring", 00:17:15.291 "raid_level": "raid0", 00:17:15.291 "superblock": true, 00:17:15.291 "num_base_bdevs": 3, 00:17:15.291 "num_base_bdevs_discovered": 1, 00:17:15.291 "num_base_bdevs_operational": 3, 00:17:15.291 "base_bdevs_list": [ 00:17:15.291 { 00:17:15.291 "name": null, 00:17:15.291 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:15.291 "is_configured": false, 00:17:15.291 "data_offset": 2048, 00:17:15.291 "data_size": 63488 00:17:15.291 }, 00:17:15.291 { 00:17:15.291 "name": null, 00:17:15.291 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:15.291 "is_configured": false, 00:17:15.291 "data_offset": 2048, 00:17:15.291 "data_size": 63488 00:17:15.291 }, 00:17:15.291 { 00:17:15.291 "name": "BaseBdev3", 00:17:15.291 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:15.291 "is_configured": true, 00:17:15.291 "data_offset": 2048, 00:17:15.291 "data_size": 63488 00:17:15.291 } 00:17:15.291 ] 00:17:15.291 }' 00:17:15.291 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.291 11:59:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.857 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.857 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:16.115 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:17:16.115 11:59:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:16.373 [2024-07-21 11:59:15.208981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.373 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.630 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:16.630 "name": "Existed_Raid", 00:17:16.630 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:16.630 "strip_size_kb": 64, 00:17:16.630 "state": "configuring", 00:17:16.630 "raid_level": "raid0", 00:17:16.630 "superblock": true, 00:17:16.630 "num_base_bdevs": 3, 00:17:16.630 "num_base_bdevs_discovered": 2, 00:17:16.630 "num_base_bdevs_operational": 3, 00:17:16.630 "base_bdevs_list": [ 00:17:16.630 { 00:17:16.630 "name": null, 00:17:16.630 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:16.630 "is_configured": false, 00:17:16.630 "data_offset": 2048, 00:17:16.630 "data_size": 63488 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "name": "BaseBdev2", 00:17:16.630 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:16.630 "is_configured": true, 00:17:16.630 "data_offset": 2048, 00:17:16.630 "data_size": 63488 00:17:16.630 }, 00:17:16.630 { 00:17:16.630 "name": "BaseBdev3", 00:17:16.630 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:16.630 "is_configured": true, 00:17:16.630 "data_offset": 2048, 00:17:16.630 "data_size": 63488 00:17:16.630 } 00:17:16.630 ] 00:17:16.630 }' 00:17:16.630 11:59:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:16.630 11:59:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.562 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.562 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:17.562 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:17:17.562 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.562 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:17.820 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 37fd4cfa-b234-438b-b109-89e718be7444 00:17:18.082 [2024-07-21 11:59:16.914378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:18.082 [2024-07-21 11:59:16.914667] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:18.082 [2024-07-21 11:59:16.914682] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:18.082 [2024-07-21 11:59:16.914775] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:18.082 [2024-07-21 11:59:16.915160] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:18.082 [2024-07-21 11:59:16.915186] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008780 00:17:18.082 [2024-07-21 11:59:16.915297] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.082 NewBaseBdev 00:17:18.082 11:59:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:17:18.082 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:17:18.082 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:18.082 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:17:18.082 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:18.082 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:18.082 11:59:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.343 11:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:18.601 [ 00:17:18.601 { 00:17:18.601 "name": "NewBaseBdev", 00:17:18.601 "aliases": [ 00:17:18.601 "37fd4cfa-b234-438b-b109-89e718be7444" 00:17:18.601 ], 00:17:18.601 "product_name": "Malloc disk", 00:17:18.601 "block_size": 512, 00:17:18.601 "num_blocks": 65536, 00:17:18.601 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:18.601 "assigned_rate_limits": { 00:17:18.601 "rw_ios_per_sec": 0, 00:17:18.601 "rw_mbytes_per_sec": 0, 00:17:18.601 "r_mbytes_per_sec": 0, 00:17:18.601 "w_mbytes_per_sec": 0 00:17:18.601 }, 00:17:18.601 "claimed": true, 00:17:18.601 "claim_type": "exclusive_write", 00:17:18.601 "zoned": false, 00:17:18.601 "supported_io_types": { 00:17:18.601 "read": true, 00:17:18.601 "write": true, 00:17:18.601 "unmap": true, 00:17:18.601 "write_zeroes": true, 00:17:18.601 "flush": true, 00:17:18.601 "reset": true, 00:17:18.601 "compare": false, 00:17:18.601 "compare_and_write": false, 00:17:18.601 "abort": true, 00:17:18.601 "nvme_admin": false, 00:17:18.601 "nvme_io": false 00:17:18.601 }, 00:17:18.601 "memory_domains": [ 00:17:18.601 { 00:17:18.601 "dma_device_id": "system", 00:17:18.601 "dma_device_type": 1 00:17:18.601 }, 00:17:18.601 { 00:17:18.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.601 "dma_device_type": 2 00:17:18.601 } 00:17:18.601 ], 00:17:18.601 "driver_specific": {} 00:17:18.601 } 00:17:18.601 ] 00:17:18.601 11:59:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.601 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:18.859 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.859 "name": "Existed_Raid", 00:17:18.859 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:18.859 "strip_size_kb": 64, 00:17:18.859 "state": "online", 00:17:18.859 "raid_level": "raid0", 00:17:18.859 "superblock": true, 00:17:18.859 "num_base_bdevs": 3, 00:17:18.859 "num_base_bdevs_discovered": 3, 00:17:18.859 "num_base_bdevs_operational": 3, 00:17:18.859 "base_bdevs_list": [ 00:17:18.859 { 00:17:18.859 "name": "NewBaseBdev", 00:17:18.859 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:18.859 "is_configured": true, 00:17:18.859 "data_offset": 2048, 00:17:18.859 "data_size": 63488 00:17:18.859 }, 00:17:18.859 { 00:17:18.859 "name": "BaseBdev2", 00:17:18.859 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:18.859 "is_configured": true, 00:17:18.859 "data_offset": 2048, 00:17:18.859 "data_size": 63488 00:17:18.859 }, 00:17:18.859 { 00:17:18.859 "name": "BaseBdev3", 00:17:18.859 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:18.859 "is_configured": true, 00:17:18.859 "data_offset": 2048, 00:17:18.859 "data_size": 63488 00:17:18.859 } 00:17:18.859 ] 00:17:18.859 }' 00:17:18.859 11:59:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.859 11:59:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:19.792 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:17:19.792 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:19.792 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:19.792 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:19.792 11:59:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:19.792 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:17:19.792 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:19.792 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:19.792 [2024-07-21 11:59:18.547612] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.792 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:19.792 "name": "Existed_Raid", 00:17:19.792 "aliases": [ 00:17:19.792 "3e9256df-8e4b-47fa-8496-0364e64c9c51" 00:17:19.792 ], 00:17:19.792 "product_name": "Raid Volume", 00:17:19.792 "block_size": 512, 00:17:19.792 "num_blocks": 190464, 00:17:19.792 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:19.792 "assigned_rate_limits": { 00:17:19.793 "rw_ios_per_sec": 0, 00:17:19.793 "rw_mbytes_per_sec": 0, 00:17:19.793 "r_mbytes_per_sec": 0, 00:17:19.793 "w_mbytes_per_sec": 0 00:17:19.793 }, 00:17:19.793 "claimed": false, 00:17:19.793 "zoned": false, 00:17:19.793 "supported_io_types": { 00:17:19.793 "read": true, 00:17:19.793 "write": true, 00:17:19.793 "unmap": true, 00:17:19.793 "write_zeroes": true, 00:17:19.793 "flush": true, 00:17:19.793 "reset": true, 00:17:19.793 "compare": false, 00:17:19.793 "compare_and_write": false, 00:17:19.793 "abort": false, 00:17:19.793 "nvme_admin": false, 00:17:19.793 "nvme_io": false 00:17:19.793 }, 00:17:19.793 "memory_domains": [ 00:17:19.793 { 00:17:19.793 "dma_device_id": "system", 00:17:19.793 "dma_device_type": 1 00:17:19.793 }, 00:17:19.793 { 00:17:19.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.793 "dma_device_type": 2 00:17:19.793 }, 00:17:19.793 { 00:17:19.793 "dma_device_id": "system", 00:17:19.793 "dma_device_type": 1 00:17:19.793 }, 00:17:19.793 { 00:17:19.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.793 "dma_device_type": 2 00:17:19.793 }, 00:17:19.793 { 00:17:19.793 "dma_device_id": "system", 00:17:19.793 "dma_device_type": 1 00:17:19.793 }, 00:17:19.793 { 00:17:19.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.793 "dma_device_type": 2 00:17:19.793 } 00:17:19.793 ], 00:17:19.793 "driver_specific": { 00:17:19.793 "raid": { 00:17:19.793 "uuid": "3e9256df-8e4b-47fa-8496-0364e64c9c51", 00:17:19.793 "strip_size_kb": 64, 00:17:19.793 "state": "online", 00:17:19.793 "raid_level": "raid0", 00:17:19.793 "superblock": true, 00:17:19.793 "num_base_bdevs": 3, 00:17:19.793 "num_base_bdevs_discovered": 3, 00:17:19.793 "num_base_bdevs_operational": 3, 00:17:19.793 "base_bdevs_list": [ 00:17:19.793 { 00:17:19.793 "name": "NewBaseBdev", 00:17:19.793 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:19.793 "is_configured": true, 00:17:19.793 "data_offset": 2048, 00:17:19.793 "data_size": 63488 00:17:19.793 }, 00:17:19.793 { 00:17:19.793 "name": "BaseBdev2", 00:17:19.793 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:19.793 "is_configured": true, 00:17:19.793 "data_offset": 2048, 00:17:19.793 "data_size": 63488 00:17:19.793 }, 00:17:19.793 { 00:17:19.793 "name": "BaseBdev3", 00:17:19.793 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:19.793 "is_configured": true, 00:17:19.793 "data_offset": 2048, 00:17:19.793 "data_size": 63488 00:17:19.793 } 00:17:19.793 ] 00:17:19.793 } 00:17:19.793 } 00:17:19.793 }' 00:17:19.793 11:59:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.793 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:17:19.793 BaseBdev2 00:17:19.793 BaseBdev3' 00:17:19.793 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:19.793 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:17:19.793 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.051 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:20.051 "name": "NewBaseBdev", 00:17:20.051 "aliases": [ 00:17:20.051 "37fd4cfa-b234-438b-b109-89e718be7444" 00:17:20.051 ], 00:17:20.051 "product_name": "Malloc disk", 00:17:20.051 "block_size": 512, 00:17:20.051 "num_blocks": 65536, 00:17:20.051 "uuid": "37fd4cfa-b234-438b-b109-89e718be7444", 00:17:20.051 "assigned_rate_limits": { 00:17:20.051 "rw_ios_per_sec": 0, 00:17:20.051 "rw_mbytes_per_sec": 0, 00:17:20.051 "r_mbytes_per_sec": 0, 00:17:20.051 "w_mbytes_per_sec": 0 00:17:20.051 }, 00:17:20.051 "claimed": true, 00:17:20.051 "claim_type": "exclusive_write", 00:17:20.051 "zoned": false, 00:17:20.051 "supported_io_types": { 00:17:20.051 "read": true, 00:17:20.051 "write": true, 00:17:20.051 "unmap": true, 00:17:20.051 "write_zeroes": true, 00:17:20.051 "flush": true, 00:17:20.051 "reset": true, 00:17:20.051 "compare": false, 00:17:20.051 "compare_and_write": false, 00:17:20.051 "abort": true, 00:17:20.051 "nvme_admin": false, 00:17:20.051 "nvme_io": false 00:17:20.051 }, 00:17:20.051 "memory_domains": [ 00:17:20.051 { 00:17:20.051 "dma_device_id": "system", 00:17:20.051 "dma_device_type": 1 00:17:20.051 }, 00:17:20.051 { 00:17:20.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.051 "dma_device_type": 2 00:17:20.051 } 00:17:20.051 ], 00:17:20.051 "driver_specific": {} 00:17:20.051 }' 00:17:20.051 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.309 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.309 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:20.309 11:59:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.309 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.309 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:20.309 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.309 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.567 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:20.567 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.567 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.567 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:20.567 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
00:17:20.567 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:20.567 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.825 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:20.825 "name": "BaseBdev2", 00:17:20.825 "aliases": [ 00:17:20.825 "88104ecb-05f2-4514-942f-3a61e01b9ae6" 00:17:20.825 ], 00:17:20.825 "product_name": "Malloc disk", 00:17:20.825 "block_size": 512, 00:17:20.825 "num_blocks": 65536, 00:17:20.825 "uuid": "88104ecb-05f2-4514-942f-3a61e01b9ae6", 00:17:20.825 "assigned_rate_limits": { 00:17:20.825 "rw_ios_per_sec": 0, 00:17:20.825 "rw_mbytes_per_sec": 0, 00:17:20.825 "r_mbytes_per_sec": 0, 00:17:20.825 "w_mbytes_per_sec": 0 00:17:20.825 }, 00:17:20.825 "claimed": true, 00:17:20.825 "claim_type": "exclusive_write", 00:17:20.825 "zoned": false, 00:17:20.825 "supported_io_types": { 00:17:20.825 "read": true, 00:17:20.825 "write": true, 00:17:20.825 "unmap": true, 00:17:20.825 "write_zeroes": true, 00:17:20.825 "flush": true, 00:17:20.825 "reset": true, 00:17:20.825 "compare": false, 00:17:20.825 "compare_and_write": false, 00:17:20.825 "abort": true, 00:17:20.825 "nvme_admin": false, 00:17:20.825 "nvme_io": false 00:17:20.825 }, 00:17:20.825 "memory_domains": [ 00:17:20.825 { 00:17:20.825 "dma_device_id": "system", 00:17:20.825 "dma_device_type": 1 00:17:20.825 }, 00:17:20.825 { 00:17:20.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.825 "dma_device_type": 2 00:17:20.825 } 00:17:20.825 ], 00:17:20.825 "driver_specific": {} 00:17:20.825 }' 00:17:20.825 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.825 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.825 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:20.825 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.825 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:17:21.083 11:59:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:21.341 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:21.341 "name": "BaseBdev3", 00:17:21.341 "aliases": 
[ 00:17:21.341 "9153586d-3fa0-4a22-9e50-3a173bb23e35" 00:17:21.341 ], 00:17:21.341 "product_name": "Malloc disk", 00:17:21.341 "block_size": 512, 00:17:21.341 "num_blocks": 65536, 00:17:21.341 "uuid": "9153586d-3fa0-4a22-9e50-3a173bb23e35", 00:17:21.341 "assigned_rate_limits": { 00:17:21.341 "rw_ios_per_sec": 0, 00:17:21.341 "rw_mbytes_per_sec": 0, 00:17:21.341 "r_mbytes_per_sec": 0, 00:17:21.341 "w_mbytes_per_sec": 0 00:17:21.341 }, 00:17:21.341 "claimed": true, 00:17:21.341 "claim_type": "exclusive_write", 00:17:21.341 "zoned": false, 00:17:21.341 "supported_io_types": { 00:17:21.341 "read": true, 00:17:21.341 "write": true, 00:17:21.341 "unmap": true, 00:17:21.341 "write_zeroes": true, 00:17:21.341 "flush": true, 00:17:21.341 "reset": true, 00:17:21.341 "compare": false, 00:17:21.341 "compare_and_write": false, 00:17:21.341 "abort": true, 00:17:21.341 "nvme_admin": false, 00:17:21.341 "nvme_io": false 00:17:21.341 }, 00:17:21.341 "memory_domains": [ 00:17:21.341 { 00:17:21.341 "dma_device_id": "system", 00:17:21.341 "dma_device_type": 1 00:17:21.341 }, 00:17:21.341 { 00:17:21.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.341 "dma_device_type": 2 00:17:21.341 } 00:17:21.341 ], 00:17:21.341 "driver_specific": {} 00:17:21.341 }' 00:17:21.341 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.341 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:21.599 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:21.599 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.599 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:21.599 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:21.599 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.599 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:21.599 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:21.599 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.857 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:21.857 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:21.857 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:22.116 [2024-07-21 11:59:20.759804] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:22.116 [2024-07-21 11:59:20.759855] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.116 [2024-07-21 11:59:20.759981] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.116 [2024-07-21 11:59:20.760065] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.116 [2024-07-21 11:59:20.760077] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name Existed_Raid, state offline 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 136807 00:17:22.116 11:59:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 136807 ']' 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 136807 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 136807 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 136807' 00:17:22.116 killing process with pid 136807 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 136807 00:17:22.116 [2024-07-21 11:59:20.803293] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.116 11:59:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 136807 00:17:22.116 [2024-07-21 11:59:20.831067] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.375 11:59:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:17:22.375 00:17:22.375 real 0m30.079s 00:17:22.375 user 0m57.254s 00:17:22.375 sys 0m3.624s 00:17:22.375 11:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:22.375 11:59:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:22.375 ************************************ 00:17:22.375 END TEST raid_state_function_test_sb 00:17:22.375 ************************************ 00:17:22.375 11:59:21 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:17:22.375 11:59:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:22.375 11:59:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:22.375 11:59:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.375 ************************************ 00:17:22.375 START TEST raid_superblock_test 00:17:22.375 ************************************ 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 3 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local 
raid_bdev_name=raid_bdev1 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=137783 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 137783 /var/tmp/spdk-raid.sock 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 137783 ']' 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:22.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:22.375 11:59:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.375 [2024-07-21 11:59:21.192462] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:22.375 [2024-07-21 11:59:21.193377] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137783 ] 00:17:22.634 [2024-07-21 11:59:21.365117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.634 [2024-07-21 11:59:21.458165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.892 [2024-07-21 11:59:21.516912] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:23.457 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:23.715 malloc1 00:17:23.715 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:23.973 [2024-07-21 11:59:22.696579] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:23.973 [2024-07-21 11:59:22.696739] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.973 [2024-07-21 11:59:22.696800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:17:23.973 [2024-07-21 11:59:22.696846] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.973 [2024-07-21 11:59:22.699595] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.973 [2024-07-21 11:59:22.699671] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:23.973 pt1 00:17:23.973 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:23.973 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:23.973 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:23.973 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:23.973 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:23.973 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:17:23.973 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:23.973 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:23.973 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:24.230 malloc2 00:17:24.230 11:59:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:24.487 [2024-07-21 11:59:23.152359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:24.487 [2024-07-21 11:59:23.152511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.487 [2024-07-21 11:59:23.152581] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:17:24.487 [2024-07-21 11:59:23.152623] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.487 [2024-07-21 11:59:23.155213] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.487 [2024-07-21 11:59:23.155282] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:24.487 pt2 00:17:24.487 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:24.487 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:24.487 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:17:24.487 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:17:24.487 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:24.487 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:24.487 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:24.487 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:24.487 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:24.744 malloc3 00:17:24.744 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:25.001 [2024-07-21 11:59:23.642268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:25.001 [2024-07-21 11:59:23.642405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.001 [2024-07-21 11:59:23.642463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:17:25.001 [2024-07-21 11:59:23.642527] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.001 [2024-07-21 11:59:23.645223] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.001 [2024-07-21 11:59:23.645302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:25.001 pt3 00:17:25.001 11:59:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:25.001 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:25.001 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:25.001 [2024-07-21 11:59:23.862396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:25.001 [2024-07-21 11:59:23.864756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.001 [2024-07-21 11:59:23.864852] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:25.001 [2024-07-21 11:59:23.865120] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:17:25.001 [2024-07-21 11:59:23.865147] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:25.001 [2024-07-21 11:59:23.865330] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:25.001 [2024-07-21 11:59:23.865782] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:17:25.001 [2024-07-21 11:59:23.865837] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:17:25.001 [2024-07-21 11:59:23.866049] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.259 11:59:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.259 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.259 "name": "raid_bdev1", 00:17:25.259 "uuid": "cb28d6c5-9cd6-4434-85a0-0b5365359d0a", 00:17:25.259 "strip_size_kb": 64, 00:17:25.259 "state": "online", 00:17:25.259 "raid_level": "raid0", 00:17:25.259 "superblock": true, 00:17:25.259 "num_base_bdevs": 3, 00:17:25.259 "num_base_bdevs_discovered": 3, 00:17:25.259 "num_base_bdevs_operational": 3, 00:17:25.259 "base_bdevs_list": [ 00:17:25.259 { 00:17:25.259 "name": "pt1", 00:17:25.259 "uuid": "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f", 00:17:25.259 
"is_configured": true, 00:17:25.259 "data_offset": 2048, 00:17:25.259 "data_size": 63488 00:17:25.259 }, 00:17:25.259 { 00:17:25.259 "name": "pt2", 00:17:25.259 "uuid": "91849f22-3b7f-564b-bc79-788038d2145f", 00:17:25.259 "is_configured": true, 00:17:25.259 "data_offset": 2048, 00:17:25.259 "data_size": 63488 00:17:25.259 }, 00:17:25.259 { 00:17:25.259 "name": "pt3", 00:17:25.259 "uuid": "d45ffd3b-6493-5aed-bcf9-fe33e1742042", 00:17:25.259 "is_configured": true, 00:17:25.259 "data_offset": 2048, 00:17:25.259 "data_size": 63488 00:17:25.259 } 00:17:25.259 ] 00:17:25.259 }' 00:17:25.259 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.259 11:59:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.196 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:26.196 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:26.196 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:26.196 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:26.196 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:26.196 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:26.196 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:26.196 11:59:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:26.196 [2024-07-21 11:59:24.998887] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.196 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:26.196 "name": "raid_bdev1", 00:17:26.196 "aliases": [ 00:17:26.196 "cb28d6c5-9cd6-4434-85a0-0b5365359d0a" 00:17:26.196 ], 00:17:26.196 "product_name": "Raid Volume", 00:17:26.196 "block_size": 512, 00:17:26.196 "num_blocks": 190464, 00:17:26.196 "uuid": "cb28d6c5-9cd6-4434-85a0-0b5365359d0a", 00:17:26.196 "assigned_rate_limits": { 00:17:26.196 "rw_ios_per_sec": 0, 00:17:26.196 "rw_mbytes_per_sec": 0, 00:17:26.196 "r_mbytes_per_sec": 0, 00:17:26.196 "w_mbytes_per_sec": 0 00:17:26.196 }, 00:17:26.196 "claimed": false, 00:17:26.196 "zoned": false, 00:17:26.196 "supported_io_types": { 00:17:26.196 "read": true, 00:17:26.196 "write": true, 00:17:26.196 "unmap": true, 00:17:26.196 "write_zeroes": true, 00:17:26.196 "flush": true, 00:17:26.196 "reset": true, 00:17:26.196 "compare": false, 00:17:26.196 "compare_and_write": false, 00:17:26.196 "abort": false, 00:17:26.196 "nvme_admin": false, 00:17:26.196 "nvme_io": false 00:17:26.196 }, 00:17:26.196 "memory_domains": [ 00:17:26.196 { 00:17:26.196 "dma_device_id": "system", 00:17:26.196 "dma_device_type": 1 00:17:26.196 }, 00:17:26.196 { 00:17:26.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.196 "dma_device_type": 2 00:17:26.196 }, 00:17:26.196 { 00:17:26.196 "dma_device_id": "system", 00:17:26.196 "dma_device_type": 1 00:17:26.196 }, 00:17:26.196 { 00:17:26.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.196 "dma_device_type": 2 00:17:26.196 }, 00:17:26.196 { 00:17:26.196 "dma_device_id": "system", 00:17:26.196 "dma_device_type": 1 00:17:26.196 }, 00:17:26.196 { 00:17:26.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.196 "dma_device_type": 
2 00:17:26.196 } 00:17:26.196 ], 00:17:26.196 "driver_specific": { 00:17:26.196 "raid": { 00:17:26.196 "uuid": "cb28d6c5-9cd6-4434-85a0-0b5365359d0a", 00:17:26.196 "strip_size_kb": 64, 00:17:26.196 "state": "online", 00:17:26.196 "raid_level": "raid0", 00:17:26.196 "superblock": true, 00:17:26.196 "num_base_bdevs": 3, 00:17:26.196 "num_base_bdevs_discovered": 3, 00:17:26.196 "num_base_bdevs_operational": 3, 00:17:26.196 "base_bdevs_list": [ 00:17:26.197 { 00:17:26.197 "name": "pt1", 00:17:26.197 "uuid": "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f", 00:17:26.197 "is_configured": true, 00:17:26.197 "data_offset": 2048, 00:17:26.197 "data_size": 63488 00:17:26.197 }, 00:17:26.197 { 00:17:26.197 "name": "pt2", 00:17:26.197 "uuid": "91849f22-3b7f-564b-bc79-788038d2145f", 00:17:26.197 "is_configured": true, 00:17:26.197 "data_offset": 2048, 00:17:26.197 "data_size": 63488 00:17:26.197 }, 00:17:26.197 { 00:17:26.197 "name": "pt3", 00:17:26.197 "uuid": "d45ffd3b-6493-5aed-bcf9-fe33e1742042", 00:17:26.197 "is_configured": true, 00:17:26.197 "data_offset": 2048, 00:17:26.197 "data_size": 63488 00:17:26.197 } 00:17:26.197 ] 00:17:26.197 } 00:17:26.197 } 00:17:26.197 }' 00:17:26.197 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.454 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:26.454 pt2 00:17:26.454 pt3' 00:17:26.454 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.454 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:26.454 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:26.712 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:26.712 "name": "pt1", 00:17:26.712 "aliases": [ 00:17:26.712 "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f" 00:17:26.712 ], 00:17:26.712 "product_name": "passthru", 00:17:26.712 "block_size": 512, 00:17:26.712 "num_blocks": 65536, 00:17:26.712 "uuid": "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f", 00:17:26.712 "assigned_rate_limits": { 00:17:26.712 "rw_ios_per_sec": 0, 00:17:26.712 "rw_mbytes_per_sec": 0, 00:17:26.712 "r_mbytes_per_sec": 0, 00:17:26.712 "w_mbytes_per_sec": 0 00:17:26.712 }, 00:17:26.712 "claimed": true, 00:17:26.712 "claim_type": "exclusive_write", 00:17:26.712 "zoned": false, 00:17:26.712 "supported_io_types": { 00:17:26.712 "read": true, 00:17:26.712 "write": true, 00:17:26.712 "unmap": true, 00:17:26.712 "write_zeroes": true, 00:17:26.712 "flush": true, 00:17:26.712 "reset": true, 00:17:26.712 "compare": false, 00:17:26.712 "compare_and_write": false, 00:17:26.712 "abort": true, 00:17:26.712 "nvme_admin": false, 00:17:26.712 "nvme_io": false 00:17:26.713 }, 00:17:26.713 "memory_domains": [ 00:17:26.713 { 00:17:26.713 "dma_device_id": "system", 00:17:26.713 "dma_device_type": 1 00:17:26.713 }, 00:17:26.713 { 00:17:26.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.713 "dma_device_type": 2 00:17:26.713 } 00:17:26.713 ], 00:17:26.713 "driver_specific": { 00:17:26.713 "passthru": { 00:17:26.713 "name": "pt1", 00:17:26.713 "base_bdev_name": "malloc1" 00:17:26.713 } 00:17:26.713 } 00:17:26.713 }' 00:17:26.713 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.713 11:59:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.713 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:26.713 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.713 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.713 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:26.713 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.971 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.971 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:26.971 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.971 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.971 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:26.971 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.971 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:26.971 11:59:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.230 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.230 "name": "pt2", 00:17:27.230 "aliases": [ 00:17:27.230 "91849f22-3b7f-564b-bc79-788038d2145f" 00:17:27.230 ], 00:17:27.230 "product_name": "passthru", 00:17:27.230 "block_size": 512, 00:17:27.230 "num_blocks": 65536, 00:17:27.230 "uuid": "91849f22-3b7f-564b-bc79-788038d2145f", 00:17:27.230 "assigned_rate_limits": { 00:17:27.230 "rw_ios_per_sec": 0, 00:17:27.230 "rw_mbytes_per_sec": 0, 00:17:27.230 "r_mbytes_per_sec": 0, 00:17:27.230 "w_mbytes_per_sec": 0 00:17:27.230 }, 00:17:27.230 "claimed": true, 00:17:27.230 "claim_type": "exclusive_write", 00:17:27.230 "zoned": false, 00:17:27.230 "supported_io_types": { 00:17:27.230 "read": true, 00:17:27.230 "write": true, 00:17:27.230 "unmap": true, 00:17:27.230 "write_zeroes": true, 00:17:27.230 "flush": true, 00:17:27.230 "reset": true, 00:17:27.230 "compare": false, 00:17:27.230 "compare_and_write": false, 00:17:27.230 "abort": true, 00:17:27.230 "nvme_admin": false, 00:17:27.230 "nvme_io": false 00:17:27.230 }, 00:17:27.230 "memory_domains": [ 00:17:27.230 { 00:17:27.230 "dma_device_id": "system", 00:17:27.230 "dma_device_type": 1 00:17:27.230 }, 00:17:27.230 { 00:17:27.230 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.230 "dma_device_type": 2 00:17:27.230 } 00:17:27.230 ], 00:17:27.230 "driver_specific": { 00:17:27.230 "passthru": { 00:17:27.230 "name": "pt2", 00:17:27.230 "base_bdev_name": "malloc2" 00:17:27.230 } 00:17:27.230 } 00:17:27.230 }' 00:17:27.230 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.489 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.489 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:27.489 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.489 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.489 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:27.489 11:59:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.489 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.748 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:27.748 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.748 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.748 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:27.748 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:27.748 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:27.748 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:28.006 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:28.006 "name": "pt3", 00:17:28.006 "aliases": [ 00:17:28.006 "d45ffd3b-6493-5aed-bcf9-fe33e1742042" 00:17:28.006 ], 00:17:28.006 "product_name": "passthru", 00:17:28.006 "block_size": 512, 00:17:28.006 "num_blocks": 65536, 00:17:28.006 "uuid": "d45ffd3b-6493-5aed-bcf9-fe33e1742042", 00:17:28.006 "assigned_rate_limits": { 00:17:28.006 "rw_ios_per_sec": 0, 00:17:28.006 "rw_mbytes_per_sec": 0, 00:17:28.006 "r_mbytes_per_sec": 0, 00:17:28.006 "w_mbytes_per_sec": 0 00:17:28.006 }, 00:17:28.006 "claimed": true, 00:17:28.006 "claim_type": "exclusive_write", 00:17:28.006 "zoned": false, 00:17:28.006 "supported_io_types": { 00:17:28.006 "read": true, 00:17:28.006 "write": true, 00:17:28.006 "unmap": true, 00:17:28.006 "write_zeroes": true, 00:17:28.006 "flush": true, 00:17:28.006 "reset": true, 00:17:28.006 "compare": false, 00:17:28.006 "compare_and_write": false, 00:17:28.006 "abort": true, 00:17:28.006 "nvme_admin": false, 00:17:28.006 "nvme_io": false 00:17:28.006 }, 00:17:28.006 "memory_domains": [ 00:17:28.006 { 00:17:28.006 "dma_device_id": "system", 00:17:28.007 "dma_device_type": 1 00:17:28.007 }, 00:17:28.007 { 00:17:28.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.007 "dma_device_type": 2 00:17:28.007 } 00:17:28.007 ], 00:17:28.007 "driver_specific": { 00:17:28.007 "passthru": { 00:17:28.007 "name": "pt3", 00:17:28.007 "base_bdev_name": "malloc3" 00:17:28.007 } 00:17:28.007 } 00:17:28.007 }' 00:17:28.007 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.007 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:28.007 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:28.007 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.007 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:28.265 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:28.265 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.265 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:28.265 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:28.265 11:59:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:28.265 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:17:28.265 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:28.265 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:28.265 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:28.522 [2024-07-21 11:59:27.359581] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:28.522 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=cb28d6c5-9cd6-4434-85a0-0b5365359d0a 00:17:28.522 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z cb28d6c5-9cd6-4434-85a0-0b5365359d0a ']' 00:17:28.522 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:28.781 [2024-07-21 11:59:27.643428] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.781 [2024-07-21 11:59:27.643479] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.781 [2024-07-21 11:59:27.643631] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.781 [2024-07-21 11:59:27.643718] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.781 [2024-07-21 11:59:27.643734] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:17:29.039 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.039 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:29.039 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:29.039 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:29.039 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.039 11:59:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:29.298 11:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.298 11:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:29.555 11:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:29.555 11:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:29.812 11:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:29.812 11:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:30.070 11:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:30.070 11:59:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:30.070 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:30.071 11:59:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:30.328 [2024-07-21 11:59:29.107749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:30.328 [2024-07-21 11:59:29.109924] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:30.328 [2024-07-21 11:59:29.110009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:30.328 [2024-07-21 11:59:29.110072] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:30.328 [2024-07-21 11:59:29.110186] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:30.328 [2024-07-21 11:59:29.110279] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:30.328 [2024-07-21 11:59:29.110337] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:30.328 [2024-07-21 11:59:29.110350] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:17:30.328 request: 00:17:30.328 { 00:17:30.328 "name": "raid_bdev1", 00:17:30.328 "raid_level": "raid0", 00:17:30.328 "base_bdevs": [ 00:17:30.328 "malloc1", 00:17:30.328 "malloc2", 00:17:30.328 "malloc3" 00:17:30.328 ], 00:17:30.328 "superblock": false, 00:17:30.328 "strip_size_kb": 64, 00:17:30.328 "method": "bdev_raid_create", 00:17:30.328 "req_id": 1 00:17:30.328 } 00:17:30.328 Got JSON-RPC error response 00:17:30.328 response: 00:17:30.328 { 00:17:30.328 "code": -17, 00:17:30.328 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:30.328 } 00:17:30.328 11:59:29 bdev_raid.raid_superblock_test -- 
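The NOT-wrapped bdev_raid_create above is a negative check: recreating raid_bdev1 directly from malloc1-malloc3 has to fail with -17 (File exists), because those bdevs still carry the superblock written for the volume that was just deleted ("Superblock of a different raid bdev found on bdev malloc1/2/3"). In script form the expectation is roughly the following sketch; the socket path and bdev names are taken from the trace, the surrounding if/echo is only illustrative:

    # expected to fail: malloc1-3 still carry superblocks of the earlier raid_bdev1
    if scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create \
           -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi
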
common/autotest_common.sh@651 -- # es=1 00:17:30.328 11:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:30.328 11:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:30.328 11:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:30.328 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:30.328 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.587 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:30.587 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:30.587 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:30.845 [2024-07-21 11:59:29.603227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:30.845 [2024-07-21 11:59:29.603373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.845 [2024-07-21 11:59:29.603421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:30.845 [2024-07-21 11:59:29.603446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.845 [2024-07-21 11:59:29.605962] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.845 [2024-07-21 11:59:29.606033] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:30.845 [2024-07-21 11:59:29.606168] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:30.845 [2024-07-21 11:59:29.606257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:30.845 pt1 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.845 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.103 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:17:31.103 "name": "raid_bdev1", 00:17:31.103 "uuid": "cb28d6c5-9cd6-4434-85a0-0b5365359d0a", 00:17:31.103 "strip_size_kb": 64, 00:17:31.103 "state": "configuring", 00:17:31.103 "raid_level": "raid0", 00:17:31.103 "superblock": true, 00:17:31.103 "num_base_bdevs": 3, 00:17:31.103 "num_base_bdevs_discovered": 1, 00:17:31.103 "num_base_bdevs_operational": 3, 00:17:31.103 "base_bdevs_list": [ 00:17:31.103 { 00:17:31.103 "name": "pt1", 00:17:31.103 "uuid": "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f", 00:17:31.103 "is_configured": true, 00:17:31.103 "data_offset": 2048, 00:17:31.103 "data_size": 63488 00:17:31.103 }, 00:17:31.103 { 00:17:31.103 "name": null, 00:17:31.103 "uuid": "91849f22-3b7f-564b-bc79-788038d2145f", 00:17:31.103 "is_configured": false, 00:17:31.103 "data_offset": 2048, 00:17:31.103 "data_size": 63488 00:17:31.103 }, 00:17:31.103 { 00:17:31.103 "name": null, 00:17:31.103 "uuid": "d45ffd3b-6493-5aed-bcf9-fe33e1742042", 00:17:31.103 "is_configured": false, 00:17:31.103 "data_offset": 2048, 00:17:31.103 "data_size": 63488 00:17:31.103 } 00:17:31.103 ] 00:17:31.103 }' 00:17:31.103 11:59:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.103 11:59:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.036 11:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:17:32.036 11:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:32.036 [2024-07-21 11:59:30.799542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:32.036 [2024-07-21 11:59:30.799693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:32.036 [2024-07-21 11:59:30.799744] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:32.036 [2024-07-21 11:59:30.799769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:32.036 [2024-07-21 11:59:30.800324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:32.036 [2024-07-21 11:59:30.800371] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:32.036 [2024-07-21 11:59:30.800489] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:32.036 [2024-07-21 11:59:30.800526] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:32.036 pt2 00:17:32.036 11:59:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:32.294 [2024-07-21 11:59:31.075591] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.294 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.553 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.553 "name": "raid_bdev1", 00:17:32.553 "uuid": "cb28d6c5-9cd6-4434-85a0-0b5365359d0a", 00:17:32.553 "strip_size_kb": 64, 00:17:32.553 "state": "configuring", 00:17:32.553 "raid_level": "raid0", 00:17:32.553 "superblock": true, 00:17:32.553 "num_base_bdevs": 3, 00:17:32.553 "num_base_bdevs_discovered": 1, 00:17:32.553 "num_base_bdevs_operational": 3, 00:17:32.553 "base_bdevs_list": [ 00:17:32.553 { 00:17:32.553 "name": "pt1", 00:17:32.553 "uuid": "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f", 00:17:32.553 "is_configured": true, 00:17:32.553 "data_offset": 2048, 00:17:32.553 "data_size": 63488 00:17:32.553 }, 00:17:32.553 { 00:17:32.553 "name": null, 00:17:32.553 "uuid": "91849f22-3b7f-564b-bc79-788038d2145f", 00:17:32.553 "is_configured": false, 00:17:32.553 "data_offset": 2048, 00:17:32.553 "data_size": 63488 00:17:32.553 }, 00:17:32.553 { 00:17:32.553 "name": null, 00:17:32.553 "uuid": "d45ffd3b-6493-5aed-bcf9-fe33e1742042", 00:17:32.553 "is_configured": false, 00:17:32.553 "data_offset": 2048, 00:17:32.553 "data_size": 63488 00:17:32.553 } 00:17:32.553 ] 00:17:32.553 }' 00:17:32.553 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.553 11:59:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:33.486 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:33.486 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:33.486 11:59:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:33.486 [2024-07-21 11:59:32.251801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:33.486 [2024-07-21 11:59:32.251945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.486 [2024-07-21 11:59:32.251997] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:33.486 [2024-07-21 11:59:32.252028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.486 [2024-07-21 11:59:32.252512] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.486 [2024-07-21 11:59:32.252548] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:33.486 [2024-07-21 11:59:32.252659] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:33.486 [2024-07-21 11:59:32.252687] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt2 is claimed 00:17:33.486 pt2 00:17:33.486 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:33.486 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:33.486 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:33.743 [2024-07-21 11:59:32.531869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:33.743 [2024-07-21 11:59:32.532005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:33.743 [2024-07-21 11:59:32.532046] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:33.743 [2024-07-21 11:59:32.532078] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:33.743 [2024-07-21 11:59:32.532544] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:33.743 [2024-07-21 11:59:32.532582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:33.743 [2024-07-21 11:59:32.532688] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:33.743 [2024-07-21 11:59:32.532715] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:33.743 [2024-07-21 11:59:32.532851] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:17:33.743 [2024-07-21 11:59:32.532865] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:33.743 [2024-07-21 11:59:32.532949] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:33.743 [2024-07-21 11:59:32.533262] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:17:33.743 [2024-07-21 11:59:32.533276] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:17:33.743 [2024-07-21 11:59:32.533438] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.743 pt3 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:33.743 
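At this point in the trace pt2 and pt3 have been recreated on top of malloc2/malloc3, the superblocks written earlier are found during examine ("raid superblock found on bdev pt2/pt3"), and raid_bdev1 is reassembled and brought back online without a second bdev_raid_create call. For reference, a minimal sketch of how such a volume is put together in the first place, assuming a target listening on /var/tmp/spdk-raid.sock and scripts/rpc.py invoked from the SPDK repository root; the RPC shorthand variable is only for brevity, names and UUIDs follow the ones used by this test:

    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # three 32 MiB malloc bdevs (512-byte blocks), each wrapped in a passthru bdev
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b "malloc$i"
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
    done
    # raid0 volume, 64 KiB strip size, with an on-disk superblock (-s)
    $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
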
11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.743 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.001 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.001 "name": "raid_bdev1", 00:17:34.001 "uuid": "cb28d6c5-9cd6-4434-85a0-0b5365359d0a", 00:17:34.001 "strip_size_kb": 64, 00:17:34.001 "state": "online", 00:17:34.001 "raid_level": "raid0", 00:17:34.001 "superblock": true, 00:17:34.001 "num_base_bdevs": 3, 00:17:34.001 "num_base_bdevs_discovered": 3, 00:17:34.001 "num_base_bdevs_operational": 3, 00:17:34.001 "base_bdevs_list": [ 00:17:34.001 { 00:17:34.001 "name": "pt1", 00:17:34.001 "uuid": "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f", 00:17:34.001 "is_configured": true, 00:17:34.001 "data_offset": 2048, 00:17:34.001 "data_size": 63488 00:17:34.001 }, 00:17:34.001 { 00:17:34.001 "name": "pt2", 00:17:34.001 "uuid": "91849f22-3b7f-564b-bc79-788038d2145f", 00:17:34.001 "is_configured": true, 00:17:34.001 "data_offset": 2048, 00:17:34.001 "data_size": 63488 00:17:34.001 }, 00:17:34.001 { 00:17:34.002 "name": "pt3", 00:17:34.002 "uuid": "d45ffd3b-6493-5aed-bcf9-fe33e1742042", 00:17:34.002 "is_configured": true, 00:17:34.002 "data_offset": 2048, 00:17:34.002 "data_size": 63488 00:17:34.002 } 00:17:34.002 ] 00:17:34.002 }' 00:17:34.002 11:59:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.002 11:59:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:34.933 [2024-07-21 11:59:33.699304] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:34.933 "name": "raid_bdev1", 00:17:34.933 "aliases": [ 00:17:34.933 "cb28d6c5-9cd6-4434-85a0-0b5365359d0a" 00:17:34.933 ], 00:17:34.933 "product_name": "Raid Volume", 00:17:34.933 "block_size": 512, 00:17:34.933 "num_blocks": 190464, 00:17:34.933 "uuid": "cb28d6c5-9cd6-4434-85a0-0b5365359d0a", 00:17:34.933 "assigned_rate_limits": { 00:17:34.933 "rw_ios_per_sec": 0, 00:17:34.933 "rw_mbytes_per_sec": 0, 00:17:34.933 "r_mbytes_per_sec": 0, 00:17:34.933 "w_mbytes_per_sec": 0 00:17:34.933 }, 00:17:34.933 "claimed": false, 00:17:34.933 "zoned": false, 00:17:34.933 "supported_io_types": { 00:17:34.933 "read": true, 00:17:34.933 "write": true, 00:17:34.933 "unmap": true, 00:17:34.933 "write_zeroes": true, 00:17:34.933 "flush": 
true, 00:17:34.933 "reset": true, 00:17:34.933 "compare": false, 00:17:34.933 "compare_and_write": false, 00:17:34.933 "abort": false, 00:17:34.933 "nvme_admin": false, 00:17:34.933 "nvme_io": false 00:17:34.933 }, 00:17:34.933 "memory_domains": [ 00:17:34.933 { 00:17:34.933 "dma_device_id": "system", 00:17:34.933 "dma_device_type": 1 00:17:34.933 }, 00:17:34.933 { 00:17:34.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.933 "dma_device_type": 2 00:17:34.933 }, 00:17:34.933 { 00:17:34.933 "dma_device_id": "system", 00:17:34.933 "dma_device_type": 1 00:17:34.933 }, 00:17:34.933 { 00:17:34.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.933 "dma_device_type": 2 00:17:34.933 }, 00:17:34.933 { 00:17:34.933 "dma_device_id": "system", 00:17:34.933 "dma_device_type": 1 00:17:34.933 }, 00:17:34.933 { 00:17:34.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.933 "dma_device_type": 2 00:17:34.933 } 00:17:34.933 ], 00:17:34.933 "driver_specific": { 00:17:34.933 "raid": { 00:17:34.933 "uuid": "cb28d6c5-9cd6-4434-85a0-0b5365359d0a", 00:17:34.933 "strip_size_kb": 64, 00:17:34.933 "state": "online", 00:17:34.933 "raid_level": "raid0", 00:17:34.933 "superblock": true, 00:17:34.933 "num_base_bdevs": 3, 00:17:34.933 "num_base_bdevs_discovered": 3, 00:17:34.933 "num_base_bdevs_operational": 3, 00:17:34.933 "base_bdevs_list": [ 00:17:34.933 { 00:17:34.933 "name": "pt1", 00:17:34.933 "uuid": "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f", 00:17:34.933 "is_configured": true, 00:17:34.933 "data_offset": 2048, 00:17:34.933 "data_size": 63488 00:17:34.933 }, 00:17:34.933 { 00:17:34.933 "name": "pt2", 00:17:34.933 "uuid": "91849f22-3b7f-564b-bc79-788038d2145f", 00:17:34.933 "is_configured": true, 00:17:34.933 "data_offset": 2048, 00:17:34.933 "data_size": 63488 00:17:34.933 }, 00:17:34.933 { 00:17:34.933 "name": "pt3", 00:17:34.933 "uuid": "d45ffd3b-6493-5aed-bcf9-fe33e1742042", 00:17:34.933 "is_configured": true, 00:17:34.933 "data_offset": 2048, 00:17:34.933 "data_size": 63488 00:17:34.933 } 00:17:34.933 ] 00:17:34.933 } 00:17:34.933 } 00:17:34.933 }' 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:34.933 pt2 00:17:34.933 pt3' 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:34.933 11:59:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:35.190 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:35.190 "name": "pt1", 00:17:35.190 "aliases": [ 00:17:35.190 "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f" 00:17:35.190 ], 00:17:35.191 "product_name": "passthru", 00:17:35.191 "block_size": 512, 00:17:35.191 "num_blocks": 65536, 00:17:35.191 "uuid": "4c04beb3-5e6d-5d2f-a9ee-8e69d14c019f", 00:17:35.191 "assigned_rate_limits": { 00:17:35.191 "rw_ios_per_sec": 0, 00:17:35.191 "rw_mbytes_per_sec": 0, 00:17:35.191 "r_mbytes_per_sec": 0, 00:17:35.191 "w_mbytes_per_sec": 0 00:17:35.191 }, 00:17:35.191 "claimed": true, 00:17:35.191 "claim_type": "exclusive_write", 00:17:35.191 "zoned": false, 00:17:35.191 "supported_io_types": { 00:17:35.191 "read": true, 00:17:35.191 "write": true, 
00:17:35.191 "unmap": true, 00:17:35.191 "write_zeroes": true, 00:17:35.191 "flush": true, 00:17:35.191 "reset": true, 00:17:35.191 "compare": false, 00:17:35.191 "compare_and_write": false, 00:17:35.191 "abort": true, 00:17:35.191 "nvme_admin": false, 00:17:35.191 "nvme_io": false 00:17:35.191 }, 00:17:35.191 "memory_domains": [ 00:17:35.191 { 00:17:35.191 "dma_device_id": "system", 00:17:35.191 "dma_device_type": 1 00:17:35.191 }, 00:17:35.191 { 00:17:35.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.191 "dma_device_type": 2 00:17:35.191 } 00:17:35.191 ], 00:17:35.191 "driver_specific": { 00:17:35.191 "passthru": { 00:17:35.191 "name": "pt1", 00:17:35.191 "base_bdev_name": "malloc1" 00:17:35.191 } 00:17:35.191 } 00:17:35.191 }' 00:17:35.191 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.447 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.447 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:35.447 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.447 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:35.447 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:35.447 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.447 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:35.705 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:35.705 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.705 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:35.705 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:35.705 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:35.705 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:35.705 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:35.974 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:35.974 "name": "pt2", 00:17:35.974 "aliases": [ 00:17:35.974 "91849f22-3b7f-564b-bc79-788038d2145f" 00:17:35.974 ], 00:17:35.974 "product_name": "passthru", 00:17:35.974 "block_size": 512, 00:17:35.974 "num_blocks": 65536, 00:17:35.974 "uuid": "91849f22-3b7f-564b-bc79-788038d2145f", 00:17:35.974 "assigned_rate_limits": { 00:17:35.974 "rw_ios_per_sec": 0, 00:17:35.974 "rw_mbytes_per_sec": 0, 00:17:35.974 "r_mbytes_per_sec": 0, 00:17:35.974 "w_mbytes_per_sec": 0 00:17:35.974 }, 00:17:35.974 "claimed": true, 00:17:35.974 "claim_type": "exclusive_write", 00:17:35.974 "zoned": false, 00:17:35.974 "supported_io_types": { 00:17:35.974 "read": true, 00:17:35.974 "write": true, 00:17:35.974 "unmap": true, 00:17:35.974 "write_zeroes": true, 00:17:35.974 "flush": true, 00:17:35.974 "reset": true, 00:17:35.974 "compare": false, 00:17:35.974 "compare_and_write": false, 00:17:35.974 "abort": true, 00:17:35.974 "nvme_admin": false, 00:17:35.974 "nvme_io": false 00:17:35.974 }, 00:17:35.974 "memory_domains": [ 00:17:35.974 { 00:17:35.974 "dma_device_id": "system", 00:17:35.974 "dma_device_type": 1 00:17:35.974 }, 00:17:35.974 { 
00:17:35.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:35.974 "dma_device_type": 2 00:17:35.974 } 00:17:35.974 ], 00:17:35.974 "driver_specific": { 00:17:35.974 "passthru": { 00:17:35.974 "name": "pt2", 00:17:35.974 "base_bdev_name": "malloc2" 00:17:35.974 } 00:17:35.974 } 00:17:35.974 }' 00:17:35.974 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.974 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:35.974 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:35.974 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.231 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.231 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:36.231 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.231 11:59:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:36.231 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:36.231 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.231 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:36.489 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:36.489 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:36.489 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:17:36.489 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:36.747 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:36.747 "name": "pt3", 00:17:36.747 "aliases": [ 00:17:36.747 "d45ffd3b-6493-5aed-bcf9-fe33e1742042" 00:17:36.747 ], 00:17:36.747 "product_name": "passthru", 00:17:36.747 "block_size": 512, 00:17:36.747 "num_blocks": 65536, 00:17:36.747 "uuid": "d45ffd3b-6493-5aed-bcf9-fe33e1742042", 00:17:36.747 "assigned_rate_limits": { 00:17:36.747 "rw_ios_per_sec": 0, 00:17:36.747 "rw_mbytes_per_sec": 0, 00:17:36.747 "r_mbytes_per_sec": 0, 00:17:36.747 "w_mbytes_per_sec": 0 00:17:36.747 }, 00:17:36.747 "claimed": true, 00:17:36.747 "claim_type": "exclusive_write", 00:17:36.747 "zoned": false, 00:17:36.747 "supported_io_types": { 00:17:36.747 "read": true, 00:17:36.747 "write": true, 00:17:36.747 "unmap": true, 00:17:36.747 "write_zeroes": true, 00:17:36.747 "flush": true, 00:17:36.747 "reset": true, 00:17:36.747 "compare": false, 00:17:36.747 "compare_and_write": false, 00:17:36.747 "abort": true, 00:17:36.747 "nvme_admin": false, 00:17:36.747 "nvme_io": false 00:17:36.747 }, 00:17:36.747 "memory_domains": [ 00:17:36.747 { 00:17:36.747 "dma_device_id": "system", 00:17:36.747 "dma_device_type": 1 00:17:36.747 }, 00:17:36.747 { 00:17:36.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.747 "dma_device_type": 2 00:17:36.747 } 00:17:36.747 ], 00:17:36.747 "driver_specific": { 00:17:36.747 "passthru": { 00:17:36.747 "name": "pt3", 00:17:36.747 "base_bdev_name": "malloc3" 00:17:36.747 } 00:17:36.747 } 00:17:36.747 }' 00:17:36.747 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.747 11:59:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:36.747 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:17:36.747 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.747 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:36.747 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:36.747 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.005 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.005 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:37.005 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.005 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.005 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:37.005 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:37.005 11:59:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:37.263 [2024-07-21 11:59:36.011940] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' cb28d6c5-9cd6-4434-85a0-0b5365359d0a '!=' cb28d6c5-9cd6-4434-85a0-0b5365359d0a ']' 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 137783 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 137783 ']' 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 137783 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 137783 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 137783' 00:17:37.263 killing process with pid 137783 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 137783 00:17:37.263 [2024-07-21 11:59:36.056049] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.263 [2024-07-21 11:59:36.056156] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.263 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 137783 00:17:37.263 [2024-07-21 11:59:36.056226] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.263 
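The repeated jq checks above are verify_raid_bdev_properties at work: for every configured base bdev it pulls the bdev description and asserts that the data path carries plain 512-byte blocks with no metadata or DIF. Condensed, the per-leg check performed on pt1, pt2 and pt3 amounts to the following sketch (same socket as in the trace, RPC shorthand as before):

    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for name in pt1 pt2 pt3; do
        info=$($RPC bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$info") == 512  ]]   # 512-byte data blocks
        [[ $(jq .md_size       <<< "$info") == null ]]   # no separate metadata area
        [[ $(jq .md_interleave <<< "$info") == null ]]   # metadata not interleaved
        [[ $(jq .dif_type      <<< "$info") == null ]]   # no DIF protection
    done
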
[2024-07-21 11:59:36.056240] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:17:37.263 [2024-07-21 11:59:36.085685] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.522 11:59:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:17:37.522 00:17:37.522 real 0m15.192s 00:17:37.522 user 0m28.216s 00:17:37.522 sys 0m1.969s 00:17:37.522 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:37.522 11:59:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.522 ************************************ 00:17:37.522 END TEST raid_superblock_test 00:17:37.522 ************************************ 00:17:37.522 11:59:36 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:17:37.522 11:59:36 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:37.522 11:59:36 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.522 11:59:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:37.522 ************************************ 00:17:37.522 START TEST raid_read_error_test 00:17:37.522 ************************************ 00:17:37.522 11:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 3 read 00:17:37.522 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:37.522 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:37.522 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.gMt7DB75r3 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=138271 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 138271 /var/tmp/spdk-raid.sock 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 138271 ']' 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.780 11:59:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.780 [2024-07-21 11:59:36.456517] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:37.780 [2024-07-21 11:59:36.457495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138271 ] 00:17:37.780 [2024-07-21 11:59:36.623309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.039 [2024-07-21 11:59:36.709511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.039 [2024-07-21 11:59:36.764659] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:38.604 11:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.604 11:59:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:17:38.604 11:59:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:38.604 11:59:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:38.862 BaseBdev1_malloc 00:17:38.862 11:59:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:39.121 true 00:17:39.121 11:59:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:39.379 [2024-07-21 11:59:38.145064] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:39.379 [2024-07-21 11:59:38.145233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.379 [2024-07-21 11:59:38.145289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:39.379 [2024-07-21 11:59:38.145340] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.379 [2024-07-21 11:59:38.148190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.379 [2024-07-21 11:59:38.148261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.379 BaseBdev1 00:17:39.379 11:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:39.379 11:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:39.636 BaseBdev2_malloc 00:17:39.636 11:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:39.894 true 00:17:39.894 11:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:40.151 [2024-07-21 11:59:38.803912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:40.151 [2024-07-21 11:59:38.804057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.151 [2024-07-21 11:59:38.804128] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:17:40.151 [2024-07-21 11:59:38.804171] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.151 [2024-07-21 11:59:38.806766] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.151 [2024-07-21 11:59:38.806840] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:40.151 BaseBdev2 00:17:40.151 11:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:40.151 11:59:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:40.408 BaseBdev3_malloc 00:17:40.408 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:40.408 true 00:17:40.408 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:40.666 [2024-07-21 11:59:39.458029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:40.666 [2024-07-21 11:59:39.458177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.666 [2024-07-21 11:59:39.458226] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:40.666 [2024-07-21 11:59:39.458277] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.666 [2024-07-21 11:59:39.461207] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.666 [2024-07-21 11:59:39.461319] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:40.666 BaseBdev3 00:17:40.666 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:40.924 [2024-07-21 11:59:39.714244] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.924 [2024-07-21 11:59:39.716668] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:40.924 [2024-07-21 11:59:39.716796] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:40.924 [2024-07-21 11:59:39.717143] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:17:40.924 [2024-07-21 11:59:39.717171] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:40.924 [2024-07-21 11:59:39.717345] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:40.924 [2024-07-21 11:59:39.717866] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:17:40.924 [2024-07-21 11:59:39.717889] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:17:40.924 [2024-07-21 11:59:39.718196] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.924 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.182 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.182 "name": "raid_bdev1", 00:17:41.182 "uuid": "a479ecfa-c904-4c49-b9f3-dfc4cbd9e12c", 00:17:41.182 "strip_size_kb": 64, 00:17:41.182 "state": "online", 00:17:41.182 "raid_level": "raid0", 00:17:41.182 "superblock": true, 00:17:41.182 "num_base_bdevs": 3, 00:17:41.182 "num_base_bdevs_discovered": 3, 00:17:41.182 "num_base_bdevs_operational": 3, 00:17:41.182 "base_bdevs_list": [ 00:17:41.182 { 00:17:41.182 "name": "BaseBdev1", 00:17:41.182 "uuid": "cc476f56-deb7-50bd-a6fb-abb62b23689a", 00:17:41.182 "is_configured": true, 00:17:41.182 "data_offset": 2048, 00:17:41.182 "data_size": 63488 00:17:41.182 }, 00:17:41.182 { 00:17:41.182 "name": "BaseBdev2", 00:17:41.182 "uuid": "e5eed8e3-af3f-5eb3-991b-c38c69935691", 00:17:41.182 "is_configured": true, 00:17:41.182 "data_offset": 2048, 00:17:41.182 "data_size": 63488 00:17:41.182 }, 00:17:41.182 { 00:17:41.182 "name": "BaseBdev3", 00:17:41.182 "uuid": "4b275f84-1e7f-5f7c-8658-6714d586eb7e", 00:17:41.182 "is_configured": true, 00:17:41.182 "data_offset": 2048, 00:17:41.182 "data_size": 63488 00:17:41.182 } 00:17:41.182 ] 00:17:41.182 }' 00:17:41.182 11:59:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.182 11:59:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:41.747 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:41.747 11:59:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:42.004 [2024-07-21 11:59:40.691380] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:17:42.935 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 3 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.192 11:59:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.449 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.449 "name": "raid_bdev1", 00:17:43.449 "uuid": "a479ecfa-c904-4c49-b9f3-dfc4cbd9e12c", 00:17:43.449 "strip_size_kb": 64, 00:17:43.449 "state": "online", 00:17:43.449 "raid_level": "raid0", 00:17:43.449 "superblock": true, 00:17:43.449 "num_base_bdevs": 3, 00:17:43.449 "num_base_bdevs_discovered": 3, 00:17:43.449 "num_base_bdevs_operational": 3, 00:17:43.449 "base_bdevs_list": [ 00:17:43.449 { 00:17:43.449 "name": "BaseBdev1", 00:17:43.449 "uuid": "cc476f56-deb7-50bd-a6fb-abb62b23689a", 00:17:43.450 "is_configured": true, 00:17:43.450 "data_offset": 2048, 00:17:43.450 "data_size": 63488 00:17:43.450 }, 00:17:43.450 { 00:17:43.450 "name": "BaseBdev2", 00:17:43.450 "uuid": "e5eed8e3-af3f-5eb3-991b-c38c69935691", 00:17:43.450 "is_configured": true, 00:17:43.450 "data_offset": 2048, 00:17:43.450 "data_size": 63488 00:17:43.450 }, 00:17:43.450 { 00:17:43.450 "name": "BaseBdev3", 00:17:43.450 "uuid": "4b275f84-1e7f-5f7c-8658-6714d586eb7e", 00:17:43.450 "is_configured": true, 00:17:43.450 "data_offset": 2048, 00:17:43.450 "data_size": 63488 00:17:43.450 } 00:17:43.450 ] 00:17:43.450 }' 00:17:43.450 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:43.450 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.048 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:44.320 [2024-07-21 11:59:42.934615] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.320 [2024-07-21 11:59:42.934674] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:44.320 [2024-07-21 11:59:42.937771] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:44.320 [2024-07-21 11:59:42.937859] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.320 [2024-07-21 11:59:42.937903] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:44.320 [2024-07-21 
11:59:42.937914] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:17:44.320 0 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 138271 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 138271 ']' 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 138271 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 138271 00:17:44.320 killing process with pid 138271 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 138271' 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 138271 00:17:44.320 11:59:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 138271 00:17:44.320 [2024-07-21 11:59:42.981339] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:44.320 [2024-07-21 11:59:43.006876] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.gMt7DB75r3 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:44.579 ************************************ 00:17:44.579 END TEST raid_read_error_test 00:17:44.579 ************************************ 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.45 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.45 != \0\.\0\0 ]] 00:17:44.579 00:17:44.579 real 0m6.886s 00:17:44.579 user 0m11.123s 00:17:44.579 sys 0m0.857s 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:44.579 11:59:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 11:59:43 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:17:44.579 11:59:43 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:44.579 11:59:43 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:44.579 11:59:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.579 ************************************ 00:17:44.579 START TEST raid_write_error_test 00:17:44.579 ************************************ 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 3 write 00:17:44.579 11:59:43 
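The raid_read_error_test that finishes here stacks each leg as malloc -> error -> passthru, assembles a raid0 volume with a superblock on top, and then injects read failures into the first leg while bdevperf drives random I/O; the 0.45 figure above is the resulting failures-per-second pulled out of the bdevperf log with grep/awk. A compressed sketch of the RPC sequence issued against the bdevperf instance listening on /var/tmp/spdk-raid.sock (RPC shorthand again only for brevity):

    RPC="scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # each leg is stacked so that failures can be injected into the error bdev later
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        $RPC bdev_error_create "BaseBdev${i}_malloc"          # exposes EE_BaseBdev<i>_malloc
        $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # once bdevperf is running its workload, start failing reads on the first leg
    $RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
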
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:17:44.579 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.egKOStL81q 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=138466 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 138466 /var/tmp/spdk-raid.sock 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 138466 ']' 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:44.580 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:44.580 11:59:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:44.580 [2024-07-21 11:59:43.400552] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:44.580 [2024-07-21 11:59:43.400804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138466 ] 00:17:44.839 [2024-07-21 11:59:43.566363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.839 [2024-07-21 11:59:43.657955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:45.096 [2024-07-21 11:59:43.713235] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:45.661 11:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:45.661 11:59:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:17:45.661 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:45.661 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:45.919 BaseBdev1_malloc 00:17:45.919 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:17:46.177 true 00:17:46.177 11:59:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:17:46.434 [2024-07-21 11:59:45.087564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:17:46.434 [2024-07-21 11:59:45.087702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.434 [2024-07-21 11:59:45.087749] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:46.434 [2024-07-21 11:59:45.087799] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.434 [2024-07-21 11:59:45.090845] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.434 [2024-07-21 11:59:45.090908] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:46.434 BaseBdev1 00:17:46.434 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:46.434 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:46.692 BaseBdev2_malloc 00:17:46.692 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:17:46.950 true 00:17:46.950 11:59:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:17:46.950 [2024-07-21 11:59:45.786561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:17:46.950 [2024-07-21 11:59:45.786707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.950 [2024-07-21 11:59:45.786778] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:17:46.950 [2024-07-21 11:59:45.786820] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.950 [2024-07-21 11:59:45.789379] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.950 [2024-07-21 11:59:45.789451] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:46.950 BaseBdev2 00:17:46.950 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:17:46.950 11:59:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:47.208 BaseBdev3_malloc 00:17:47.208 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:17:47.466 true 00:17:47.466 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:17:47.724 [2024-07-21 11:59:46.503987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:17:47.724 [2024-07-21 11:59:46.504114] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:47.724 [2024-07-21 11:59:46.504161] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:47.724 [2024-07-21 11:59:46.504213] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:47.724 [2024-07-21 11:59:46.506804] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:47.724 [2024-07-21 11:59:46.506860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:47.724 BaseBdev3 00:17:47.724 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:17:47.983 [2024-07-21 11:59:46.728174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:47.983 [2024-07-21 11:59:46.730564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:47.983 [2024-07-21 11:59:46.730793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:47.983 [2024-07-21 11:59:46.731071] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:17:47.983 [2024-07-21 11:59:46.731105] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:47.983 [2024-07-21 11:59:46.731246] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:17:47.983 [2024-07-21 11:59:46.731721] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:17:47.983 [2024-07-21 11:59:46.731761] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:17:47.983 [2024-07-21 11:59:46.732061] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.983 11:59:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.241 11:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:48.241 "name": "raid_bdev1", 00:17:48.241 "uuid": "4614dd58-8da8-4da1-bc7c-97eebf57f337", 00:17:48.241 "strip_size_kb": 64, 00:17:48.241 "state": "online", 00:17:48.241 "raid_level": "raid0", 00:17:48.241 "superblock": true, 00:17:48.241 "num_base_bdevs": 3, 00:17:48.241 "num_base_bdevs_discovered": 3, 00:17:48.241 "num_base_bdevs_operational": 3, 00:17:48.241 "base_bdevs_list": [ 00:17:48.241 { 00:17:48.241 "name": "BaseBdev1", 00:17:48.241 "uuid": "91d517ee-f5f2-541a-b5ad-40de923c6aa9", 00:17:48.241 "is_configured": true, 00:17:48.241 "data_offset": 2048, 00:17:48.241 "data_size": 63488 00:17:48.241 }, 00:17:48.241 { 00:17:48.241 "name": "BaseBdev2", 00:17:48.241 "uuid": "c898ff34-a8be-5017-ada4-fa1af282ccfd", 00:17:48.241 "is_configured": true, 00:17:48.241 "data_offset": 2048, 00:17:48.241 "data_size": 63488 00:17:48.241 }, 00:17:48.241 { 00:17:48.241 "name": "BaseBdev3", 00:17:48.241 "uuid": "26df3c24-856d-5202-9bfa-a2e20b3e0ff0", 00:17:48.241 "is_configured": true, 00:17:48.241 "data_offset": 2048, 00:17:48.241 "data_size": 63488 00:17:48.241 } 00:17:48.241 ] 00:17:48.241 }' 00:17:48.241 11:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:48.241 11:59:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:49.174 11:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:49.174 11:59:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:49.174 [2024-07-21 11:59:47.760786] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005ad0 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.105 11:59:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.362 11:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.362 "name": "raid_bdev1", 00:17:50.362 "uuid": "4614dd58-8da8-4da1-bc7c-97eebf57f337", 00:17:50.362 "strip_size_kb": 64, 00:17:50.362 "state": "online", 00:17:50.362 "raid_level": "raid0", 00:17:50.362 "superblock": true, 00:17:50.362 "num_base_bdevs": 3, 00:17:50.362 "num_base_bdevs_discovered": 3, 00:17:50.362 "num_base_bdevs_operational": 3, 00:17:50.362 "base_bdevs_list": [ 00:17:50.362 { 00:17:50.362 "name": "BaseBdev1", 00:17:50.362 "uuid": "91d517ee-f5f2-541a-b5ad-40de923c6aa9", 00:17:50.362 "is_configured": true, 00:17:50.362 "data_offset": 2048, 00:17:50.362 "data_size": 63488 00:17:50.362 }, 00:17:50.362 { 00:17:50.362 "name": "BaseBdev2", 00:17:50.362 "uuid": "c898ff34-a8be-5017-ada4-fa1af282ccfd", 00:17:50.362 "is_configured": true, 00:17:50.362 "data_offset": 2048, 00:17:50.362 "data_size": 63488 00:17:50.362 }, 00:17:50.362 { 00:17:50.362 "name": "BaseBdev3", 00:17:50.362 "uuid": "26df3c24-856d-5202-9bfa-a2e20b3e0ff0", 00:17:50.362 "is_configured": true, 00:17:50.362 "data_offset": 2048, 00:17:50.362 "data_size": 63488 00:17:50.362 } 00:17:50.362 ] 00:17:50.362 }' 00:17:50.362 11:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.362 11:59:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.295 11:59:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:51.295 [2024-07-21 11:59:50.112070] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:51.295 [2024-07-21 11:59:50.112126] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:51.295 [2024-07-21 11:59:50.115049] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.295 [2024-07-21 11:59:50.115157] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.295 [2024-07-21 11:59:50.115204] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.295 [2024-07-21 11:59:50.115216] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:17:51.295 0 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 138466 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 138466 ']' 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 138466 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 138466 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 138466' 00:17:51.295 killing process with pid 138466 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 138466 00:17:51.295 [2024-07-21 11:59:50.155896] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.295 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 138466 00:17:51.551 [2024-07-21 11:59:50.186479] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.egKOStL81q 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:17:51.809 00:17:51.809 real 0m7.139s 00:17:51.809 user 0m11.730s 00:17:51.809 sys 0m0.767s 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:51.809 11:59:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.809 ************************************ 00:17:51.809 END TEST raid_write_error_test 
00:17:51.809 ************************************ 00:17:51.809 11:59:50 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:17:51.809 11:59:50 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:17:51.809 11:59:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:17:51.809 11:59:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:51.809 11:59:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.809 ************************************ 00:17:51.809 START TEST raid_state_function_test 00:17:51.809 ************************************ 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 false 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:17:51.809 11:59:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=138657 00:17:51.809 Process raid pid: 138657 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 138657' 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 138657 /var/tmp/spdk-raid.sock 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 138657 ']' 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:51.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:51.809 11:59:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:51.809 [2024-07-21 11:59:50.586141] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:51.809 [2024-07-21 11:59:50.586391] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.067 [2024-07-21 11:59:50.755136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.067 [2024-07-21 11:59:50.854430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.067 [2024-07-21 11:59:50.908750] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:53.001 [2024-07-21 11:59:51.828925] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.001 [2024-07-21 11:59:51.829029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.001 [2024-07-21 11:59:51.829044] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.001 [2024-07-21 11:59:51.829068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.001 [2024-07-21 11:59:51.829079] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:53.001 [2024-07-21 11:59:51.829120] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.001 11:59:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.001 11:59:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.567 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.567 "name": "Existed_Raid", 00:17:53.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.567 "strip_size_kb": 64, 00:17:53.567 "state": "configuring", 00:17:53.567 "raid_level": "concat", 00:17:53.567 "superblock": false, 00:17:53.567 "num_base_bdevs": 3, 00:17:53.567 "num_base_bdevs_discovered": 0, 00:17:53.567 "num_base_bdevs_operational": 3, 00:17:53.567 "base_bdevs_list": [ 00:17:53.567 { 00:17:53.567 "name": "BaseBdev1", 00:17:53.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.567 "is_configured": false, 00:17:53.567 "data_offset": 0, 00:17:53.567 "data_size": 0 00:17:53.567 }, 00:17:53.567 { 00:17:53.567 "name": "BaseBdev2", 00:17:53.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.567 "is_configured": false, 00:17:53.567 "data_offset": 0, 00:17:53.567 "data_size": 0 00:17:53.567 }, 00:17:53.567 { 00:17:53.567 "name": "BaseBdev3", 00:17:53.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.567 "is_configured": false, 00:17:53.567 "data_offset": 0, 00:17:53.567 "data_size": 0 00:17:53.567 } 00:17:53.567 ] 00:17:53.567 }' 00:17:53.567 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.567 11:59:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:54.133 11:59:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.133 [2024-07-21 11:59:52.997072] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.133 [2024-07-21 11:59:52.997143] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:17:54.391 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
Existed_Raid 00:17:54.649 [2024-07-21 11:59:53.261077] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:54.649 [2024-07-21 11:59:53.261185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:54.649 [2024-07-21 11:59:53.261200] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:54.649 [2024-07-21 11:59:53.261220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:54.649 [2024-07-21 11:59:53.261228] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:54.649 [2024-07-21 11:59:53.261254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:54.649 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:54.649 [2024-07-21 11:59:53.496313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.649 BaseBdev1 00:17:54.649 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:54.649 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:17:54.649 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:54.649 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:54.649 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:54.649 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:54.649 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:54.906 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.164 [ 00:17:55.164 { 00:17:55.164 "name": "BaseBdev1", 00:17:55.164 "aliases": [ 00:17:55.164 "ef5e11dd-ab16-4097-a9a5-6a3bc2506463" 00:17:55.164 ], 00:17:55.164 "product_name": "Malloc disk", 00:17:55.164 "block_size": 512, 00:17:55.164 "num_blocks": 65536, 00:17:55.164 "uuid": "ef5e11dd-ab16-4097-a9a5-6a3bc2506463", 00:17:55.164 "assigned_rate_limits": { 00:17:55.164 "rw_ios_per_sec": 0, 00:17:55.164 "rw_mbytes_per_sec": 0, 00:17:55.164 "r_mbytes_per_sec": 0, 00:17:55.164 "w_mbytes_per_sec": 0 00:17:55.164 }, 00:17:55.164 "claimed": true, 00:17:55.164 "claim_type": "exclusive_write", 00:17:55.164 "zoned": false, 00:17:55.164 "supported_io_types": { 00:17:55.164 "read": true, 00:17:55.164 "write": true, 00:17:55.164 "unmap": true, 00:17:55.164 "write_zeroes": true, 00:17:55.164 "flush": true, 00:17:55.164 "reset": true, 00:17:55.164 "compare": false, 00:17:55.164 "compare_and_write": false, 00:17:55.164 "abort": true, 00:17:55.164 "nvme_admin": false, 00:17:55.164 "nvme_io": false 00:17:55.164 }, 00:17:55.164 "memory_domains": [ 00:17:55.164 { 00:17:55.164 "dma_device_id": "system", 00:17:55.164 "dma_device_type": 1 00:17:55.164 }, 00:17:55.164 { 00:17:55.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.164 "dma_device_type": 2 00:17:55.164 } 00:17:55.164 ], 00:17:55.164 "driver_specific": {} 00:17:55.164 } 
00:17:55.164 ] 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.164 11:59:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.422 11:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.422 "name": "Existed_Raid", 00:17:55.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.422 "strip_size_kb": 64, 00:17:55.422 "state": "configuring", 00:17:55.422 "raid_level": "concat", 00:17:55.422 "superblock": false, 00:17:55.422 "num_base_bdevs": 3, 00:17:55.422 "num_base_bdevs_discovered": 1, 00:17:55.422 "num_base_bdevs_operational": 3, 00:17:55.422 "base_bdevs_list": [ 00:17:55.422 { 00:17:55.422 "name": "BaseBdev1", 00:17:55.422 "uuid": "ef5e11dd-ab16-4097-a9a5-6a3bc2506463", 00:17:55.422 "is_configured": true, 00:17:55.422 "data_offset": 0, 00:17:55.422 "data_size": 65536 00:17:55.422 }, 00:17:55.422 { 00:17:55.422 "name": "BaseBdev2", 00:17:55.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.422 "is_configured": false, 00:17:55.422 "data_offset": 0, 00:17:55.422 "data_size": 0 00:17:55.422 }, 00:17:55.422 { 00:17:55.422 "name": "BaseBdev3", 00:17:55.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.422 "is_configured": false, 00:17:55.422 "data_offset": 0, 00:17:55.422 "data_size": 0 00:17:55.422 } 00:17:55.422 ] 00:17:55.422 }' 00:17:55.422 11:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.422 11:59:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:55.987 11:59:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:56.245 [2024-07-21 11:59:55.028745] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:56.245 [2024-07-21 11:59:55.028840] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:56.245 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:17:56.502 [2024-07-21 11:59:55.292839] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.502 [2024-07-21 11:59:55.295079] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:56.503 [2024-07-21 11:59:55.295154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:56.503 [2024-07-21 11:59:55.295167] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:56.503 [2024-07-21 11:59:55.295213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.503 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.760 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.760 "name": "Existed_Raid", 00:17:56.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.760 "strip_size_kb": 64, 00:17:56.760 "state": "configuring", 00:17:56.760 "raid_level": "concat", 00:17:56.760 "superblock": false, 00:17:56.760 "num_base_bdevs": 3, 00:17:56.760 "num_base_bdevs_discovered": 1, 00:17:56.760 "num_base_bdevs_operational": 3, 00:17:56.760 "base_bdevs_list": [ 00:17:56.760 { 00:17:56.760 "name": "BaseBdev1", 00:17:56.760 "uuid": "ef5e11dd-ab16-4097-a9a5-6a3bc2506463", 00:17:56.760 "is_configured": true, 00:17:56.760 "data_offset": 0, 00:17:56.760 "data_size": 65536 00:17:56.760 }, 00:17:56.760 { 00:17:56.760 "name": "BaseBdev2", 00:17:56.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.760 "is_configured": false, 00:17:56.761 "data_offset": 0, 00:17:56.761 "data_size": 0 00:17:56.761 }, 00:17:56.761 { 00:17:56.761 "name": "BaseBdev3", 00:17:56.761 "uuid": "00000000-0000-0000-0000-000000000000", 
00:17:56.761 "is_configured": false, 00:17:56.761 "data_offset": 0, 00:17:56.761 "data_size": 0 00:17:56.761 } 00:17:56.761 ] 00:17:56.761 }' 00:17:56.761 11:59:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.761 11:59:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:57.325 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:57.583 [2024-07-21 11:59:56.402545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:57.583 BaseBdev2 00:17:57.583 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:57.583 11:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:17:57.583 11:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:57.583 11:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:57.583 11:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:57.583 11:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:57.583 11:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.148 11:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:58.148 [ 00:17:58.148 { 00:17:58.148 "name": "BaseBdev2", 00:17:58.148 "aliases": [ 00:17:58.148 "8cd53695-5942-4b41-810d-06a2561d48df" 00:17:58.148 ], 00:17:58.148 "product_name": "Malloc disk", 00:17:58.148 "block_size": 512, 00:17:58.148 "num_blocks": 65536, 00:17:58.148 "uuid": "8cd53695-5942-4b41-810d-06a2561d48df", 00:17:58.148 "assigned_rate_limits": { 00:17:58.148 "rw_ios_per_sec": 0, 00:17:58.148 "rw_mbytes_per_sec": 0, 00:17:58.148 "r_mbytes_per_sec": 0, 00:17:58.149 "w_mbytes_per_sec": 0 00:17:58.149 }, 00:17:58.149 "claimed": true, 00:17:58.149 "claim_type": "exclusive_write", 00:17:58.149 "zoned": false, 00:17:58.149 "supported_io_types": { 00:17:58.149 "read": true, 00:17:58.149 "write": true, 00:17:58.149 "unmap": true, 00:17:58.149 "write_zeroes": true, 00:17:58.149 "flush": true, 00:17:58.149 "reset": true, 00:17:58.149 "compare": false, 00:17:58.149 "compare_and_write": false, 00:17:58.149 "abort": true, 00:17:58.149 "nvme_admin": false, 00:17:58.149 "nvme_io": false 00:17:58.149 }, 00:17:58.149 "memory_domains": [ 00:17:58.149 { 00:17:58.149 "dma_device_id": "system", 00:17:58.149 "dma_device_type": 1 00:17:58.149 }, 00:17:58.149 { 00:17:58.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.149 "dma_device_type": 2 00:17:58.149 } 00:17:58.149 ], 00:17:58.149 "driver_specific": {} 00:17:58.149 } 00:17:58.149 ] 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
configuring concat 64 3 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.149 11:59:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.406 11:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.406 "name": "Existed_Raid", 00:17:58.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.406 "strip_size_kb": 64, 00:17:58.406 "state": "configuring", 00:17:58.406 "raid_level": "concat", 00:17:58.406 "superblock": false, 00:17:58.406 "num_base_bdevs": 3, 00:17:58.406 "num_base_bdevs_discovered": 2, 00:17:58.406 "num_base_bdevs_operational": 3, 00:17:58.406 "base_bdevs_list": [ 00:17:58.406 { 00:17:58.406 "name": "BaseBdev1", 00:17:58.406 "uuid": "ef5e11dd-ab16-4097-a9a5-6a3bc2506463", 00:17:58.406 "is_configured": true, 00:17:58.406 "data_offset": 0, 00:17:58.406 "data_size": 65536 00:17:58.406 }, 00:17:58.406 { 00:17:58.406 "name": "BaseBdev2", 00:17:58.406 "uuid": "8cd53695-5942-4b41-810d-06a2561d48df", 00:17:58.406 "is_configured": true, 00:17:58.406 "data_offset": 0, 00:17:58.406 "data_size": 65536 00:17:58.406 }, 00:17:58.406 { 00:17:58.406 "name": "BaseBdev3", 00:17:58.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.406 "is_configured": false, 00:17:58.406 "data_offset": 0, 00:17:58.406 "data_size": 0 00:17:58.406 } 00:17:58.406 ] 00:17:58.406 }' 00:17:58.406 11:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.406 11:59:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:58.970 11:59:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:59.227 [2024-07-21 11:59:58.032347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:59.227 [2024-07-21 11:59:58.032433] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:17:59.227 [2024-07-21 11:59:58.032446] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:59.227 [2024-07-21 11:59:58.032593] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:59.227 [2024-07-21 11:59:58.033050] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x616000006f80 00:17:59.227 [2024-07-21 11:59:58.033076] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:17:59.227 [2024-07-21 11:59:58.033365] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.227 BaseBdev3 00:17:59.227 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:17:59.227 11:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:17:59.227 11:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:59.227 11:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:17:59.227 11:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:59.227 11:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:59.227 11:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.484 11:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:59.742 [ 00:17:59.742 { 00:17:59.742 "name": "BaseBdev3", 00:17:59.742 "aliases": [ 00:17:59.742 "7de2eea8-9eb4-46e0-9a2c-26bfabc1ca5a" 00:17:59.742 ], 00:17:59.742 "product_name": "Malloc disk", 00:17:59.742 "block_size": 512, 00:17:59.742 "num_blocks": 65536, 00:17:59.742 "uuid": "7de2eea8-9eb4-46e0-9a2c-26bfabc1ca5a", 00:17:59.742 "assigned_rate_limits": { 00:17:59.742 "rw_ios_per_sec": 0, 00:17:59.742 "rw_mbytes_per_sec": 0, 00:17:59.742 "r_mbytes_per_sec": 0, 00:17:59.742 "w_mbytes_per_sec": 0 00:17:59.742 }, 00:17:59.742 "claimed": true, 00:17:59.742 "claim_type": "exclusive_write", 00:17:59.742 "zoned": false, 00:17:59.742 "supported_io_types": { 00:17:59.742 "read": true, 00:17:59.742 "write": true, 00:17:59.742 "unmap": true, 00:17:59.742 "write_zeroes": true, 00:17:59.742 "flush": true, 00:17:59.742 "reset": true, 00:17:59.742 "compare": false, 00:17:59.742 "compare_and_write": false, 00:17:59.742 "abort": true, 00:17:59.742 "nvme_admin": false, 00:17:59.742 "nvme_io": false 00:17:59.742 }, 00:17:59.742 "memory_domains": [ 00:17:59.742 { 00:17:59.742 "dma_device_id": "system", 00:17:59.742 "dma_device_type": 1 00:17:59.742 }, 00:17:59.742 { 00:17:59.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.742 "dma_device_type": 2 00:17:59.742 } 00:17:59.742 ], 00:17:59.742 "driver_specific": {} 00:17:59.742 } 00:17:59.742 ] 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 
00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.742 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.000 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:00.000 "name": "Existed_Raid", 00:18:00.000 "uuid": "af4ac885-0617-4ebe-9ed6-20e7405f5daa", 00:18:00.001 "strip_size_kb": 64, 00:18:00.001 "state": "online", 00:18:00.001 "raid_level": "concat", 00:18:00.001 "superblock": false, 00:18:00.001 "num_base_bdevs": 3, 00:18:00.001 "num_base_bdevs_discovered": 3, 00:18:00.001 "num_base_bdevs_operational": 3, 00:18:00.001 "base_bdevs_list": [ 00:18:00.001 { 00:18:00.001 "name": "BaseBdev1", 00:18:00.001 "uuid": "ef5e11dd-ab16-4097-a9a5-6a3bc2506463", 00:18:00.001 "is_configured": true, 00:18:00.001 "data_offset": 0, 00:18:00.001 "data_size": 65536 00:18:00.001 }, 00:18:00.001 { 00:18:00.001 "name": "BaseBdev2", 00:18:00.001 "uuid": "8cd53695-5942-4b41-810d-06a2561d48df", 00:18:00.001 "is_configured": true, 00:18:00.001 "data_offset": 0, 00:18:00.001 "data_size": 65536 00:18:00.001 }, 00:18:00.001 { 00:18:00.001 "name": "BaseBdev3", 00:18:00.001 "uuid": "7de2eea8-9eb4-46e0-9a2c-26bfabc1ca5a", 00:18:00.001 "is_configured": true, 00:18:00.001 "data_offset": 0, 00:18:00.001 "data_size": 65536 00:18:00.001 } 00:18:00.001 ] 00:18:00.001 }' 00:18:00.001 11:59:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:00.001 11:59:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:00.940 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:00.940 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:00.940 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:00.940 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:00.940 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:00.940 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:00.940 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:00.940 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:00.940 [2024-07-21 11:59:59.679490] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.940 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # 
raid_bdev_info='{ 00:18:00.940 "name": "Existed_Raid", 00:18:00.940 "aliases": [ 00:18:00.940 "af4ac885-0617-4ebe-9ed6-20e7405f5daa" 00:18:00.940 ], 00:18:00.940 "product_name": "Raid Volume", 00:18:00.940 "block_size": 512, 00:18:00.940 "num_blocks": 196608, 00:18:00.940 "uuid": "af4ac885-0617-4ebe-9ed6-20e7405f5daa", 00:18:00.940 "assigned_rate_limits": { 00:18:00.940 "rw_ios_per_sec": 0, 00:18:00.940 "rw_mbytes_per_sec": 0, 00:18:00.940 "r_mbytes_per_sec": 0, 00:18:00.940 "w_mbytes_per_sec": 0 00:18:00.940 }, 00:18:00.940 "claimed": false, 00:18:00.941 "zoned": false, 00:18:00.941 "supported_io_types": { 00:18:00.941 "read": true, 00:18:00.941 "write": true, 00:18:00.941 "unmap": true, 00:18:00.941 "write_zeroes": true, 00:18:00.941 "flush": true, 00:18:00.941 "reset": true, 00:18:00.941 "compare": false, 00:18:00.941 "compare_and_write": false, 00:18:00.941 "abort": false, 00:18:00.941 "nvme_admin": false, 00:18:00.941 "nvme_io": false 00:18:00.941 }, 00:18:00.941 "memory_domains": [ 00:18:00.941 { 00:18:00.941 "dma_device_id": "system", 00:18:00.941 "dma_device_type": 1 00:18:00.941 }, 00:18:00.941 { 00:18:00.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.941 "dma_device_type": 2 00:18:00.941 }, 00:18:00.941 { 00:18:00.941 "dma_device_id": "system", 00:18:00.941 "dma_device_type": 1 00:18:00.941 }, 00:18:00.941 { 00:18:00.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.941 "dma_device_type": 2 00:18:00.941 }, 00:18:00.941 { 00:18:00.941 "dma_device_id": "system", 00:18:00.941 "dma_device_type": 1 00:18:00.941 }, 00:18:00.941 { 00:18:00.941 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.941 "dma_device_type": 2 00:18:00.941 } 00:18:00.941 ], 00:18:00.941 "driver_specific": { 00:18:00.941 "raid": { 00:18:00.941 "uuid": "af4ac885-0617-4ebe-9ed6-20e7405f5daa", 00:18:00.941 "strip_size_kb": 64, 00:18:00.941 "state": "online", 00:18:00.941 "raid_level": "concat", 00:18:00.941 "superblock": false, 00:18:00.941 "num_base_bdevs": 3, 00:18:00.941 "num_base_bdevs_discovered": 3, 00:18:00.941 "num_base_bdevs_operational": 3, 00:18:00.941 "base_bdevs_list": [ 00:18:00.941 { 00:18:00.941 "name": "BaseBdev1", 00:18:00.941 "uuid": "ef5e11dd-ab16-4097-a9a5-6a3bc2506463", 00:18:00.941 "is_configured": true, 00:18:00.941 "data_offset": 0, 00:18:00.941 "data_size": 65536 00:18:00.941 }, 00:18:00.941 { 00:18:00.941 "name": "BaseBdev2", 00:18:00.941 "uuid": "8cd53695-5942-4b41-810d-06a2561d48df", 00:18:00.941 "is_configured": true, 00:18:00.941 "data_offset": 0, 00:18:00.941 "data_size": 65536 00:18:00.941 }, 00:18:00.941 { 00:18:00.941 "name": "BaseBdev3", 00:18:00.941 "uuid": "7de2eea8-9eb4-46e0-9a2c-26bfabc1ca5a", 00:18:00.941 "is_configured": true, 00:18:00.941 "data_offset": 0, 00:18:00.941 "data_size": 65536 00:18:00.941 } 00:18:00.941 ] 00:18:00.941 } 00:18:00.941 } 00:18:00.941 }' 00:18:00.941 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.941 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:00.941 BaseBdev2 00:18:00.941 BaseBdev3' 00:18:00.941 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.941 11:59:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:00.941 11:59:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.199 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.199 "name": "BaseBdev1", 00:18:01.199 "aliases": [ 00:18:01.199 "ef5e11dd-ab16-4097-a9a5-6a3bc2506463" 00:18:01.199 ], 00:18:01.199 "product_name": "Malloc disk", 00:18:01.199 "block_size": 512, 00:18:01.199 "num_blocks": 65536, 00:18:01.199 "uuid": "ef5e11dd-ab16-4097-a9a5-6a3bc2506463", 00:18:01.199 "assigned_rate_limits": { 00:18:01.199 "rw_ios_per_sec": 0, 00:18:01.199 "rw_mbytes_per_sec": 0, 00:18:01.199 "r_mbytes_per_sec": 0, 00:18:01.199 "w_mbytes_per_sec": 0 00:18:01.199 }, 00:18:01.199 "claimed": true, 00:18:01.199 "claim_type": "exclusive_write", 00:18:01.199 "zoned": false, 00:18:01.199 "supported_io_types": { 00:18:01.199 "read": true, 00:18:01.199 "write": true, 00:18:01.199 "unmap": true, 00:18:01.199 "write_zeroes": true, 00:18:01.199 "flush": true, 00:18:01.199 "reset": true, 00:18:01.199 "compare": false, 00:18:01.199 "compare_and_write": false, 00:18:01.199 "abort": true, 00:18:01.199 "nvme_admin": false, 00:18:01.199 "nvme_io": false 00:18:01.199 }, 00:18:01.199 "memory_domains": [ 00:18:01.199 { 00:18:01.199 "dma_device_id": "system", 00:18:01.199 "dma_device_type": 1 00:18:01.199 }, 00:18:01.199 { 00:18:01.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.199 "dma_device_type": 2 00:18:01.199 } 00:18:01.199 ], 00:18:01.199 "driver_specific": {} 00:18:01.199 }' 00:18:01.199 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.457 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.457 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.457 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.457 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.457 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:01.457 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.457 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.716 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:01.716 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.716 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.716 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:01.716 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:01.716 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:01.716 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:01.974 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:01.974 "name": "BaseBdev2", 00:18:01.974 "aliases": [ 00:18:01.974 "8cd53695-5942-4b41-810d-06a2561d48df" 00:18:01.974 ], 00:18:01.974 "product_name": "Malloc disk", 00:18:01.974 "block_size": 512, 00:18:01.974 "num_blocks": 65536, 00:18:01.974 "uuid": "8cd53695-5942-4b41-810d-06a2561d48df", 00:18:01.974 "assigned_rate_limits": { 00:18:01.974 
"rw_ios_per_sec": 0, 00:18:01.974 "rw_mbytes_per_sec": 0, 00:18:01.974 "r_mbytes_per_sec": 0, 00:18:01.974 "w_mbytes_per_sec": 0 00:18:01.974 }, 00:18:01.974 "claimed": true, 00:18:01.974 "claim_type": "exclusive_write", 00:18:01.974 "zoned": false, 00:18:01.974 "supported_io_types": { 00:18:01.974 "read": true, 00:18:01.974 "write": true, 00:18:01.974 "unmap": true, 00:18:01.974 "write_zeroes": true, 00:18:01.974 "flush": true, 00:18:01.974 "reset": true, 00:18:01.974 "compare": false, 00:18:01.974 "compare_and_write": false, 00:18:01.974 "abort": true, 00:18:01.974 "nvme_admin": false, 00:18:01.974 "nvme_io": false 00:18:01.974 }, 00:18:01.974 "memory_domains": [ 00:18:01.974 { 00:18:01.974 "dma_device_id": "system", 00:18:01.974 "dma_device_type": 1 00:18:01.974 }, 00:18:01.974 { 00:18:01.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:01.974 "dma_device_type": 2 00:18:01.974 } 00:18:01.974 ], 00:18:01.974 "driver_specific": {} 00:18:01.974 }' 00:18:01.974 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.974 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.974 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:01.974 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.233 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.233 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.233 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.233 12:00:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:02.233 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:02.233 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.233 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:02.503 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:02.503 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:02.503 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:02.503 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:02.762 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:02.762 "name": "BaseBdev3", 00:18:02.762 "aliases": [ 00:18:02.762 "7de2eea8-9eb4-46e0-9a2c-26bfabc1ca5a" 00:18:02.762 ], 00:18:02.762 "product_name": "Malloc disk", 00:18:02.762 "block_size": 512, 00:18:02.762 "num_blocks": 65536, 00:18:02.762 "uuid": "7de2eea8-9eb4-46e0-9a2c-26bfabc1ca5a", 00:18:02.762 "assigned_rate_limits": { 00:18:02.762 "rw_ios_per_sec": 0, 00:18:02.762 "rw_mbytes_per_sec": 0, 00:18:02.762 "r_mbytes_per_sec": 0, 00:18:02.762 "w_mbytes_per_sec": 0 00:18:02.762 }, 00:18:02.762 "claimed": true, 00:18:02.762 "claim_type": "exclusive_write", 00:18:02.762 "zoned": false, 00:18:02.762 "supported_io_types": { 00:18:02.762 "read": true, 00:18:02.762 "write": true, 00:18:02.762 "unmap": true, 00:18:02.762 "write_zeroes": true, 00:18:02.762 "flush": true, 00:18:02.762 "reset": true, 00:18:02.762 "compare": false, 
00:18:02.762 "compare_and_write": false, 00:18:02.762 "abort": true, 00:18:02.762 "nvme_admin": false, 00:18:02.762 "nvme_io": false 00:18:02.762 }, 00:18:02.762 "memory_domains": [ 00:18:02.762 { 00:18:02.762 "dma_device_id": "system", 00:18:02.762 "dma_device_type": 1 00:18:02.762 }, 00:18:02.762 { 00:18:02.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:02.762 "dma_device_type": 2 00:18:02.762 } 00:18:02.762 ], 00:18:02.762 "driver_specific": {} 00:18:02.762 }' 00:18:02.762 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.762 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:02.762 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:02.762 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.762 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:02.762 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:02.762 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.018 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:03.018 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:03.019 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.019 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:03.019 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:03.019 12:00:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:03.275 [2024-07-21 12:00:02.039882] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:03.275 [2024-07-21 12:00:02.039935] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.275 [2024-07-21 12:00:02.040077] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.275 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:03.532 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.532 "name": "Existed_Raid", 00:18:03.532 "uuid": "af4ac885-0617-4ebe-9ed6-20e7405f5daa", 00:18:03.532 "strip_size_kb": 64, 00:18:03.532 "state": "offline", 00:18:03.532 "raid_level": "concat", 00:18:03.532 "superblock": false, 00:18:03.532 "num_base_bdevs": 3, 00:18:03.532 "num_base_bdevs_discovered": 2, 00:18:03.532 "num_base_bdevs_operational": 2, 00:18:03.532 "base_bdevs_list": [ 00:18:03.532 { 00:18:03.532 "name": null, 00:18:03.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.532 "is_configured": false, 00:18:03.532 "data_offset": 0, 00:18:03.532 "data_size": 65536 00:18:03.532 }, 00:18:03.532 { 00:18:03.532 "name": "BaseBdev2", 00:18:03.532 "uuid": "8cd53695-5942-4b41-810d-06a2561d48df", 00:18:03.532 "is_configured": true, 00:18:03.532 "data_offset": 0, 00:18:03.532 "data_size": 65536 00:18:03.532 }, 00:18:03.532 { 00:18:03.532 "name": "BaseBdev3", 00:18:03.532 "uuid": "7de2eea8-9eb4-46e0-9a2c-26bfabc1ca5a", 00:18:03.532 "is_configured": true, 00:18:03.532 "data_offset": 0, 00:18:03.532 "data_size": 65536 00:18:03.532 } 00:18:03.532 ] 00:18:03.532 }' 00:18:03.532 12:00:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.532 12:00:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:04.464 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:04.464 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:04.464 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.464 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:04.464 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:04.464 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.464 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:04.721 [2024-07-21 12:00:03.549797] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:04.721 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:04.721 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:04.980 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.980 12:00:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:04.980 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:04.980 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:04.980 12:00:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:05.237 [2024-07-21 12:00:04.095105] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:05.237 [2024-07-21 12:00:04.095197] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:05.495 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:06.060 BaseBdev2 00:18:06.060 12:00:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:06.060 12:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:06.060 12:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:06.060 12:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:06.060 12:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:06.060 12:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:06.060 12:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.060 12:00:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:06.317 [ 00:18:06.318 { 00:18:06.318 "name": "BaseBdev2", 00:18:06.318 "aliases": [ 00:18:06.318 "5b26a206-87bb-47d7-8931-b9df77967069" 00:18:06.318 ], 00:18:06.318 "product_name": "Malloc disk", 00:18:06.318 "block_size": 512, 00:18:06.318 "num_blocks": 65536, 00:18:06.318 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:06.318 "assigned_rate_limits": { 00:18:06.318 "rw_ios_per_sec": 0, 00:18:06.318 "rw_mbytes_per_sec": 0, 00:18:06.318 "r_mbytes_per_sec": 0, 
00:18:06.318 "w_mbytes_per_sec": 0 00:18:06.318 }, 00:18:06.318 "claimed": false, 00:18:06.318 "zoned": false, 00:18:06.318 "supported_io_types": { 00:18:06.318 "read": true, 00:18:06.318 "write": true, 00:18:06.318 "unmap": true, 00:18:06.318 "write_zeroes": true, 00:18:06.318 "flush": true, 00:18:06.318 "reset": true, 00:18:06.318 "compare": false, 00:18:06.318 "compare_and_write": false, 00:18:06.318 "abort": true, 00:18:06.318 "nvme_admin": false, 00:18:06.318 "nvme_io": false 00:18:06.318 }, 00:18:06.318 "memory_domains": [ 00:18:06.318 { 00:18:06.318 "dma_device_id": "system", 00:18:06.318 "dma_device_type": 1 00:18:06.318 }, 00:18:06.318 { 00:18:06.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:06.318 "dma_device_type": 2 00:18:06.318 } 00:18:06.318 ], 00:18:06.318 "driver_specific": {} 00:18:06.318 } 00:18:06.318 ] 00:18:06.318 12:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:06.318 12:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:06.318 12:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:06.318 12:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:06.575 BaseBdev3 00:18:06.575 12:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:06.575 12:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:06.575 12:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:06.575 12:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:06.575 12:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:06.575 12:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:06.575 12:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:06.833 12:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:07.090 [ 00:18:07.090 { 00:18:07.090 "name": "BaseBdev3", 00:18:07.090 "aliases": [ 00:18:07.090 "3b7be58f-7e36-45af-ad87-0bf1ab046114" 00:18:07.090 ], 00:18:07.090 "product_name": "Malloc disk", 00:18:07.090 "block_size": 512, 00:18:07.090 "num_blocks": 65536, 00:18:07.090 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:07.090 "assigned_rate_limits": { 00:18:07.090 "rw_ios_per_sec": 0, 00:18:07.090 "rw_mbytes_per_sec": 0, 00:18:07.090 "r_mbytes_per_sec": 0, 00:18:07.090 "w_mbytes_per_sec": 0 00:18:07.090 }, 00:18:07.090 "claimed": false, 00:18:07.090 "zoned": false, 00:18:07.090 "supported_io_types": { 00:18:07.090 "read": true, 00:18:07.090 "write": true, 00:18:07.090 "unmap": true, 00:18:07.090 "write_zeroes": true, 00:18:07.090 "flush": true, 00:18:07.090 "reset": true, 00:18:07.090 "compare": false, 00:18:07.090 "compare_and_write": false, 00:18:07.090 "abort": true, 00:18:07.090 "nvme_admin": false, 00:18:07.090 "nvme_io": false 00:18:07.090 }, 00:18:07.090 "memory_domains": [ 00:18:07.090 { 00:18:07.090 "dma_device_id": "system", 00:18:07.090 "dma_device_type": 1 00:18:07.090 }, 
00:18:07.090 { 00:18:07.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.090 "dma_device_type": 2 00:18:07.090 } 00:18:07.090 ], 00:18:07.090 "driver_specific": {} 00:18:07.090 } 00:18:07.090 ] 00:18:07.090 12:00:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:07.090 12:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:07.090 12:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:07.090 12:00:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:07.348 [2024-07-21 12:00:06.001252] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.348 [2024-07-21 12:00:06.001390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.348 [2024-07-21 12:00:06.001464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:07.348 [2024-07-21 12:00:06.003705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.348 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:07.605 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.605 "name": "Existed_Raid", 00:18:07.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.605 "strip_size_kb": 64, 00:18:07.605 "state": "configuring", 00:18:07.605 "raid_level": "concat", 00:18:07.605 "superblock": false, 00:18:07.605 "num_base_bdevs": 3, 00:18:07.605 "num_base_bdevs_discovered": 2, 00:18:07.605 "num_base_bdevs_operational": 3, 00:18:07.605 "base_bdevs_list": [ 00:18:07.605 { 00:18:07.605 "name": "BaseBdev1", 00:18:07.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.605 "is_configured": false, 00:18:07.605 "data_offset": 0, 00:18:07.605 "data_size": 0 00:18:07.605 }, 00:18:07.605 { 00:18:07.605 "name": "BaseBdev2", 00:18:07.605 "uuid": 
"5b26a206-87bb-47d7-8931-b9df77967069", 00:18:07.605 "is_configured": true, 00:18:07.605 "data_offset": 0, 00:18:07.605 "data_size": 65536 00:18:07.605 }, 00:18:07.605 { 00:18:07.605 "name": "BaseBdev3", 00:18:07.605 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:07.605 "is_configured": true, 00:18:07.605 "data_offset": 0, 00:18:07.605 "data_size": 65536 00:18:07.605 } 00:18:07.605 ] 00:18:07.605 }' 00:18:07.605 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.605 12:00:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.172 12:00:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:08.430 [2024-07-21 12:00:07.141433] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.430 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.688 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.688 "name": "Existed_Raid", 00:18:08.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.688 "strip_size_kb": 64, 00:18:08.688 "state": "configuring", 00:18:08.688 "raid_level": "concat", 00:18:08.688 "superblock": false, 00:18:08.688 "num_base_bdevs": 3, 00:18:08.688 "num_base_bdevs_discovered": 1, 00:18:08.688 "num_base_bdevs_operational": 3, 00:18:08.688 "base_bdevs_list": [ 00:18:08.688 { 00:18:08.688 "name": "BaseBdev1", 00:18:08.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.688 "is_configured": false, 00:18:08.688 "data_offset": 0, 00:18:08.688 "data_size": 0 00:18:08.688 }, 00:18:08.688 { 00:18:08.688 "name": null, 00:18:08.688 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:08.688 "is_configured": false, 00:18:08.688 "data_offset": 0, 00:18:08.688 "data_size": 65536 00:18:08.688 }, 00:18:08.688 { 00:18:08.688 "name": "BaseBdev3", 00:18:08.688 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:08.688 "is_configured": true, 00:18:08.688 "data_offset": 0, 00:18:08.688 "data_size": 65536 
00:18:08.688 } 00:18:08.688 ] 00:18:08.688 }' 00:18:08.688 12:00:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.688 12:00:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:09.253 12:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.253 12:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:09.511 12:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:09.511 12:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:09.768 [2024-07-21 12:00:08.560005] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.768 BaseBdev1 00:18:09.768 12:00:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:09.768 12:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:09.768 12:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:09.768 12:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:09.768 12:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:09.768 12:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:09.768 12:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.035 12:00:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:10.309 [ 00:18:10.309 { 00:18:10.309 "name": "BaseBdev1", 00:18:10.309 "aliases": [ 00:18:10.309 "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44" 00:18:10.309 ], 00:18:10.309 "product_name": "Malloc disk", 00:18:10.309 "block_size": 512, 00:18:10.309 "num_blocks": 65536, 00:18:10.309 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:10.309 "assigned_rate_limits": { 00:18:10.309 "rw_ios_per_sec": 0, 00:18:10.309 "rw_mbytes_per_sec": 0, 00:18:10.309 "r_mbytes_per_sec": 0, 00:18:10.309 "w_mbytes_per_sec": 0 00:18:10.309 }, 00:18:10.309 "claimed": true, 00:18:10.309 "claim_type": "exclusive_write", 00:18:10.309 "zoned": false, 00:18:10.309 "supported_io_types": { 00:18:10.309 "read": true, 00:18:10.309 "write": true, 00:18:10.309 "unmap": true, 00:18:10.309 "write_zeroes": true, 00:18:10.309 "flush": true, 00:18:10.309 "reset": true, 00:18:10.309 "compare": false, 00:18:10.309 "compare_and_write": false, 00:18:10.309 "abort": true, 00:18:10.309 "nvme_admin": false, 00:18:10.309 "nvme_io": false 00:18:10.309 }, 00:18:10.309 "memory_domains": [ 00:18:10.309 { 00:18:10.309 "dma_device_id": "system", 00:18:10.309 "dma_device_type": 1 00:18:10.309 }, 00:18:10.309 { 00:18:10.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.309 "dma_device_type": 2 00:18:10.309 } 00:18:10.309 ], 00:18:10.309 "driver_specific": {} 00:18:10.309 } 00:18:10.309 ] 00:18:10.309 12:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:10.309 
12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:10.309 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:10.309 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:10.309 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:10.309 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:10.309 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:10.310 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:10.310 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:10.310 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:10.310 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:10.310 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.310 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.575 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:10.575 "name": "Existed_Raid", 00:18:10.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.575 "strip_size_kb": 64, 00:18:10.575 "state": "configuring", 00:18:10.575 "raid_level": "concat", 00:18:10.575 "superblock": false, 00:18:10.575 "num_base_bdevs": 3, 00:18:10.575 "num_base_bdevs_discovered": 2, 00:18:10.575 "num_base_bdevs_operational": 3, 00:18:10.575 "base_bdevs_list": [ 00:18:10.575 { 00:18:10.575 "name": "BaseBdev1", 00:18:10.575 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:10.575 "is_configured": true, 00:18:10.575 "data_offset": 0, 00:18:10.575 "data_size": 65536 00:18:10.575 }, 00:18:10.575 { 00:18:10.575 "name": null, 00:18:10.575 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:10.575 "is_configured": false, 00:18:10.575 "data_offset": 0, 00:18:10.575 "data_size": 65536 00:18:10.575 }, 00:18:10.575 { 00:18:10.575 "name": "BaseBdev3", 00:18:10.575 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:10.575 "is_configured": true, 00:18:10.575 "data_offset": 0, 00:18:10.575 "data_size": 65536 00:18:10.575 } 00:18:10.575 ] 00:18:10.575 }' 00:18:10.575 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:10.575 12:00:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:11.141 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.141 12:00:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:11.399 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:11.399 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:11.656 [2024-07-21 12:00:10.467563] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.656 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:11.914 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:11.914 "name": "Existed_Raid", 00:18:11.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.914 "strip_size_kb": 64, 00:18:11.914 "state": "configuring", 00:18:11.914 "raid_level": "concat", 00:18:11.914 "superblock": false, 00:18:11.914 "num_base_bdevs": 3, 00:18:11.914 "num_base_bdevs_discovered": 1, 00:18:11.914 "num_base_bdevs_operational": 3, 00:18:11.914 "base_bdevs_list": [ 00:18:11.914 { 00:18:11.914 "name": "BaseBdev1", 00:18:11.914 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:11.914 "is_configured": true, 00:18:11.914 "data_offset": 0, 00:18:11.914 "data_size": 65536 00:18:11.914 }, 00:18:11.914 { 00:18:11.914 "name": null, 00:18:11.914 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:11.914 "is_configured": false, 00:18:11.914 "data_offset": 0, 00:18:11.914 "data_size": 65536 00:18:11.914 }, 00:18:11.914 { 00:18:11.914 "name": null, 00:18:11.914 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:11.914 "is_configured": false, 00:18:11.914 "data_offset": 0, 00:18:11.914 "data_size": 65536 00:18:11.914 } 00:18:11.914 ] 00:18:11.914 }' 00:18:11.914 12:00:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:11.914 12:00:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.857 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.857 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:12.857 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:12.857 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
Existed_Raid BaseBdev3 00:18:13.114 [2024-07-21 12:00:11.831867] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.115 12:00:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.373 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:13.373 "name": "Existed_Raid", 00:18:13.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.373 "strip_size_kb": 64, 00:18:13.373 "state": "configuring", 00:18:13.373 "raid_level": "concat", 00:18:13.373 "superblock": false, 00:18:13.373 "num_base_bdevs": 3, 00:18:13.373 "num_base_bdevs_discovered": 2, 00:18:13.373 "num_base_bdevs_operational": 3, 00:18:13.373 "base_bdevs_list": [ 00:18:13.373 { 00:18:13.373 "name": "BaseBdev1", 00:18:13.373 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:13.373 "is_configured": true, 00:18:13.373 "data_offset": 0, 00:18:13.373 "data_size": 65536 00:18:13.373 }, 00:18:13.373 { 00:18:13.373 "name": null, 00:18:13.373 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:13.373 "is_configured": false, 00:18:13.373 "data_offset": 0, 00:18:13.373 "data_size": 65536 00:18:13.373 }, 00:18:13.373 { 00:18:13.373 "name": "BaseBdev3", 00:18:13.373 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:13.373 "is_configured": true, 00:18:13.373 "data_offset": 0, 00:18:13.373 "data_size": 65536 00:18:13.373 } 00:18:13.373 ] 00:18:13.373 }' 00:18:13.373 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:13.373 12:00:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.939 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.939 12:00:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:14.196 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:14.196 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:14.454 [2024-07-21 12:00:13.299246] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.712 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.970 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.970 "name": "Existed_Raid", 00:18:14.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.970 "strip_size_kb": 64, 00:18:14.970 "state": "configuring", 00:18:14.970 "raid_level": "concat", 00:18:14.970 "superblock": false, 00:18:14.970 "num_base_bdevs": 3, 00:18:14.970 "num_base_bdevs_discovered": 1, 00:18:14.970 "num_base_bdevs_operational": 3, 00:18:14.970 "base_bdevs_list": [ 00:18:14.970 { 00:18:14.971 "name": null, 00:18:14.971 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:14.971 "is_configured": false, 00:18:14.971 "data_offset": 0, 00:18:14.971 "data_size": 65536 00:18:14.971 }, 00:18:14.971 { 00:18:14.971 "name": null, 00:18:14.971 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:14.971 "is_configured": false, 00:18:14.971 "data_offset": 0, 00:18:14.971 "data_size": 65536 00:18:14.971 }, 00:18:14.971 { 00:18:14.971 "name": "BaseBdev3", 00:18:14.971 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:14.971 "is_configured": true, 00:18:14.971 "data_offset": 0, 00:18:14.971 "data_size": 65536 00:18:14.971 } 00:18:14.971 ] 00:18:14.971 }' 00:18:14.971 12:00:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.971 12:00:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.536 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.536 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:15.793 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:15.793 12:00:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:16.051 [2024-07-21 12:00:14.709235] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.051 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.308 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.308 "name": "Existed_Raid", 00:18:16.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.308 "strip_size_kb": 64, 00:18:16.308 "state": "configuring", 00:18:16.308 "raid_level": "concat", 00:18:16.308 "superblock": false, 00:18:16.308 "num_base_bdevs": 3, 00:18:16.308 "num_base_bdevs_discovered": 2, 00:18:16.308 "num_base_bdevs_operational": 3, 00:18:16.308 "base_bdevs_list": [ 00:18:16.308 { 00:18:16.308 "name": null, 00:18:16.308 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:16.308 "is_configured": false, 00:18:16.308 "data_offset": 0, 00:18:16.308 "data_size": 65536 00:18:16.308 }, 00:18:16.308 { 00:18:16.308 "name": "BaseBdev2", 00:18:16.308 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:16.308 "is_configured": true, 00:18:16.308 "data_offset": 0, 00:18:16.308 "data_size": 65536 00:18:16.308 }, 00:18:16.308 { 00:18:16.308 "name": "BaseBdev3", 00:18:16.308 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:16.308 "is_configured": true, 00:18:16.308 "data_offset": 0, 00:18:16.308 "data_size": 65536 00:18:16.308 } 00:18:16.308 ] 00:18:16.308 }' 00:18:16.308 12:00:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.308 12:00:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.873 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:16.873 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.130 12:00:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:17.130 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:17.130 12:00:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.387 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d683fdd0-99be-4f5b-b1e4-c8124b1f7f44 00:18:17.644 [2024-07-21 12:00:16.330227] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:17.644 [2024-07-21 12:00:16.330304] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:17.644 [2024-07-21 12:00:16.330315] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:17.644 [2024-07-21 12:00:16.330409] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:17.644 [2024-07-21 12:00:16.330788] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:17.644 [2024-07-21 12:00:16.330815] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008780 00:18:17.644 [2024-07-21 12:00:16.331028] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.644 NewBaseBdev 00:18:17.644 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:17.644 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:18:17.644 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:17.644 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:18:17.644 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:17.644 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:17.644 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:17.901 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:18.159 [ 00:18:18.159 { 00:18:18.159 "name": "NewBaseBdev", 00:18:18.159 "aliases": [ 00:18:18.159 "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44" 00:18:18.159 ], 00:18:18.159 "product_name": "Malloc disk", 00:18:18.159 "block_size": 512, 00:18:18.159 "num_blocks": 65536, 00:18:18.159 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:18.159 "assigned_rate_limits": { 00:18:18.159 "rw_ios_per_sec": 0, 00:18:18.159 "rw_mbytes_per_sec": 0, 00:18:18.159 "r_mbytes_per_sec": 0, 00:18:18.159 "w_mbytes_per_sec": 0 00:18:18.159 }, 00:18:18.159 "claimed": true, 00:18:18.159 "claim_type": "exclusive_write", 00:18:18.159 "zoned": false, 00:18:18.159 "supported_io_types": { 00:18:18.159 "read": true, 00:18:18.159 "write": true, 00:18:18.159 "unmap": true, 00:18:18.159 "write_zeroes": true, 00:18:18.159 "flush": true, 00:18:18.159 "reset": true, 00:18:18.159 "compare": false, 00:18:18.159 "compare_and_write": false, 
00:18:18.159 "abort": true, 00:18:18.159 "nvme_admin": false, 00:18:18.159 "nvme_io": false 00:18:18.159 }, 00:18:18.159 "memory_domains": [ 00:18:18.159 { 00:18:18.159 "dma_device_id": "system", 00:18:18.159 "dma_device_type": 1 00:18:18.159 }, 00:18:18.159 { 00:18:18.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:18.159 "dma_device_type": 2 00:18:18.159 } 00:18:18.159 ], 00:18:18.159 "driver_specific": {} 00:18:18.159 } 00:18:18.159 ] 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.159 12:00:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.417 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.417 "name": "Existed_Raid", 00:18:18.417 "uuid": "0213aeef-4fc9-41a0-bf7d-4ab7ea53f252", 00:18:18.417 "strip_size_kb": 64, 00:18:18.417 "state": "online", 00:18:18.417 "raid_level": "concat", 00:18:18.417 "superblock": false, 00:18:18.417 "num_base_bdevs": 3, 00:18:18.417 "num_base_bdevs_discovered": 3, 00:18:18.417 "num_base_bdevs_operational": 3, 00:18:18.417 "base_bdevs_list": [ 00:18:18.417 { 00:18:18.417 "name": "NewBaseBdev", 00:18:18.417 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:18.417 "is_configured": true, 00:18:18.417 "data_offset": 0, 00:18:18.417 "data_size": 65536 00:18:18.417 }, 00:18:18.417 { 00:18:18.417 "name": "BaseBdev2", 00:18:18.417 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:18.417 "is_configured": true, 00:18:18.417 "data_offset": 0, 00:18:18.417 "data_size": 65536 00:18:18.417 }, 00:18:18.417 { 00:18:18.417 "name": "BaseBdev3", 00:18:18.417 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:18.417 "is_configured": true, 00:18:18.417 "data_offset": 0, 00:18:18.417 "data_size": 65536 00:18:18.417 } 00:18:18.417 ] 00:18:18.417 }' 00:18:18.417 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.417 12:00:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.982 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 
00:18:18.982 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:18.982 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:18.982 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:18.982 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:18.982 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:18.982 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:18.982 12:00:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:19.241 [2024-07-21 12:00:17.995485] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.241 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:19.241 "name": "Existed_Raid", 00:18:19.241 "aliases": [ 00:18:19.241 "0213aeef-4fc9-41a0-bf7d-4ab7ea53f252" 00:18:19.241 ], 00:18:19.241 "product_name": "Raid Volume", 00:18:19.241 "block_size": 512, 00:18:19.241 "num_blocks": 196608, 00:18:19.241 "uuid": "0213aeef-4fc9-41a0-bf7d-4ab7ea53f252", 00:18:19.241 "assigned_rate_limits": { 00:18:19.241 "rw_ios_per_sec": 0, 00:18:19.241 "rw_mbytes_per_sec": 0, 00:18:19.241 "r_mbytes_per_sec": 0, 00:18:19.241 "w_mbytes_per_sec": 0 00:18:19.241 }, 00:18:19.241 "claimed": false, 00:18:19.241 "zoned": false, 00:18:19.241 "supported_io_types": { 00:18:19.241 "read": true, 00:18:19.241 "write": true, 00:18:19.241 "unmap": true, 00:18:19.241 "write_zeroes": true, 00:18:19.241 "flush": true, 00:18:19.241 "reset": true, 00:18:19.241 "compare": false, 00:18:19.241 "compare_and_write": false, 00:18:19.241 "abort": false, 00:18:19.241 "nvme_admin": false, 00:18:19.241 "nvme_io": false 00:18:19.241 }, 00:18:19.241 "memory_domains": [ 00:18:19.241 { 00:18:19.241 "dma_device_id": "system", 00:18:19.241 "dma_device_type": 1 00:18:19.241 }, 00:18:19.241 { 00:18:19.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.241 "dma_device_type": 2 00:18:19.241 }, 00:18:19.241 { 00:18:19.241 "dma_device_id": "system", 00:18:19.241 "dma_device_type": 1 00:18:19.241 }, 00:18:19.241 { 00:18:19.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.241 "dma_device_type": 2 00:18:19.241 }, 00:18:19.241 { 00:18:19.241 "dma_device_id": "system", 00:18:19.241 "dma_device_type": 1 00:18:19.241 }, 00:18:19.241 { 00:18:19.241 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.241 "dma_device_type": 2 00:18:19.241 } 00:18:19.241 ], 00:18:19.241 "driver_specific": { 00:18:19.241 "raid": { 00:18:19.241 "uuid": "0213aeef-4fc9-41a0-bf7d-4ab7ea53f252", 00:18:19.241 "strip_size_kb": 64, 00:18:19.241 "state": "online", 00:18:19.241 "raid_level": "concat", 00:18:19.241 "superblock": false, 00:18:19.241 "num_base_bdevs": 3, 00:18:19.241 "num_base_bdevs_discovered": 3, 00:18:19.241 "num_base_bdevs_operational": 3, 00:18:19.241 "base_bdevs_list": [ 00:18:19.241 { 00:18:19.241 "name": "NewBaseBdev", 00:18:19.241 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:19.241 "is_configured": true, 00:18:19.241 "data_offset": 0, 00:18:19.241 "data_size": 65536 00:18:19.241 }, 00:18:19.241 { 00:18:19.241 "name": "BaseBdev2", 00:18:19.241 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:19.241 "is_configured": true, 00:18:19.241 "data_offset": 0, 
00:18:19.241 "data_size": 65536 00:18:19.241 }, 00:18:19.241 { 00:18:19.241 "name": "BaseBdev3", 00:18:19.241 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:19.241 "is_configured": true, 00:18:19.241 "data_offset": 0, 00:18:19.241 "data_size": 65536 00:18:19.241 } 00:18:19.241 ] 00:18:19.241 } 00:18:19.241 } 00:18:19.241 }' 00:18:19.241 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.241 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:19.241 BaseBdev2 00:18:19.241 BaseBdev3' 00:18:19.241 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:19.241 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:19.241 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:19.499 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:19.499 "name": "NewBaseBdev", 00:18:19.499 "aliases": [ 00:18:19.499 "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44" 00:18:19.499 ], 00:18:19.499 "product_name": "Malloc disk", 00:18:19.499 "block_size": 512, 00:18:19.499 "num_blocks": 65536, 00:18:19.499 "uuid": "d683fdd0-99be-4f5b-b1e4-c8124b1f7f44", 00:18:19.499 "assigned_rate_limits": { 00:18:19.499 "rw_ios_per_sec": 0, 00:18:19.499 "rw_mbytes_per_sec": 0, 00:18:19.499 "r_mbytes_per_sec": 0, 00:18:19.499 "w_mbytes_per_sec": 0 00:18:19.499 }, 00:18:19.499 "claimed": true, 00:18:19.499 "claim_type": "exclusive_write", 00:18:19.499 "zoned": false, 00:18:19.499 "supported_io_types": { 00:18:19.499 "read": true, 00:18:19.499 "write": true, 00:18:19.499 "unmap": true, 00:18:19.499 "write_zeroes": true, 00:18:19.499 "flush": true, 00:18:19.499 "reset": true, 00:18:19.499 "compare": false, 00:18:19.499 "compare_and_write": false, 00:18:19.499 "abort": true, 00:18:19.499 "nvme_admin": false, 00:18:19.499 "nvme_io": false 00:18:19.499 }, 00:18:19.499 "memory_domains": [ 00:18:19.499 { 00:18:19.499 "dma_device_id": "system", 00:18:19.499 "dma_device_type": 1 00:18:19.499 }, 00:18:19.499 { 00:18:19.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.499 "dma_device_type": 2 00:18:19.499 } 00:18:19.499 ], 00:18:19.499 "driver_specific": {} 00:18:19.499 }' 00:18:19.499 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.499 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:19.756 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:19.756 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.756 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:19.756 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:19.756 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.756 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:19.756 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:19.756 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.014 12:00:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.014 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.014 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:20.014 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:20.014 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:20.272 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:20.272 "name": "BaseBdev2", 00:18:20.272 "aliases": [ 00:18:20.272 "5b26a206-87bb-47d7-8931-b9df77967069" 00:18:20.272 ], 00:18:20.272 "product_name": "Malloc disk", 00:18:20.272 "block_size": 512, 00:18:20.272 "num_blocks": 65536, 00:18:20.272 "uuid": "5b26a206-87bb-47d7-8931-b9df77967069", 00:18:20.272 "assigned_rate_limits": { 00:18:20.272 "rw_ios_per_sec": 0, 00:18:20.272 "rw_mbytes_per_sec": 0, 00:18:20.272 "r_mbytes_per_sec": 0, 00:18:20.272 "w_mbytes_per_sec": 0 00:18:20.272 }, 00:18:20.272 "claimed": true, 00:18:20.272 "claim_type": "exclusive_write", 00:18:20.272 "zoned": false, 00:18:20.272 "supported_io_types": { 00:18:20.272 "read": true, 00:18:20.272 "write": true, 00:18:20.272 "unmap": true, 00:18:20.272 "write_zeroes": true, 00:18:20.272 "flush": true, 00:18:20.272 "reset": true, 00:18:20.272 "compare": false, 00:18:20.272 "compare_and_write": false, 00:18:20.272 "abort": true, 00:18:20.272 "nvme_admin": false, 00:18:20.272 "nvme_io": false 00:18:20.272 }, 00:18:20.272 "memory_domains": [ 00:18:20.272 { 00:18:20.272 "dma_device_id": "system", 00:18:20.272 "dma_device_type": 1 00:18:20.272 }, 00:18:20.272 { 00:18:20.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:20.272 "dma_device_type": 2 00:18:20.272 } 00:18:20.272 ], 00:18:20.272 "driver_specific": {} 00:18:20.272 }' 00:18:20.272 12:00:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.272 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:20.272 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:20.272 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.529 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:20.529 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:20.529 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.529 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:20.529 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:20.529 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.529 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:20.529 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:20.529 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:20.787 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:20.787 
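Each member is then compared field by field against the raid volume itself; in this run every bdev reports block_size 512 and null for md_size, md_interleave and dif_type, so all of the [[ ... == ... ]] checks above succeed. A compact sketch of that loop, continuing from the previous sketch (it reuses $rpc, $raid_bdev_info and $base_bdev_names):

  for name in $base_bdev_names; do
      base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      for field in .block_size .md_size .md_interleave .dif_type; do
          [[ "$(jq "$field" <<< "$raid_bdev_info")" == "$(jq "$field" <<< "$base_bdev_info")" ]] \
              || { echo "$name: $field mismatch"; exit 1; }
      done
  done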
12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:21.045 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:21.045 "name": "BaseBdev3", 00:18:21.045 "aliases": [ 00:18:21.045 "3b7be58f-7e36-45af-ad87-0bf1ab046114" 00:18:21.045 ], 00:18:21.045 "product_name": "Malloc disk", 00:18:21.045 "block_size": 512, 00:18:21.045 "num_blocks": 65536, 00:18:21.045 "uuid": "3b7be58f-7e36-45af-ad87-0bf1ab046114", 00:18:21.045 "assigned_rate_limits": { 00:18:21.045 "rw_ios_per_sec": 0, 00:18:21.045 "rw_mbytes_per_sec": 0, 00:18:21.045 "r_mbytes_per_sec": 0, 00:18:21.045 "w_mbytes_per_sec": 0 00:18:21.045 }, 00:18:21.045 "claimed": true, 00:18:21.045 "claim_type": "exclusive_write", 00:18:21.045 "zoned": false, 00:18:21.045 "supported_io_types": { 00:18:21.045 "read": true, 00:18:21.045 "write": true, 00:18:21.045 "unmap": true, 00:18:21.045 "write_zeroes": true, 00:18:21.045 "flush": true, 00:18:21.045 "reset": true, 00:18:21.045 "compare": false, 00:18:21.045 "compare_and_write": false, 00:18:21.045 "abort": true, 00:18:21.045 "nvme_admin": false, 00:18:21.045 "nvme_io": false 00:18:21.045 }, 00:18:21.045 "memory_domains": [ 00:18:21.045 { 00:18:21.045 "dma_device_id": "system", 00:18:21.045 "dma_device_type": 1 00:18:21.045 }, 00:18:21.045 { 00:18:21.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.045 "dma_device_type": 2 00:18:21.045 } 00:18:21.045 ], 00:18:21.045 "driver_specific": {} 00:18:21.045 }' 00:18:21.045 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.045 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:21.045 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:21.045 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:21.045 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:21.045 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:21.045 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:21.045 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:21.303 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:21.303 12:00:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:21.303 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:21.303 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:21.303 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:21.561 [2024-07-21 12:00:20.315305] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.561 [2024-07-21 12:00:20.315346] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:21.561 [2024-07-21 12:00:20.315445] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:21.561 [2024-07-21 12:00:20.315536] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:21.561 [2024-07-21 12:00:20.315550] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000008780 name Existed_Raid, state offline 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 138657 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 138657 ']' 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 138657 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 138657 00:18:21.561 killing process with pid 138657 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 138657' 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 138657 00:18:21.561 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 138657 00:18:21.561 [2024-07-21 12:00:20.354217] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:21.561 [2024-07-21 12:00:20.385290] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:21.819 12:00:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:18:21.819 00:18:21.819 real 0m30.108s 00:18:21.819 user 0m57.482s 00:18:21.819 sys 0m3.391s 00:18:21.819 ************************************ 00:18:21.819 END TEST raid_state_function_test 00:18:21.819 ************************************ 00:18:21.819 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:21.819 12:00:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.819 12:00:20 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:18:21.819 12:00:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:21.819 12:00:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:21.819 12:00:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.077 ************************************ 00:18:22.077 START TEST raid_state_function_test_sb 00:18:22.077 ************************************ 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 3 true 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- 
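Teardown of the first test, traced above, is two steps: the raid bdev is deleted over RPC, then the bdev_svc process (pid 138657 in this run) is verified, killed and reaped. A simplified sketch of that sequence (the traced killprocess helper additionally checks whether the process runs under sudo before deciding how to kill it):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $rpc bdev_raid_delete Existed_Raid
  kill -0 138657                          # pid must still be alive
  ps --no-headers -o comm= 138657         # reports reactor_0, i.e. the SPDK app
  kill 138657
  wait 138657                             # reap; only works from the shell that launched it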
bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.077 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=139649 00:18:22.078 Process raid pid: 139649 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 139649' 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 139649 /var/tmp/spdk-raid.sock 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 139649 ']' 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:22.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
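The superblock variant then starts a fresh harness: bdev_svc is launched with the dedicated raid RPC socket and the test waits for it to listen before issuing RPCs. A simplified sketch of that startup, with waitforlisten reduced to polling the socket via rpc_get_methods (a crude stand-in for the traced helper):

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  until $rpc rpc_get_methods >/dev/null 2>&1; do   # simplified waitforlisten
      sleep 0.1
  done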
00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:22.078 12:00:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.078 [2024-07-21 12:00:20.764937] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:22.078 [2024-07-21 12:00:20.765199] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.078 [2024-07-21 12:00:20.933561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.336 [2024-07-21 12:00:21.034300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.336 [2024-07-21 12:00:21.092663] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.903 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:22.903 12:00:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:18:22.903 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:23.161 [2024-07-21 12:00:21.939066] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.161 [2024-07-21 12:00:21.939183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:23.161 [2024-07-21 12:00:21.939205] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.161 [2024-07-21 12:00:21.939230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.161 [2024-07-21 12:00:21.939239] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.161 [2024-07-21 12:00:21.939280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.161 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.162 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
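The bdev_raid_create call traced above is issued before any of its named base bdevs exist, which is why bdev_open_ext reports all three as missing and the array is registered in the configuring state. The literal RPC from the trace, spelled out on its own:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # -z 64: 64 KiB strip size, -s: write a superblock, -r concat: raid level
  $rpc bdev_raid_create -z 64 -s -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid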
bdev_raid_get_bdevs all 00:18:23.162 12:00:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.420 12:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:23.420 "name": "Existed_Raid", 00:18:23.420 "uuid": "07bf4406-0f57-4930-b842-138ae9faa50c", 00:18:23.420 "strip_size_kb": 64, 00:18:23.420 "state": "configuring", 00:18:23.420 "raid_level": "concat", 00:18:23.420 "superblock": true, 00:18:23.420 "num_base_bdevs": 3, 00:18:23.420 "num_base_bdevs_discovered": 0, 00:18:23.420 "num_base_bdevs_operational": 3, 00:18:23.420 "base_bdevs_list": [ 00:18:23.420 { 00:18:23.420 "name": "BaseBdev1", 00:18:23.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.420 "is_configured": false, 00:18:23.420 "data_offset": 0, 00:18:23.420 "data_size": 0 00:18:23.420 }, 00:18:23.420 { 00:18:23.420 "name": "BaseBdev2", 00:18:23.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.420 "is_configured": false, 00:18:23.420 "data_offset": 0, 00:18:23.420 "data_size": 0 00:18:23.420 }, 00:18:23.420 { 00:18:23.420 "name": "BaseBdev3", 00:18:23.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.420 "is_configured": false, 00:18:23.420 "data_offset": 0, 00:18:23.420 "data_size": 0 00:18:23.420 } 00:18:23.420 ] 00:18:23.420 }' 00:18:23.420 12:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:23.420 12:00:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.985 12:00:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:24.244 [2024-07-21 12:00:23.095191] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:24.244 [2024-07-21 12:00:23.095244] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:18:24.502 12:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:24.502 [2024-07-21 12:00:23.323237] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:24.502 [2024-07-21 12:00:23.323351] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:24.502 [2024-07-21 12:00:23.323366] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.502 [2024-07-21 12:00:23.323386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.502 [2024-07-21 12:00:23.323394] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:24.502 [2024-07-21 12:00:23.323419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:24.502 12:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:24.759 [2024-07-21 12:00:23.614385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.759 BaseBdev1 00:18:25.016 12:00:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:18:25.016 12:00:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:25.016 12:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:25.016 12:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:25.016 12:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:25.016 12:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:25.016 12:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:25.016 12:00:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:25.274 [ 00:18:25.274 { 00:18:25.274 "name": "BaseBdev1", 00:18:25.274 "aliases": [ 00:18:25.274 "041b08d6-98c3-4872-866e-7d659c7bc84a" 00:18:25.274 ], 00:18:25.274 "product_name": "Malloc disk", 00:18:25.274 "block_size": 512, 00:18:25.274 "num_blocks": 65536, 00:18:25.274 "uuid": "041b08d6-98c3-4872-866e-7d659c7bc84a", 00:18:25.274 "assigned_rate_limits": { 00:18:25.274 "rw_ios_per_sec": 0, 00:18:25.274 "rw_mbytes_per_sec": 0, 00:18:25.274 "r_mbytes_per_sec": 0, 00:18:25.274 "w_mbytes_per_sec": 0 00:18:25.274 }, 00:18:25.274 "claimed": true, 00:18:25.274 "claim_type": "exclusive_write", 00:18:25.274 "zoned": false, 00:18:25.274 "supported_io_types": { 00:18:25.274 "read": true, 00:18:25.274 "write": true, 00:18:25.274 "unmap": true, 00:18:25.274 "write_zeroes": true, 00:18:25.274 "flush": true, 00:18:25.274 "reset": true, 00:18:25.274 "compare": false, 00:18:25.274 "compare_and_write": false, 00:18:25.274 "abort": true, 00:18:25.274 "nvme_admin": false, 00:18:25.274 "nvme_io": false 00:18:25.274 }, 00:18:25.274 "memory_domains": [ 00:18:25.274 { 00:18:25.274 "dma_device_id": "system", 00:18:25.274 "dma_device_type": 1 00:18:25.274 }, 00:18:25.274 { 00:18:25.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.274 "dma_device_type": 2 00:18:25.274 } 00:18:25.274 ], 00:18:25.274 "driver_specific": {} 00:18:25.274 } 00:18:25.274 ] 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.274 12:00:24 
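Members are added one at a time: a malloc bdev is created, bdev examine is flushed, and the new bdev is polled for with a 2000 ms timeout before the raid state is re-checked. The same three RPCs from the trace, in isolation:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $rpc bdev_malloc_create 32 512 -b BaseBdev1   # 32 MiB at 512-byte blocks, i.e. 65536 blocks
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b BaseBdev1 -t 2000      # wait up to 2000 ms for the bdev to appear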
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.274 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.531 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.531 "name": "Existed_Raid", 00:18:25.531 "uuid": "c43eed3f-ba85-40de-a363-4fac1f1e7926", 00:18:25.531 "strip_size_kb": 64, 00:18:25.531 "state": "configuring", 00:18:25.531 "raid_level": "concat", 00:18:25.531 "superblock": true, 00:18:25.531 "num_base_bdevs": 3, 00:18:25.531 "num_base_bdevs_discovered": 1, 00:18:25.531 "num_base_bdevs_operational": 3, 00:18:25.531 "base_bdevs_list": [ 00:18:25.531 { 00:18:25.531 "name": "BaseBdev1", 00:18:25.531 "uuid": "041b08d6-98c3-4872-866e-7d659c7bc84a", 00:18:25.531 "is_configured": true, 00:18:25.531 "data_offset": 2048, 00:18:25.531 "data_size": 63488 00:18:25.531 }, 00:18:25.531 { 00:18:25.531 "name": "BaseBdev2", 00:18:25.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.531 "is_configured": false, 00:18:25.531 "data_offset": 0, 00:18:25.531 "data_size": 0 00:18:25.531 }, 00:18:25.531 { 00:18:25.531 "name": "BaseBdev3", 00:18:25.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.531 "is_configured": false, 00:18:25.531 "data_offset": 0, 00:18:25.531 "data_size": 0 00:18:25.531 } 00:18:25.531 ] 00:18:25.531 }' 00:18:25.789 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.789 12:00:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.355 12:00:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:26.355 [2024-07-21 12:00:25.206806] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:26.355 [2024-07-21 12:00:25.206898] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:26.613 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:26.613 [2024-07-21 12:00:25.479019] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:26.871 [2024-07-21 12:00:25.481309] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:26.871 [2024-07-21 12:00:25.481392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:26.871 [2024-07-21 12:00:25.481421] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:26.871 [2024-07-21 12:00:25.481466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:26.871 "name": "Existed_Raid", 00:18:26.871 "uuid": "ae6e3146-3ac8-49fe-8fe9-52acbd8b327a", 00:18:26.871 "strip_size_kb": 64, 00:18:26.871 "state": "configuring", 00:18:26.871 "raid_level": "concat", 00:18:26.871 "superblock": true, 00:18:26.871 "num_base_bdevs": 3, 00:18:26.871 "num_base_bdevs_discovered": 1, 00:18:26.871 "num_base_bdevs_operational": 3, 00:18:26.871 "base_bdevs_list": [ 00:18:26.871 { 00:18:26.871 "name": "BaseBdev1", 00:18:26.871 "uuid": "041b08d6-98c3-4872-866e-7d659c7bc84a", 00:18:26.871 "is_configured": true, 00:18:26.871 "data_offset": 2048, 00:18:26.871 "data_size": 63488 00:18:26.871 }, 00:18:26.871 { 00:18:26.871 "name": "BaseBdev2", 00:18:26.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.871 "is_configured": false, 00:18:26.871 "data_offset": 0, 00:18:26.871 "data_size": 0 00:18:26.871 }, 00:18:26.871 { 00:18:26.871 "name": "BaseBdev3", 00:18:26.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.871 "is_configured": false, 00:18:26.871 "data_offset": 0, 00:18:26.871 "data_size": 0 00:18:26.871 } 00:18:26.871 ] 00:18:26.871 }' 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:26.871 12:00:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.803 12:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:27.803 [2024-07-21 12:00:26.554689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:27.803 BaseBdev2 00:18:27.803 12:00:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:18:27.803 12:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:27.803 12:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:27.803 12:00:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local i 00:18:27.803 12:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:27.803 12:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:27.803 12:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:28.061 12:00:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.319 [ 00:18:28.319 { 00:18:28.319 "name": "BaseBdev2", 00:18:28.319 "aliases": [ 00:18:28.319 "d7408731-c1d2-4170-8196-df65c4a8e715" 00:18:28.319 ], 00:18:28.319 "product_name": "Malloc disk", 00:18:28.319 "block_size": 512, 00:18:28.319 "num_blocks": 65536, 00:18:28.319 "uuid": "d7408731-c1d2-4170-8196-df65c4a8e715", 00:18:28.319 "assigned_rate_limits": { 00:18:28.319 "rw_ios_per_sec": 0, 00:18:28.319 "rw_mbytes_per_sec": 0, 00:18:28.319 "r_mbytes_per_sec": 0, 00:18:28.319 "w_mbytes_per_sec": 0 00:18:28.319 }, 00:18:28.319 "claimed": true, 00:18:28.319 "claim_type": "exclusive_write", 00:18:28.319 "zoned": false, 00:18:28.319 "supported_io_types": { 00:18:28.319 "read": true, 00:18:28.319 "write": true, 00:18:28.319 "unmap": true, 00:18:28.319 "write_zeroes": true, 00:18:28.319 "flush": true, 00:18:28.319 "reset": true, 00:18:28.319 "compare": false, 00:18:28.319 "compare_and_write": false, 00:18:28.319 "abort": true, 00:18:28.319 "nvme_admin": false, 00:18:28.319 "nvme_io": false 00:18:28.319 }, 00:18:28.319 "memory_domains": [ 00:18:28.319 { 00:18:28.319 "dma_device_id": "system", 00:18:28.319 "dma_device_type": 1 00:18:28.319 }, 00:18:28.319 { 00:18:28.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.319 "dma_device_type": 2 00:18:28.319 } 00:18:28.319 ], 00:18:28.319 "driver_specific": {} 00:18:28.319 } 00:18:28.319 ] 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:28.319 
12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:28.319 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.577 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:28.577 "name": "Existed_Raid", 00:18:28.577 "uuid": "ae6e3146-3ac8-49fe-8fe9-52acbd8b327a", 00:18:28.577 "strip_size_kb": 64, 00:18:28.577 "state": "configuring", 00:18:28.577 "raid_level": "concat", 00:18:28.577 "superblock": true, 00:18:28.577 "num_base_bdevs": 3, 00:18:28.577 "num_base_bdevs_discovered": 2, 00:18:28.577 "num_base_bdevs_operational": 3, 00:18:28.578 "base_bdevs_list": [ 00:18:28.578 { 00:18:28.578 "name": "BaseBdev1", 00:18:28.578 "uuid": "041b08d6-98c3-4872-866e-7d659c7bc84a", 00:18:28.578 "is_configured": true, 00:18:28.578 "data_offset": 2048, 00:18:28.578 "data_size": 63488 00:18:28.578 }, 00:18:28.578 { 00:18:28.578 "name": "BaseBdev2", 00:18:28.578 "uuid": "d7408731-c1d2-4170-8196-df65c4a8e715", 00:18:28.578 "is_configured": true, 00:18:28.578 "data_offset": 2048, 00:18:28.578 "data_size": 63488 00:18:28.578 }, 00:18:28.578 { 00:18:28.578 "name": "BaseBdev3", 00:18:28.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.578 "is_configured": false, 00:18:28.578 "data_offset": 0, 00:18:28.578 "data_size": 0 00:18:28.578 } 00:18:28.578 ] 00:18:28.578 }' 00:18:28.578 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:28.578 12:00:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.143 12:00:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:29.400 [2024-07-21 12:00:28.224258] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:29.400 [2024-07-21 12:00:28.224546] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:18:29.400 [2024-07-21 12:00:28.224563] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:29.400 [2024-07-21 12:00:28.224740] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:29.400 [2024-07-21 12:00:28.225171] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:18:29.400 [2024-07-21 12:00:28.225198] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:18:29.400 [2024-07-21 12:00:28.225366] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.400 BaseBdev3 00:18:29.400 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:18:29.400 12:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:29.400 12:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:29.400 12:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:29.400 12:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:29.400 12:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 
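The sizes logged above line up: with a superblock each 65536-block malloc member gives up 2048 blocks of metadata (data_offset 2048), leaving data_size 63488, so the three-member concat exposes 3 × 63488 = 190464 blocks, matching the blockcnt reported when the array comes online. In the earlier non-superblock run the members contributed their full 65536 blocks each, hence the 196608-block volume.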
00:18:29.400 12:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.658 12:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:29.917 [ 00:18:29.917 { 00:18:29.917 "name": "BaseBdev3", 00:18:29.917 "aliases": [ 00:18:29.917 "54633cad-ad76-4186-8369-46890d1beb42" 00:18:29.917 ], 00:18:29.917 "product_name": "Malloc disk", 00:18:29.917 "block_size": 512, 00:18:29.917 "num_blocks": 65536, 00:18:29.917 "uuid": "54633cad-ad76-4186-8369-46890d1beb42", 00:18:29.917 "assigned_rate_limits": { 00:18:29.917 "rw_ios_per_sec": 0, 00:18:29.917 "rw_mbytes_per_sec": 0, 00:18:29.917 "r_mbytes_per_sec": 0, 00:18:29.917 "w_mbytes_per_sec": 0 00:18:29.917 }, 00:18:29.917 "claimed": true, 00:18:29.917 "claim_type": "exclusive_write", 00:18:29.917 "zoned": false, 00:18:29.917 "supported_io_types": { 00:18:29.917 "read": true, 00:18:29.917 "write": true, 00:18:29.917 "unmap": true, 00:18:29.917 "write_zeroes": true, 00:18:29.917 "flush": true, 00:18:29.917 "reset": true, 00:18:29.917 "compare": false, 00:18:29.917 "compare_and_write": false, 00:18:29.917 "abort": true, 00:18:29.917 "nvme_admin": false, 00:18:29.917 "nvme_io": false 00:18:29.917 }, 00:18:29.917 "memory_domains": [ 00:18:29.917 { 00:18:29.917 "dma_device_id": "system", 00:18:29.917 "dma_device_type": 1 00:18:29.917 }, 00:18:29.917 { 00:18:29.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.917 "dma_device_type": 2 00:18:29.917 } 00:18:29.917 ], 00:18:29.917 "driver_specific": {} 00:18:29.917 } 00:18:29.917 ] 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.917 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:18:30.182 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:30.182 "name": "Existed_Raid", 00:18:30.182 "uuid": "ae6e3146-3ac8-49fe-8fe9-52acbd8b327a", 00:18:30.182 "strip_size_kb": 64, 00:18:30.182 "state": "online", 00:18:30.182 "raid_level": "concat", 00:18:30.182 "superblock": true, 00:18:30.182 "num_base_bdevs": 3, 00:18:30.182 "num_base_bdevs_discovered": 3, 00:18:30.182 "num_base_bdevs_operational": 3, 00:18:30.182 "base_bdevs_list": [ 00:18:30.182 { 00:18:30.182 "name": "BaseBdev1", 00:18:30.182 "uuid": "041b08d6-98c3-4872-866e-7d659c7bc84a", 00:18:30.182 "is_configured": true, 00:18:30.182 "data_offset": 2048, 00:18:30.182 "data_size": 63488 00:18:30.182 }, 00:18:30.182 { 00:18:30.182 "name": "BaseBdev2", 00:18:30.182 "uuid": "d7408731-c1d2-4170-8196-df65c4a8e715", 00:18:30.182 "is_configured": true, 00:18:30.182 "data_offset": 2048, 00:18:30.182 "data_size": 63488 00:18:30.182 }, 00:18:30.182 { 00:18:30.182 "name": "BaseBdev3", 00:18:30.182 "uuid": "54633cad-ad76-4186-8369-46890d1beb42", 00:18:30.182 "is_configured": true, 00:18:30.182 "data_offset": 2048, 00:18:30.182 "data_size": 63488 00:18:30.182 } 00:18:30.182 ] 00:18:30.182 }' 00:18:30.182 12:00:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:30.182 12:00:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.122 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:18:31.122 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:31.122 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:31.122 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:31.122 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:31.122 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:31.122 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:31.122 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:31.122 [2024-07-21 12:00:29.917028] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:31.122 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:31.122 "name": "Existed_Raid", 00:18:31.122 "aliases": [ 00:18:31.122 "ae6e3146-3ac8-49fe-8fe9-52acbd8b327a" 00:18:31.122 ], 00:18:31.122 "product_name": "Raid Volume", 00:18:31.122 "block_size": 512, 00:18:31.122 "num_blocks": 190464, 00:18:31.122 "uuid": "ae6e3146-3ac8-49fe-8fe9-52acbd8b327a", 00:18:31.122 "assigned_rate_limits": { 00:18:31.123 "rw_ios_per_sec": 0, 00:18:31.123 "rw_mbytes_per_sec": 0, 00:18:31.123 "r_mbytes_per_sec": 0, 00:18:31.123 "w_mbytes_per_sec": 0 00:18:31.123 }, 00:18:31.123 "claimed": false, 00:18:31.123 "zoned": false, 00:18:31.123 "supported_io_types": { 00:18:31.123 "read": true, 00:18:31.123 "write": true, 00:18:31.123 "unmap": true, 00:18:31.123 "write_zeroes": true, 00:18:31.123 "flush": true, 00:18:31.123 "reset": true, 00:18:31.123 "compare": false, 00:18:31.123 "compare_and_write": false, 00:18:31.123 "abort": false, 00:18:31.123 
"nvme_admin": false, 00:18:31.123 "nvme_io": false 00:18:31.123 }, 00:18:31.123 "memory_domains": [ 00:18:31.123 { 00:18:31.123 "dma_device_id": "system", 00:18:31.123 "dma_device_type": 1 00:18:31.123 }, 00:18:31.123 { 00:18:31.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.123 "dma_device_type": 2 00:18:31.123 }, 00:18:31.123 { 00:18:31.123 "dma_device_id": "system", 00:18:31.123 "dma_device_type": 1 00:18:31.123 }, 00:18:31.123 { 00:18:31.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.123 "dma_device_type": 2 00:18:31.123 }, 00:18:31.123 { 00:18:31.123 "dma_device_id": "system", 00:18:31.123 "dma_device_type": 1 00:18:31.123 }, 00:18:31.123 { 00:18:31.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.123 "dma_device_type": 2 00:18:31.123 } 00:18:31.123 ], 00:18:31.123 "driver_specific": { 00:18:31.123 "raid": { 00:18:31.123 "uuid": "ae6e3146-3ac8-49fe-8fe9-52acbd8b327a", 00:18:31.123 "strip_size_kb": 64, 00:18:31.123 "state": "online", 00:18:31.123 "raid_level": "concat", 00:18:31.123 "superblock": true, 00:18:31.123 "num_base_bdevs": 3, 00:18:31.123 "num_base_bdevs_discovered": 3, 00:18:31.123 "num_base_bdevs_operational": 3, 00:18:31.123 "base_bdevs_list": [ 00:18:31.123 { 00:18:31.123 "name": "BaseBdev1", 00:18:31.123 "uuid": "041b08d6-98c3-4872-866e-7d659c7bc84a", 00:18:31.123 "is_configured": true, 00:18:31.123 "data_offset": 2048, 00:18:31.123 "data_size": 63488 00:18:31.123 }, 00:18:31.123 { 00:18:31.123 "name": "BaseBdev2", 00:18:31.123 "uuid": "d7408731-c1d2-4170-8196-df65c4a8e715", 00:18:31.123 "is_configured": true, 00:18:31.123 "data_offset": 2048, 00:18:31.123 "data_size": 63488 00:18:31.123 }, 00:18:31.123 { 00:18:31.123 "name": "BaseBdev3", 00:18:31.123 "uuid": "54633cad-ad76-4186-8369-46890d1beb42", 00:18:31.123 "is_configured": true, 00:18:31.123 "data_offset": 2048, 00:18:31.123 "data_size": 63488 00:18:31.123 } 00:18:31.123 ] 00:18:31.123 } 00:18:31.123 } 00:18:31.123 }' 00:18:31.123 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:31.381 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:18:31.381 BaseBdev2 00:18:31.381 BaseBdev3' 00:18:31.381 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:31.381 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:18:31.381 12:00:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:31.381 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:31.381 "name": "BaseBdev1", 00:18:31.381 "aliases": [ 00:18:31.381 "041b08d6-98c3-4872-866e-7d659c7bc84a" 00:18:31.381 ], 00:18:31.381 "product_name": "Malloc disk", 00:18:31.381 "block_size": 512, 00:18:31.381 "num_blocks": 65536, 00:18:31.381 "uuid": "041b08d6-98c3-4872-866e-7d659c7bc84a", 00:18:31.381 "assigned_rate_limits": { 00:18:31.381 "rw_ios_per_sec": 0, 00:18:31.381 "rw_mbytes_per_sec": 0, 00:18:31.381 "r_mbytes_per_sec": 0, 00:18:31.381 "w_mbytes_per_sec": 0 00:18:31.381 }, 00:18:31.381 "claimed": true, 00:18:31.381 "claim_type": "exclusive_write", 00:18:31.381 "zoned": false, 00:18:31.381 "supported_io_types": { 00:18:31.381 "read": true, 00:18:31.381 "write": true, 00:18:31.381 "unmap": true, 00:18:31.381 "write_zeroes": 
true, 00:18:31.381 "flush": true, 00:18:31.381 "reset": true, 00:18:31.381 "compare": false, 00:18:31.381 "compare_and_write": false, 00:18:31.381 "abort": true, 00:18:31.381 "nvme_admin": false, 00:18:31.381 "nvme_io": false 00:18:31.381 }, 00:18:31.381 "memory_domains": [ 00:18:31.381 { 00:18:31.381 "dma_device_id": "system", 00:18:31.381 "dma_device_type": 1 00:18:31.381 }, 00:18:31.381 { 00:18:31.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:31.381 "dma_device_type": 2 00:18:31.381 } 00:18:31.381 ], 00:18:31.381 "driver_specific": {} 00:18:31.381 }' 00:18:31.381 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.639 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:31.639 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:31.639 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.639 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:31.639 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:31.639 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.639 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:31.896 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:31.896 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.896 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:31.896 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:31.896 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:31.896 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:31.896 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:32.153 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:32.154 "name": "BaseBdev2", 00:18:32.154 "aliases": [ 00:18:32.154 "d7408731-c1d2-4170-8196-df65c4a8e715" 00:18:32.154 ], 00:18:32.154 "product_name": "Malloc disk", 00:18:32.154 "block_size": 512, 00:18:32.154 "num_blocks": 65536, 00:18:32.154 "uuid": "d7408731-c1d2-4170-8196-df65c4a8e715", 00:18:32.154 "assigned_rate_limits": { 00:18:32.154 "rw_ios_per_sec": 0, 00:18:32.154 "rw_mbytes_per_sec": 0, 00:18:32.154 "r_mbytes_per_sec": 0, 00:18:32.154 "w_mbytes_per_sec": 0 00:18:32.154 }, 00:18:32.154 "claimed": true, 00:18:32.154 "claim_type": "exclusive_write", 00:18:32.154 "zoned": false, 00:18:32.154 "supported_io_types": { 00:18:32.154 "read": true, 00:18:32.154 "write": true, 00:18:32.154 "unmap": true, 00:18:32.154 "write_zeroes": true, 00:18:32.154 "flush": true, 00:18:32.154 "reset": true, 00:18:32.154 "compare": false, 00:18:32.154 "compare_and_write": false, 00:18:32.154 "abort": true, 00:18:32.154 "nvme_admin": false, 00:18:32.154 "nvme_io": false 00:18:32.154 }, 00:18:32.154 "memory_domains": [ 00:18:32.154 { 00:18:32.154 "dma_device_id": "system", 00:18:32.154 "dma_device_type": 1 00:18:32.154 }, 00:18:32.154 { 00:18:32.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:18:32.154 "dma_device_type": 2 00:18:32.154 } 00:18:32.154 ], 00:18:32.154 "driver_specific": {} 00:18:32.154 }' 00:18:32.154 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.154 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.154 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:32.154 12:00:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.412 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.412 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:32.412 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.412 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:32.412 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:32.412 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.412 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:32.670 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:32.670 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:32.670 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:32.670 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:32.927 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:32.927 "name": "BaseBdev3", 00:18:32.927 "aliases": [ 00:18:32.927 "54633cad-ad76-4186-8369-46890d1beb42" 00:18:32.927 ], 00:18:32.927 "product_name": "Malloc disk", 00:18:32.927 "block_size": 512, 00:18:32.927 "num_blocks": 65536, 00:18:32.927 "uuid": "54633cad-ad76-4186-8369-46890d1beb42", 00:18:32.927 "assigned_rate_limits": { 00:18:32.927 "rw_ios_per_sec": 0, 00:18:32.927 "rw_mbytes_per_sec": 0, 00:18:32.927 "r_mbytes_per_sec": 0, 00:18:32.927 "w_mbytes_per_sec": 0 00:18:32.927 }, 00:18:32.927 "claimed": true, 00:18:32.927 "claim_type": "exclusive_write", 00:18:32.927 "zoned": false, 00:18:32.927 "supported_io_types": { 00:18:32.927 "read": true, 00:18:32.927 "write": true, 00:18:32.927 "unmap": true, 00:18:32.927 "write_zeroes": true, 00:18:32.927 "flush": true, 00:18:32.927 "reset": true, 00:18:32.927 "compare": false, 00:18:32.927 "compare_and_write": false, 00:18:32.927 "abort": true, 00:18:32.927 "nvme_admin": false, 00:18:32.927 "nvme_io": false 00:18:32.927 }, 00:18:32.927 "memory_domains": [ 00:18:32.927 { 00:18:32.927 "dma_device_id": "system", 00:18:32.927 "dma_device_type": 1 00:18:32.927 }, 00:18:32.927 { 00:18:32.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.927 "dma_device_type": 2 00:18:32.927 } 00:18:32.927 ], 00:18:32.927 "driver_specific": {} 00:18:32.927 }' 00:18:32.927 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.927 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:32.928 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:32.928 
12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.928 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:32.928 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:32.928 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.186 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:33.186 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:33.186 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.186 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:33.186 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:33.186 12:00:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:33.444 [2024-07-21 12:00:32.247388] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:33.444 [2024-07-21 12:00:32.247438] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.444 [2024-07-21 12:00:32.247563] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.444 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.702 12:00:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:33.702 "name": "Existed_Raid", 00:18:33.702 "uuid": "ae6e3146-3ac8-49fe-8fe9-52acbd8b327a", 00:18:33.702 "strip_size_kb": 64, 00:18:33.702 "state": "offline", 00:18:33.702 "raid_level": "concat", 00:18:33.702 "superblock": true, 00:18:33.702 "num_base_bdevs": 3, 00:18:33.702 "num_base_bdevs_discovered": 2, 00:18:33.702 "num_base_bdevs_operational": 2, 00:18:33.702 "base_bdevs_list": [ 00:18:33.702 { 00:18:33.702 "name": null, 00:18:33.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.702 "is_configured": false, 00:18:33.702 "data_offset": 2048, 00:18:33.702 "data_size": 63488 00:18:33.702 }, 00:18:33.702 { 00:18:33.702 "name": "BaseBdev2", 00:18:33.702 "uuid": "d7408731-c1d2-4170-8196-df65c4a8e715", 00:18:33.702 "is_configured": true, 00:18:33.702 "data_offset": 2048, 00:18:33.702 "data_size": 63488 00:18:33.702 }, 00:18:33.702 { 00:18:33.702 "name": "BaseBdev3", 00:18:33.702 "uuid": "54633cad-ad76-4186-8369-46890d1beb42", 00:18:33.702 "is_configured": true, 00:18:33.702 "data_offset": 2048, 00:18:33.702 "data_size": 63488 00:18:33.702 } 00:18:33.702 ] 00:18:33.702 }' 00:18:33.702 12:00:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:33.702 12:00:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.637 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:18:34.637 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:34.637 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.637 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:34.637 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:34.637 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:34.637 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:34.894 [2024-07-21 12:00:33.725406] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:34.894 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:34.894 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:34.894 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.894 12:00:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:18:35.459 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:18:35.459 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:35.459 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:35.459 [2024-07-21 12:00:34.297382] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:35.459 [2024-07-21 12:00:34.297465] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:18:35.717 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:18:35.718 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:18:35.718 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.718 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:18:35.976 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:18:35.976 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:18:35.976 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:18:35.976 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:18:35.976 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:35.976 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:36.234 BaseBdev2 00:18:36.234 12:00:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:18:36.234 12:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:18:36.234 12:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:36.234 12:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:36.234 12:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:36.234 12:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:36.234 12:00:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:36.234 12:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:36.492 [ 00:18:36.492 { 00:18:36.492 "name": "BaseBdev2", 00:18:36.492 "aliases": [ 00:18:36.492 "a6760ff6-0eb4-4f0d-9f86-62757b2e6697" 00:18:36.492 ], 00:18:36.492 "product_name": "Malloc disk", 00:18:36.492 "block_size": 512, 00:18:36.492 "num_blocks": 65536, 00:18:36.492 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:36.492 "assigned_rate_limits": { 00:18:36.492 "rw_ios_per_sec": 0, 00:18:36.492 "rw_mbytes_per_sec": 0, 00:18:36.492 "r_mbytes_per_sec": 0, 00:18:36.492 "w_mbytes_per_sec": 0 00:18:36.492 }, 00:18:36.492 "claimed": false, 00:18:36.492 "zoned": false, 00:18:36.492 "supported_io_types": { 00:18:36.492 "read": true, 00:18:36.492 "write": true, 00:18:36.492 "unmap": true, 00:18:36.492 "write_zeroes": true, 00:18:36.492 "flush": true, 00:18:36.492 "reset": true, 00:18:36.492 "compare": false, 00:18:36.492 "compare_and_write": false, 00:18:36.492 "abort": true, 00:18:36.492 "nvme_admin": false, 00:18:36.492 "nvme_io": false 00:18:36.492 }, 00:18:36.492 "memory_domains": [ 00:18:36.492 { 00:18:36.492 "dma_device_id": "system", 00:18:36.492 "dma_device_type": 1 
00:18:36.492 }, 00:18:36.492 { 00:18:36.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:36.492 "dma_device_type": 2 00:18:36.492 } 00:18:36.492 ], 00:18:36.492 "driver_specific": {} 00:18:36.492 } 00:18:36.492 ] 00:18:36.492 12:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:36.492 12:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:36.492 12:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:36.492 12:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:36.750 BaseBdev3 00:18:36.750 12:00:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:18:36.750 12:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:18:36.750 12:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:36.750 12:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:36.750 12:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:36.750 12:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:36.750 12:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:37.008 12:00:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:37.267 [ 00:18:37.267 { 00:18:37.267 "name": "BaseBdev3", 00:18:37.267 "aliases": [ 00:18:37.267 "af1a609f-5108-4175-829d-856de6955c4e" 00:18:37.267 ], 00:18:37.267 "product_name": "Malloc disk", 00:18:37.267 "block_size": 512, 00:18:37.267 "num_blocks": 65536, 00:18:37.267 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:37.267 "assigned_rate_limits": { 00:18:37.267 "rw_ios_per_sec": 0, 00:18:37.267 "rw_mbytes_per_sec": 0, 00:18:37.267 "r_mbytes_per_sec": 0, 00:18:37.267 "w_mbytes_per_sec": 0 00:18:37.267 }, 00:18:37.267 "claimed": false, 00:18:37.267 "zoned": false, 00:18:37.267 "supported_io_types": { 00:18:37.267 "read": true, 00:18:37.267 "write": true, 00:18:37.267 "unmap": true, 00:18:37.267 "write_zeroes": true, 00:18:37.267 "flush": true, 00:18:37.267 "reset": true, 00:18:37.267 "compare": false, 00:18:37.267 "compare_and_write": false, 00:18:37.267 "abort": true, 00:18:37.267 "nvme_admin": false, 00:18:37.267 "nvme_io": false 00:18:37.267 }, 00:18:37.267 "memory_domains": [ 00:18:37.267 { 00:18:37.267 "dma_device_id": "system", 00:18:37.267 "dma_device_type": 1 00:18:37.267 }, 00:18:37.267 { 00:18:37.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:37.267 "dma_device_type": 2 00:18:37.267 } 00:18:37.267 ], 00:18:37.267 "driver_specific": {} 00:18:37.267 } 00:18:37.267 ] 00:18:37.267 12:00:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:37.267 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:18:37.267 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:18:37.267 12:00:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:18:37.525 [2024-07-21 12:00:36.366916] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:37.525 [2024-07-21 12:00:36.367064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:37.525 [2024-07-21 12:00:36.367142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:37.525 [2024-07-21 12:00:36.369346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:37.525 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:37.525 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:37.525 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:37.525 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:37.525 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:37.525 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:37.783 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:37.783 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:37.783 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:37.783 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:37.783 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:37.783 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.041 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:38.041 "name": "Existed_Raid", 00:18:38.041 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:38.041 "strip_size_kb": 64, 00:18:38.041 "state": "configuring", 00:18:38.041 "raid_level": "concat", 00:18:38.041 "superblock": true, 00:18:38.041 "num_base_bdevs": 3, 00:18:38.041 "num_base_bdevs_discovered": 2, 00:18:38.041 "num_base_bdevs_operational": 3, 00:18:38.041 "base_bdevs_list": [ 00:18:38.041 { 00:18:38.041 "name": "BaseBdev1", 00:18:38.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:38.041 "is_configured": false, 00:18:38.041 "data_offset": 0, 00:18:38.041 "data_size": 0 00:18:38.041 }, 00:18:38.041 { 00:18:38.041 "name": "BaseBdev2", 00:18:38.041 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:38.042 "is_configured": true, 00:18:38.042 "data_offset": 2048, 00:18:38.042 "data_size": 63488 00:18:38.042 }, 00:18:38.042 { 00:18:38.042 "name": "BaseBdev3", 00:18:38.042 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:38.042 "is_configured": true, 00:18:38.042 "data_offset": 2048, 00:18:38.042 "data_size": 63488 00:18:38.042 } 00:18:38.042 ] 00:18:38.042 }' 00:18:38.042 12:00:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:38.042 12:00:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:38.608 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:18:38.866 [2024-07-21 12:00:37.503894] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.866 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:39.125 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:39.125 "name": "Existed_Raid", 00:18:39.125 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:39.125 "strip_size_kb": 64, 00:18:39.125 "state": "configuring", 00:18:39.125 "raid_level": "concat", 00:18:39.125 "superblock": true, 00:18:39.125 "num_base_bdevs": 3, 00:18:39.125 "num_base_bdevs_discovered": 1, 00:18:39.125 "num_base_bdevs_operational": 3, 00:18:39.125 "base_bdevs_list": [ 00:18:39.125 { 00:18:39.125 "name": "BaseBdev1", 00:18:39.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.125 "is_configured": false, 00:18:39.125 "data_offset": 0, 00:18:39.125 "data_size": 0 00:18:39.125 }, 00:18:39.125 { 00:18:39.125 "name": null, 00:18:39.125 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:39.125 "is_configured": false, 00:18:39.125 "data_offset": 2048, 00:18:39.125 "data_size": 63488 00:18:39.125 }, 00:18:39.125 { 00:18:39.125 "name": "BaseBdev3", 00:18:39.125 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:39.125 "is_configured": true, 00:18:39.125 "data_offset": 2048, 00:18:39.125 "data_size": 63488 00:18:39.125 } 00:18:39.125 ] 00:18:39.125 }' 00:18:39.125 12:00:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:39.125 12:00:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:39.690 12:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:39.690 12:00:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:39.947 12:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:18:39.947 12:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:40.205 [2024-07-21 12:00:38.977302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.205 BaseBdev1 00:18:40.205 12:00:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:18:40.205 12:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:18:40.205 12:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:40.205 12:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:40.205 12:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:40.205 12:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:40.205 12:00:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:40.463 12:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:40.734 [ 00:18:40.734 { 00:18:40.734 "name": "BaseBdev1", 00:18:40.734 "aliases": [ 00:18:40.734 "6779b522-31fa-48c4-b191-c21426117a76" 00:18:40.734 ], 00:18:40.734 "product_name": "Malloc disk", 00:18:40.734 "block_size": 512, 00:18:40.734 "num_blocks": 65536, 00:18:40.734 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:40.734 "assigned_rate_limits": { 00:18:40.734 "rw_ios_per_sec": 0, 00:18:40.734 "rw_mbytes_per_sec": 0, 00:18:40.734 "r_mbytes_per_sec": 0, 00:18:40.734 "w_mbytes_per_sec": 0 00:18:40.734 }, 00:18:40.734 "claimed": true, 00:18:40.734 "claim_type": "exclusive_write", 00:18:40.734 "zoned": false, 00:18:40.734 "supported_io_types": { 00:18:40.734 "read": true, 00:18:40.734 "write": true, 00:18:40.734 "unmap": true, 00:18:40.734 "write_zeroes": true, 00:18:40.734 "flush": true, 00:18:40.734 "reset": true, 00:18:40.734 "compare": false, 00:18:40.734 "compare_and_write": false, 00:18:40.734 "abort": true, 00:18:40.734 "nvme_admin": false, 00:18:40.734 "nvme_io": false 00:18:40.734 }, 00:18:40.734 "memory_domains": [ 00:18:40.734 { 00:18:40.734 "dma_device_id": "system", 00:18:40.734 "dma_device_type": 1 00:18:40.734 }, 00:18:40.734 { 00:18:40.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.734 "dma_device_type": 2 00:18:40.734 } 00:18:40.734 ], 00:18:40.734 "driver_specific": {} 00:18:40.734 } 00:18:40.734 ] 00:18:40.734 12:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:40.734 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:40.734 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:40.734 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:40.735 12:00:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:40.735 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:40.735 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:40.735 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:40.735 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:40.735 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:40.735 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:40.735 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.735 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.014 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:41.014 "name": "Existed_Raid", 00:18:41.014 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:41.014 "strip_size_kb": 64, 00:18:41.014 "state": "configuring", 00:18:41.014 "raid_level": "concat", 00:18:41.014 "superblock": true, 00:18:41.014 "num_base_bdevs": 3, 00:18:41.014 "num_base_bdevs_discovered": 2, 00:18:41.014 "num_base_bdevs_operational": 3, 00:18:41.014 "base_bdevs_list": [ 00:18:41.014 { 00:18:41.014 "name": "BaseBdev1", 00:18:41.014 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:41.014 "is_configured": true, 00:18:41.014 "data_offset": 2048, 00:18:41.014 "data_size": 63488 00:18:41.014 }, 00:18:41.014 { 00:18:41.014 "name": null, 00:18:41.014 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:41.014 "is_configured": false, 00:18:41.014 "data_offset": 2048, 00:18:41.014 "data_size": 63488 00:18:41.014 }, 00:18:41.014 { 00:18:41.014 "name": "BaseBdev3", 00:18:41.014 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:41.014 "is_configured": true, 00:18:41.014 "data_offset": 2048, 00:18:41.014 "data_size": 63488 00:18:41.014 } 00:18:41.014 ] 00:18:41.014 }' 00:18:41.014 12:00:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:41.014 12:00:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:41.587 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:41.587 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:41.845 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:18:41.845 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:18:42.103 [2024-07-21 12:00:40.941898] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:42.103 12:00:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.103 12:00:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.670 12:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:42.670 "name": "Existed_Raid", 00:18:42.670 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:42.670 "strip_size_kb": 64, 00:18:42.670 "state": "configuring", 00:18:42.670 "raid_level": "concat", 00:18:42.670 "superblock": true, 00:18:42.670 "num_base_bdevs": 3, 00:18:42.670 "num_base_bdevs_discovered": 1, 00:18:42.670 "num_base_bdevs_operational": 3, 00:18:42.670 "base_bdevs_list": [ 00:18:42.670 { 00:18:42.670 "name": "BaseBdev1", 00:18:42.670 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:42.670 "is_configured": true, 00:18:42.670 "data_offset": 2048, 00:18:42.670 "data_size": 63488 00:18:42.670 }, 00:18:42.670 { 00:18:42.670 "name": null, 00:18:42.670 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:42.670 "is_configured": false, 00:18:42.670 "data_offset": 2048, 00:18:42.670 "data_size": 63488 00:18:42.670 }, 00:18:42.670 { 00:18:42.670 "name": null, 00:18:42.670 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:42.670 "is_configured": false, 00:18:42.670 "data_offset": 2048, 00:18:42.670 "data_size": 63488 00:18:42.670 } 00:18:42.670 ] 00:18:42.670 }' 00:18:42.670 12:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:42.670 12:00:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:43.236 12:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:43.236 12:00:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.495 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:18:43.495 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:43.753 [2024-07-21 12:00:42.366191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
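For reference, the remove/re-add cycle exercised at bdev_raid.sh@317-@322 can be reproduced against a standalone SPDK target using only the RPCs and jq filters seen in this run. This is a sketch, not part of the recorded log; it assumes an SPDK application is already listening on /var/tmp/spdk-raid.sock with Existed_Raid and BaseBdev3 present, and the rpc() helper is shorthand introduced here:

  # shorthand for the RPC client invoked throughout this test
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # detach the third base bdev from the configuring raid
  rpc bdev_raid_remove_base_bdev BaseBdev3
  rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured'   # expect false
  # re-attach it; the raid should still report state "configuring" until BaseBdev1 exists
  rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'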
00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:43.753 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.012 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:44.012 "name": "Existed_Raid", 00:18:44.012 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:44.012 "strip_size_kb": 64, 00:18:44.012 "state": "configuring", 00:18:44.012 "raid_level": "concat", 00:18:44.012 "superblock": true, 00:18:44.012 "num_base_bdevs": 3, 00:18:44.012 "num_base_bdevs_discovered": 2, 00:18:44.012 "num_base_bdevs_operational": 3, 00:18:44.012 "base_bdevs_list": [ 00:18:44.012 { 00:18:44.012 "name": "BaseBdev1", 00:18:44.012 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:44.012 "is_configured": true, 00:18:44.012 "data_offset": 2048, 00:18:44.012 "data_size": 63488 00:18:44.012 }, 00:18:44.012 { 00:18:44.012 "name": null, 00:18:44.012 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:44.012 "is_configured": false, 00:18:44.012 "data_offset": 2048, 00:18:44.012 "data_size": 63488 00:18:44.012 }, 00:18:44.012 { 00:18:44.012 "name": "BaseBdev3", 00:18:44.012 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:44.012 "is_configured": true, 00:18:44.012 "data_offset": 2048, 00:18:44.012 "data_size": 63488 00:18:44.012 } 00:18:44.012 ] 00:18:44.012 }' 00:18:44.012 12:00:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:44.012 12:00:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:44.576 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.576 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:44.834 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:18:44.834 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:45.092 [2024-07-21 12:00:43.786781] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:45.092 12:00:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:45.092 12:00:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.350 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:45.350 "name": "Existed_Raid", 00:18:45.350 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:45.350 "strip_size_kb": 64, 00:18:45.350 "state": "configuring", 00:18:45.350 "raid_level": "concat", 00:18:45.350 "superblock": true, 00:18:45.350 "num_base_bdevs": 3, 00:18:45.350 "num_base_bdevs_discovered": 1, 00:18:45.350 "num_base_bdevs_operational": 3, 00:18:45.350 "base_bdevs_list": [ 00:18:45.350 { 00:18:45.350 "name": null, 00:18:45.350 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:45.350 "is_configured": false, 00:18:45.350 "data_offset": 2048, 00:18:45.350 "data_size": 63488 00:18:45.350 }, 00:18:45.350 { 00:18:45.350 "name": null, 00:18:45.350 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:45.350 "is_configured": false, 00:18:45.350 "data_offset": 2048, 00:18:45.350 "data_size": 63488 00:18:45.350 }, 00:18:45.350 { 00:18:45.350 "name": "BaseBdev3", 00:18:45.350 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:45.350 "is_configured": true, 00:18:45.350 "data_offset": 2048, 00:18:45.350 "data_size": 63488 00:18:45.350 } 00:18:45.350 ] 00:18:45.350 }' 00:18:45.350 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:45.350 12:00:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:45.916 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:45.916 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.175 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:18:46.175 12:00:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 
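As a companion sketch (again not a step from the recorded run), the malloc-create and add-base-bdev RPCs used above can be combined to recreate a missing base bdev and hand it back to a configuring raid, assuming the same socket, the same 32/512 malloc geometry used earlier in this run, and the rpc() shorthand from the previous sketch:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # recreate the malloc bdev under the expected name (same 32/512 arguments as @302/@312)
  rpc bdev_malloc_create 32 512 -b BaseBdev2
  # let examine finish, mirroring what waitforbdev does in the harness
  rpc bdev_wait_for_examine
  rpc bdev_get_bdevs -b BaseBdev2 -t 2000 > /dev/null
  # hand it back to the raid and confirm it is claimed again
  rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2
  rpc bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[1].is_configured'   # expect true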
00:18:46.434 [2024-07-21 12:00:45.161628] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:46.434 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.692 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:46.692 "name": "Existed_Raid", 00:18:46.692 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:46.692 "strip_size_kb": 64, 00:18:46.692 "state": "configuring", 00:18:46.692 "raid_level": "concat", 00:18:46.692 "superblock": true, 00:18:46.692 "num_base_bdevs": 3, 00:18:46.692 "num_base_bdevs_discovered": 2, 00:18:46.692 "num_base_bdevs_operational": 3, 00:18:46.692 "base_bdevs_list": [ 00:18:46.692 { 00:18:46.692 "name": null, 00:18:46.692 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:46.692 "is_configured": false, 00:18:46.692 "data_offset": 2048, 00:18:46.692 "data_size": 63488 00:18:46.692 }, 00:18:46.692 { 00:18:46.692 "name": "BaseBdev2", 00:18:46.692 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:46.692 "is_configured": true, 00:18:46.692 "data_offset": 2048, 00:18:46.692 "data_size": 63488 00:18:46.692 }, 00:18:46.692 { 00:18:46.692 "name": "BaseBdev3", 00:18:46.692 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:46.692 "is_configured": true, 00:18:46.692 "data_offset": 2048, 00:18:46.692 "data_size": 63488 00:18:46.692 } 00:18:46.692 ] 00:18:46.692 }' 00:18:46.692 12:00:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:46.692 12:00:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:47.258 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.258 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:47.822 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:18:47.822 12:00:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.822 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:47.822 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 6779b522-31fa-48c4-b191-c21426117a76 00:18:48.080 [2024-07-21 12:00:46.899335] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:48.080 [2024-07-21 12:00:46.899584] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:48.080 [2024-07-21 12:00:46.899601] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:48.080 [2024-07-21 12:00:46.899703] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:18:48.080 [2024-07-21 12:00:46.900090] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:48.080 [2024-07-21 12:00:46.900118] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008780 00:18:48.080 [2024-07-21 12:00:46.900230] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.080 NewBaseBdev 00:18:48.080 12:00:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:18:48.080 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:18:48.080 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:18:48.080 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:18:48.080 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:18:48.080 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:18:48.080 12:00:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:48.337 12:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:48.595 [ 00:18:48.595 { 00:18:48.595 "name": "NewBaseBdev", 00:18:48.595 "aliases": [ 00:18:48.595 "6779b522-31fa-48c4-b191-c21426117a76" 00:18:48.595 ], 00:18:48.595 "product_name": "Malloc disk", 00:18:48.595 "block_size": 512, 00:18:48.595 "num_blocks": 65536, 00:18:48.595 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:48.595 "assigned_rate_limits": { 00:18:48.595 "rw_ios_per_sec": 0, 00:18:48.595 "rw_mbytes_per_sec": 0, 00:18:48.595 "r_mbytes_per_sec": 0, 00:18:48.595 "w_mbytes_per_sec": 0 00:18:48.595 }, 00:18:48.595 "claimed": true, 00:18:48.595 "claim_type": "exclusive_write", 00:18:48.595 "zoned": false, 00:18:48.595 "supported_io_types": { 00:18:48.595 "read": true, 00:18:48.595 "write": true, 00:18:48.595 "unmap": true, 00:18:48.595 "write_zeroes": true, 00:18:48.595 "flush": true, 00:18:48.595 "reset": true, 00:18:48.595 "compare": false, 00:18:48.595 "compare_and_write": false, 00:18:48.595 "abort": true, 00:18:48.595 "nvme_admin": false, 00:18:48.595 "nvme_io": false 00:18:48.595 }, 00:18:48.595 
"memory_domains": [ 00:18:48.595 { 00:18:48.595 "dma_device_id": "system", 00:18:48.595 "dma_device_type": 1 00:18:48.595 }, 00:18:48.595 { 00:18:48.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:48.595 "dma_device_type": 2 00:18:48.595 } 00:18:48.595 ], 00:18:48.595 "driver_specific": {} 00:18:48.595 } 00:18:48.595 ] 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:48.595 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:48.596 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.596 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:48.864 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:48.864 "name": "Existed_Raid", 00:18:48.864 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:48.864 "strip_size_kb": 64, 00:18:48.864 "state": "online", 00:18:48.864 "raid_level": "concat", 00:18:48.864 "superblock": true, 00:18:48.864 "num_base_bdevs": 3, 00:18:48.864 "num_base_bdevs_discovered": 3, 00:18:48.864 "num_base_bdevs_operational": 3, 00:18:48.864 "base_bdevs_list": [ 00:18:48.864 { 00:18:48.864 "name": "NewBaseBdev", 00:18:48.864 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:48.864 "is_configured": true, 00:18:48.864 "data_offset": 2048, 00:18:48.864 "data_size": 63488 00:18:48.864 }, 00:18:48.864 { 00:18:48.864 "name": "BaseBdev2", 00:18:48.864 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:48.864 "is_configured": true, 00:18:48.864 "data_offset": 2048, 00:18:48.864 "data_size": 63488 00:18:48.864 }, 00:18:48.864 { 00:18:48.864 "name": "BaseBdev3", 00:18:48.864 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:48.864 "is_configured": true, 00:18:48.864 "data_offset": 2048, 00:18:48.864 "data_size": 63488 00:18:48.864 } 00:18:48.864 ] 00:18:48.864 }' 00:18:48.864 12:00:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:48.864 12:00:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:49.432 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:18:49.432 12:00:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:18:49.432 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:49.432 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:49.432 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:49.432 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:18:49.432 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:18:49.432 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:49.690 [2024-07-21 12:00:48.519582] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.690 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:49.690 "name": "Existed_Raid", 00:18:49.690 "aliases": [ 00:18:49.690 "63907ba3-2ecb-41a0-b885-c0b46bf5daff" 00:18:49.690 ], 00:18:49.690 "product_name": "Raid Volume", 00:18:49.690 "block_size": 512, 00:18:49.690 "num_blocks": 190464, 00:18:49.691 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:49.691 "assigned_rate_limits": { 00:18:49.691 "rw_ios_per_sec": 0, 00:18:49.691 "rw_mbytes_per_sec": 0, 00:18:49.691 "r_mbytes_per_sec": 0, 00:18:49.691 "w_mbytes_per_sec": 0 00:18:49.691 }, 00:18:49.691 "claimed": false, 00:18:49.691 "zoned": false, 00:18:49.691 "supported_io_types": { 00:18:49.691 "read": true, 00:18:49.691 "write": true, 00:18:49.691 "unmap": true, 00:18:49.691 "write_zeroes": true, 00:18:49.691 "flush": true, 00:18:49.691 "reset": true, 00:18:49.691 "compare": false, 00:18:49.691 "compare_and_write": false, 00:18:49.691 "abort": false, 00:18:49.691 "nvme_admin": false, 00:18:49.691 "nvme_io": false 00:18:49.691 }, 00:18:49.691 "memory_domains": [ 00:18:49.691 { 00:18:49.691 "dma_device_id": "system", 00:18:49.691 "dma_device_type": 1 00:18:49.691 }, 00:18:49.691 { 00:18:49.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.691 "dma_device_type": 2 00:18:49.691 }, 00:18:49.691 { 00:18:49.691 "dma_device_id": "system", 00:18:49.691 "dma_device_type": 1 00:18:49.691 }, 00:18:49.691 { 00:18:49.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.691 "dma_device_type": 2 00:18:49.691 }, 00:18:49.691 { 00:18:49.691 "dma_device_id": "system", 00:18:49.691 "dma_device_type": 1 00:18:49.691 }, 00:18:49.691 { 00:18:49.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.691 "dma_device_type": 2 00:18:49.691 } 00:18:49.691 ], 00:18:49.691 "driver_specific": { 00:18:49.691 "raid": { 00:18:49.691 "uuid": "63907ba3-2ecb-41a0-b885-c0b46bf5daff", 00:18:49.691 "strip_size_kb": 64, 00:18:49.691 "state": "online", 00:18:49.691 "raid_level": "concat", 00:18:49.691 "superblock": true, 00:18:49.691 "num_base_bdevs": 3, 00:18:49.691 "num_base_bdevs_discovered": 3, 00:18:49.691 "num_base_bdevs_operational": 3, 00:18:49.691 "base_bdevs_list": [ 00:18:49.691 { 00:18:49.691 "name": "NewBaseBdev", 00:18:49.691 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:49.691 "is_configured": true, 00:18:49.691 "data_offset": 2048, 00:18:49.691 "data_size": 63488 00:18:49.691 }, 00:18:49.691 { 00:18:49.691 "name": "BaseBdev2", 00:18:49.691 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:49.691 "is_configured": true, 00:18:49.691 "data_offset": 2048, 00:18:49.691 "data_size": 63488 
00:18:49.691 }, 00:18:49.691 { 00:18:49.691 "name": "BaseBdev3", 00:18:49.691 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:49.691 "is_configured": true, 00:18:49.691 "data_offset": 2048, 00:18:49.691 "data_size": 63488 00:18:49.691 } 00:18:49.691 ] 00:18:49.691 } 00:18:49.691 } 00:18:49.691 }' 00:18:49.691 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:49.949 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:18:49.949 BaseBdev2 00:18:49.949 BaseBdev3' 00:18:49.949 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:49.949 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:18:49.949 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:50.208 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:50.208 "name": "NewBaseBdev", 00:18:50.208 "aliases": [ 00:18:50.208 "6779b522-31fa-48c4-b191-c21426117a76" 00:18:50.208 ], 00:18:50.208 "product_name": "Malloc disk", 00:18:50.208 "block_size": 512, 00:18:50.208 "num_blocks": 65536, 00:18:50.208 "uuid": "6779b522-31fa-48c4-b191-c21426117a76", 00:18:50.208 "assigned_rate_limits": { 00:18:50.208 "rw_ios_per_sec": 0, 00:18:50.208 "rw_mbytes_per_sec": 0, 00:18:50.208 "r_mbytes_per_sec": 0, 00:18:50.208 "w_mbytes_per_sec": 0 00:18:50.208 }, 00:18:50.208 "claimed": true, 00:18:50.208 "claim_type": "exclusive_write", 00:18:50.208 "zoned": false, 00:18:50.208 "supported_io_types": { 00:18:50.208 "read": true, 00:18:50.208 "write": true, 00:18:50.208 "unmap": true, 00:18:50.208 "write_zeroes": true, 00:18:50.208 "flush": true, 00:18:50.208 "reset": true, 00:18:50.208 "compare": false, 00:18:50.208 "compare_and_write": false, 00:18:50.208 "abort": true, 00:18:50.208 "nvme_admin": false, 00:18:50.208 "nvme_io": false 00:18:50.208 }, 00:18:50.208 "memory_domains": [ 00:18:50.208 { 00:18:50.208 "dma_device_id": "system", 00:18:50.208 "dma_device_type": 1 00:18:50.208 }, 00:18:50.208 { 00:18:50.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.208 "dma_device_type": 2 00:18:50.208 } 00:18:50.208 ], 00:18:50.208 "driver_specific": {} 00:18:50.208 }' 00:18:50.208 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.208 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.208 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:50.208 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.208 12:00:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.208 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:50.208 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.467 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.467 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:50.467 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:50.467 
12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:50.467 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:50.467 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:50.467 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:18:50.467 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:50.724 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:50.724 "name": "BaseBdev2", 00:18:50.724 "aliases": [ 00:18:50.724 "a6760ff6-0eb4-4f0d-9f86-62757b2e6697" 00:18:50.724 ], 00:18:50.724 "product_name": "Malloc disk", 00:18:50.724 "block_size": 512, 00:18:50.724 "num_blocks": 65536, 00:18:50.724 "uuid": "a6760ff6-0eb4-4f0d-9f86-62757b2e6697", 00:18:50.724 "assigned_rate_limits": { 00:18:50.724 "rw_ios_per_sec": 0, 00:18:50.724 "rw_mbytes_per_sec": 0, 00:18:50.724 "r_mbytes_per_sec": 0, 00:18:50.724 "w_mbytes_per_sec": 0 00:18:50.724 }, 00:18:50.724 "claimed": true, 00:18:50.724 "claim_type": "exclusive_write", 00:18:50.724 "zoned": false, 00:18:50.724 "supported_io_types": { 00:18:50.724 "read": true, 00:18:50.724 "write": true, 00:18:50.724 "unmap": true, 00:18:50.724 "write_zeroes": true, 00:18:50.724 "flush": true, 00:18:50.724 "reset": true, 00:18:50.724 "compare": false, 00:18:50.724 "compare_and_write": false, 00:18:50.724 "abort": true, 00:18:50.724 "nvme_admin": false, 00:18:50.724 "nvme_io": false 00:18:50.724 }, 00:18:50.724 "memory_domains": [ 00:18:50.724 { 00:18:50.724 "dma_device_id": "system", 00:18:50.724 "dma_device_type": 1 00:18:50.724 }, 00:18:50.724 { 00:18:50.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:50.724 "dma_device_type": 2 00:18:50.724 } 00:18:50.724 ], 00:18:50.724 "driver_specific": {} 00:18:50.724 }' 00:18:50.724 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.981 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:50.981 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:50.981 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.981 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:50.981 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:50.981 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.982 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:50.982 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.239 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.239 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.239 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.239 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:51.239 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:18:51.239 12:00:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:51.497 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:51.498 "name": "BaseBdev3", 00:18:51.498 "aliases": [ 00:18:51.498 "af1a609f-5108-4175-829d-856de6955c4e" 00:18:51.498 ], 00:18:51.498 "product_name": "Malloc disk", 00:18:51.498 "block_size": 512, 00:18:51.498 "num_blocks": 65536, 00:18:51.498 "uuid": "af1a609f-5108-4175-829d-856de6955c4e", 00:18:51.498 "assigned_rate_limits": { 00:18:51.498 "rw_ios_per_sec": 0, 00:18:51.498 "rw_mbytes_per_sec": 0, 00:18:51.498 "r_mbytes_per_sec": 0, 00:18:51.498 "w_mbytes_per_sec": 0 00:18:51.498 }, 00:18:51.498 "claimed": true, 00:18:51.498 "claim_type": "exclusive_write", 00:18:51.498 "zoned": false, 00:18:51.498 "supported_io_types": { 00:18:51.498 "read": true, 00:18:51.498 "write": true, 00:18:51.498 "unmap": true, 00:18:51.498 "write_zeroes": true, 00:18:51.498 "flush": true, 00:18:51.498 "reset": true, 00:18:51.498 "compare": false, 00:18:51.498 "compare_and_write": false, 00:18:51.498 "abort": true, 00:18:51.498 "nvme_admin": false, 00:18:51.498 "nvme_io": false 00:18:51.498 }, 00:18:51.498 "memory_domains": [ 00:18:51.498 { 00:18:51.498 "dma_device_id": "system", 00:18:51.498 "dma_device_type": 1 00:18:51.498 }, 00:18:51.498 { 00:18:51.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:51.498 "dma_device_type": 2 00:18:51.498 } 00:18:51.498 ], 00:18:51.498 "driver_specific": {} 00:18:51.498 }' 00:18:51.498 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.498 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:51.498 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:51.498 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.498 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:51.755 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:51.755 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.755 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:51.755 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:51.755 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.755 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:51.755 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:51.755 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:52.013 [2024-07-21 12:00:50.847897] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:52.013 [2024-07-21 12:00:50.847946] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:52.013 [2024-07-21 12:00:50.848071] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.013 [2024-07-21 12:00:50.848142] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:18:52.013 [2024-07-21 12:00:50.848155] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name Existed_Raid, state offline 00:18:52.013 12:00:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 139649 00:18:52.013 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 139649 ']' 00:18:52.013 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 139649 00:18:52.013 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:18:52.013 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:52.013 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 139649 00:18:52.270 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:52.270 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:52.270 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 139649' 00:18:52.270 killing process with pid 139649 00:18:52.271 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 139649 00:18:52.271 [2024-07-21 12:00:50.891417] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:52.271 12:00:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 139649 00:18:52.271 [2024-07-21 12:00:50.920222] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:52.529 12:00:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:18:52.529 00:18:52.529 real 0m30.478s 00:18:52.529 user 0m57.998s 00:18:52.529 sys 0m3.591s 00:18:52.529 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:52.529 12:00:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:52.529 ************************************ 00:18:52.529 END TEST raid_state_function_test_sb 00:18:52.529 ************************************ 00:18:52.529 12:00:51 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:18:52.529 12:00:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:18:52.529 12:00:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:52.529 12:00:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.529 ************************************ 00:18:52.529 START TEST raid_superblock_test 00:18:52.529 ************************************ 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 3 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:18:52.529 
12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=140640 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 140640 /var/tmp/spdk-raid.sock 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 140640 ']' 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:52.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:52.529 12:00:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.529 [2024-07-21 12:00:51.293294] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:18:52.529 [2024-07-21 12:00:51.293580] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140640 ] 00:18:52.788 [2024-07-21 12:00:51.465452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.788 [2024-07-21 12:00:51.559973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.788 [2024-07-21 12:00:51.618473] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:53.723 malloc1 00:18:53.723 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:53.992 [2024-07-21 12:00:52.690207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:53.992 [2024-07-21 12:00:52.690381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.992 [2024-07-21 12:00:52.690442] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:18:53.992 [2024-07-21 12:00:52.690498] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.992 [2024-07-21 12:00:52.693249] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.992 [2024-07-21 12:00:52.693338] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:53.992 pt1 00:18:53.992 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:53.992 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:53.992 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:18:53.992 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:18:53.992 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:53.992 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:18:53.992 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:53.992 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:53.992 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:54.307 malloc2 00:18:54.307 12:00:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:54.580 [2024-07-21 12:00:53.233030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:54.580 [2024-07-21 12:00:53.233171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.580 [2024-07-21 12:00:53.233240] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:18:54.580 [2024-07-21 12:00:53.233280] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.580 [2024-07-21 12:00:53.236120] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.580 [2024-07-21 12:00:53.236179] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:54.580 pt2 00:18:54.580 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:54.580 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:54.580 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:18:54.580 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:18:54.580 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:54.580 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:54.580 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:18:54.580 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:54.580 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:54.838 malloc3 00:18:54.838 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:55.096 [2024-07-21 12:00:53.768636] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:55.096 [2024-07-21 12:00:53.768775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.096 [2024-07-21 12:00:53.768832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:18:55.096 [2024-07-21 12:00:53.768890] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.096 [2024-07-21 12:00:53.771535] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.096 [2024-07-21 12:00:53.771597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:55.096 pt3 00:18:55.096 12:00:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:18:55.096 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:18:55.096 12:00:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:18:55.354 [2024-07-21 12:00:54.036725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:55.354 [2024-07-21 12:00:54.039113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.354 [2024-07-21 12:00:54.039208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:55.354 [2024-07-21 12:00:54.039484] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:18:55.354 [2024-07-21 12:00:54.039511] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:55.354 [2024-07-21 12:00:54.039710] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:18:55.354 [2024-07-21 12:00:54.040180] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:18:55.355 [2024-07-21 12:00:54.040218] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:18:55.355 [2024-07-21 12:00:54.040389] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.355 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.612 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:55.612 "name": "raid_bdev1", 00:18:55.612 "uuid": "489c2775-e151-441b-8325-17eb52728351", 00:18:55.612 "strip_size_kb": 64, 00:18:55.612 "state": "online", 00:18:55.612 "raid_level": "concat", 00:18:55.612 "superblock": true, 00:18:55.612 "num_base_bdevs": 3, 00:18:55.612 "num_base_bdevs_discovered": 3, 00:18:55.612 "num_base_bdevs_operational": 3, 00:18:55.612 "base_bdevs_list": [ 00:18:55.612 { 00:18:55.612 "name": "pt1", 00:18:55.612 "uuid": "b96e080e-8217-5618-8a51-d0fa6cedc963", 00:18:55.612 
"is_configured": true, 00:18:55.612 "data_offset": 2048, 00:18:55.612 "data_size": 63488 00:18:55.612 }, 00:18:55.612 { 00:18:55.612 "name": "pt2", 00:18:55.612 "uuid": "8c027b83-aa5c-5bfb-aef1-7d6db159580d", 00:18:55.612 "is_configured": true, 00:18:55.612 "data_offset": 2048, 00:18:55.612 "data_size": 63488 00:18:55.613 }, 00:18:55.613 { 00:18:55.613 "name": "pt3", 00:18:55.613 "uuid": "d7065454-ae57-5d51-ad99-629eb697d198", 00:18:55.613 "is_configured": true, 00:18:55.613 "data_offset": 2048, 00:18:55.613 "data_size": 63488 00:18:55.613 } 00:18:55.613 ] 00:18:55.613 }' 00:18:55.613 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:55.613 12:00:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.178 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:18:56.178 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:56.178 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:56.178 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:56.178 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:56.178 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:18:56.178 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:56.178 12:00:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:56.435 [2024-07-21 12:00:55.145219] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.435 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:56.435 "name": "raid_bdev1", 00:18:56.435 "aliases": [ 00:18:56.435 "489c2775-e151-441b-8325-17eb52728351" 00:18:56.435 ], 00:18:56.435 "product_name": "Raid Volume", 00:18:56.435 "block_size": 512, 00:18:56.435 "num_blocks": 190464, 00:18:56.435 "uuid": "489c2775-e151-441b-8325-17eb52728351", 00:18:56.435 "assigned_rate_limits": { 00:18:56.435 "rw_ios_per_sec": 0, 00:18:56.435 "rw_mbytes_per_sec": 0, 00:18:56.435 "r_mbytes_per_sec": 0, 00:18:56.435 "w_mbytes_per_sec": 0 00:18:56.435 }, 00:18:56.435 "claimed": false, 00:18:56.435 "zoned": false, 00:18:56.435 "supported_io_types": { 00:18:56.435 "read": true, 00:18:56.435 "write": true, 00:18:56.435 "unmap": true, 00:18:56.435 "write_zeroes": true, 00:18:56.435 "flush": true, 00:18:56.435 "reset": true, 00:18:56.435 "compare": false, 00:18:56.435 "compare_and_write": false, 00:18:56.435 "abort": false, 00:18:56.435 "nvme_admin": false, 00:18:56.435 "nvme_io": false 00:18:56.435 }, 00:18:56.435 "memory_domains": [ 00:18:56.435 { 00:18:56.435 "dma_device_id": "system", 00:18:56.435 "dma_device_type": 1 00:18:56.435 }, 00:18:56.435 { 00:18:56.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.435 "dma_device_type": 2 00:18:56.435 }, 00:18:56.435 { 00:18:56.435 "dma_device_id": "system", 00:18:56.435 "dma_device_type": 1 00:18:56.435 }, 00:18:56.435 { 00:18:56.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.435 "dma_device_type": 2 00:18:56.435 }, 00:18:56.435 { 00:18:56.435 "dma_device_id": "system", 00:18:56.435 "dma_device_type": 1 00:18:56.435 }, 00:18:56.435 { 00:18:56.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.435 "dma_device_type": 
2 00:18:56.435 } 00:18:56.435 ], 00:18:56.435 "driver_specific": { 00:18:56.435 "raid": { 00:18:56.435 "uuid": "489c2775-e151-441b-8325-17eb52728351", 00:18:56.435 "strip_size_kb": 64, 00:18:56.435 "state": "online", 00:18:56.435 "raid_level": "concat", 00:18:56.435 "superblock": true, 00:18:56.435 "num_base_bdevs": 3, 00:18:56.435 "num_base_bdevs_discovered": 3, 00:18:56.435 "num_base_bdevs_operational": 3, 00:18:56.435 "base_bdevs_list": [ 00:18:56.435 { 00:18:56.435 "name": "pt1", 00:18:56.435 "uuid": "b96e080e-8217-5618-8a51-d0fa6cedc963", 00:18:56.435 "is_configured": true, 00:18:56.435 "data_offset": 2048, 00:18:56.435 "data_size": 63488 00:18:56.435 }, 00:18:56.435 { 00:18:56.435 "name": "pt2", 00:18:56.435 "uuid": "8c027b83-aa5c-5bfb-aef1-7d6db159580d", 00:18:56.435 "is_configured": true, 00:18:56.435 "data_offset": 2048, 00:18:56.435 "data_size": 63488 00:18:56.435 }, 00:18:56.435 { 00:18:56.435 "name": "pt3", 00:18:56.435 "uuid": "d7065454-ae57-5d51-ad99-629eb697d198", 00:18:56.435 "is_configured": true, 00:18:56.435 "data_offset": 2048, 00:18:56.435 "data_size": 63488 00:18:56.435 } 00:18:56.435 ] 00:18:56.435 } 00:18:56.436 } 00:18:56.436 }' 00:18:56.436 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:56.436 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:56.436 pt2 00:18:56.436 pt3' 00:18:56.436 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:56.436 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:56.436 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:56.694 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:56.694 "name": "pt1", 00:18:56.694 "aliases": [ 00:18:56.694 "b96e080e-8217-5618-8a51-d0fa6cedc963" 00:18:56.694 ], 00:18:56.694 "product_name": "passthru", 00:18:56.694 "block_size": 512, 00:18:56.694 "num_blocks": 65536, 00:18:56.694 "uuid": "b96e080e-8217-5618-8a51-d0fa6cedc963", 00:18:56.694 "assigned_rate_limits": { 00:18:56.694 "rw_ios_per_sec": 0, 00:18:56.694 "rw_mbytes_per_sec": 0, 00:18:56.694 "r_mbytes_per_sec": 0, 00:18:56.694 "w_mbytes_per_sec": 0 00:18:56.694 }, 00:18:56.694 "claimed": true, 00:18:56.694 "claim_type": "exclusive_write", 00:18:56.694 "zoned": false, 00:18:56.694 "supported_io_types": { 00:18:56.694 "read": true, 00:18:56.694 "write": true, 00:18:56.694 "unmap": true, 00:18:56.694 "write_zeroes": true, 00:18:56.694 "flush": true, 00:18:56.694 "reset": true, 00:18:56.694 "compare": false, 00:18:56.694 "compare_and_write": false, 00:18:56.694 "abort": true, 00:18:56.694 "nvme_admin": false, 00:18:56.694 "nvme_io": false 00:18:56.694 }, 00:18:56.694 "memory_domains": [ 00:18:56.694 { 00:18:56.694 "dma_device_id": "system", 00:18:56.694 "dma_device_type": 1 00:18:56.694 }, 00:18:56.694 { 00:18:56.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:56.694 "dma_device_type": 2 00:18:56.694 } 00:18:56.694 ], 00:18:56.694 "driver_specific": { 00:18:56.694 "passthru": { 00:18:56.694 "name": "pt1", 00:18:56.694 "base_bdev_name": "malloc1" 00:18:56.694 } 00:18:56.694 } 00:18:56.694 }' 00:18:56.694 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.694 12:00:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:56.694 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:56.694 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.952 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:56.952 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:56.952 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.952 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:56.952 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:56.952 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:56.952 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.210 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:57.210 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:57.210 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:57.210 12:00:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:57.467 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:57.467 "name": "pt2", 00:18:57.467 "aliases": [ 00:18:57.467 "8c027b83-aa5c-5bfb-aef1-7d6db159580d" 00:18:57.467 ], 00:18:57.467 "product_name": "passthru", 00:18:57.467 "block_size": 512, 00:18:57.467 "num_blocks": 65536, 00:18:57.467 "uuid": "8c027b83-aa5c-5bfb-aef1-7d6db159580d", 00:18:57.467 "assigned_rate_limits": { 00:18:57.467 "rw_ios_per_sec": 0, 00:18:57.467 "rw_mbytes_per_sec": 0, 00:18:57.467 "r_mbytes_per_sec": 0, 00:18:57.467 "w_mbytes_per_sec": 0 00:18:57.467 }, 00:18:57.467 "claimed": true, 00:18:57.467 "claim_type": "exclusive_write", 00:18:57.467 "zoned": false, 00:18:57.467 "supported_io_types": { 00:18:57.467 "read": true, 00:18:57.467 "write": true, 00:18:57.467 "unmap": true, 00:18:57.467 "write_zeroes": true, 00:18:57.467 "flush": true, 00:18:57.467 "reset": true, 00:18:57.467 "compare": false, 00:18:57.467 "compare_and_write": false, 00:18:57.467 "abort": true, 00:18:57.467 "nvme_admin": false, 00:18:57.467 "nvme_io": false 00:18:57.467 }, 00:18:57.467 "memory_domains": [ 00:18:57.467 { 00:18:57.467 "dma_device_id": "system", 00:18:57.467 "dma_device_type": 1 00:18:57.467 }, 00:18:57.467 { 00:18:57.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.467 "dma_device_type": 2 00:18:57.467 } 00:18:57.467 ], 00:18:57.467 "driver_specific": { 00:18:57.467 "passthru": { 00:18:57.467 "name": "pt2", 00:18:57.467 "base_bdev_name": "malloc2" 00:18:57.467 } 00:18:57.467 } 00:18:57.467 }' 00:18:57.467 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.467 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.467 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:57.467 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.467 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:57.467 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:57.467 12:00:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:57.467 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:57.725 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:57.725 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.725 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:57.725 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:57.725 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:57.725 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:18:57.725 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:57.982 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:57.982 "name": "pt3", 00:18:57.982 "aliases": [ 00:18:57.982 "d7065454-ae57-5d51-ad99-629eb697d198" 00:18:57.982 ], 00:18:57.982 "product_name": "passthru", 00:18:57.982 "block_size": 512, 00:18:57.982 "num_blocks": 65536, 00:18:57.982 "uuid": "d7065454-ae57-5d51-ad99-629eb697d198", 00:18:57.982 "assigned_rate_limits": { 00:18:57.982 "rw_ios_per_sec": 0, 00:18:57.982 "rw_mbytes_per_sec": 0, 00:18:57.982 "r_mbytes_per_sec": 0, 00:18:57.982 "w_mbytes_per_sec": 0 00:18:57.982 }, 00:18:57.982 "claimed": true, 00:18:57.982 "claim_type": "exclusive_write", 00:18:57.982 "zoned": false, 00:18:57.982 "supported_io_types": { 00:18:57.982 "read": true, 00:18:57.982 "write": true, 00:18:57.982 "unmap": true, 00:18:57.982 "write_zeroes": true, 00:18:57.982 "flush": true, 00:18:57.982 "reset": true, 00:18:57.982 "compare": false, 00:18:57.982 "compare_and_write": false, 00:18:57.982 "abort": true, 00:18:57.982 "nvme_admin": false, 00:18:57.982 "nvme_io": false 00:18:57.982 }, 00:18:57.982 "memory_domains": [ 00:18:57.982 { 00:18:57.982 "dma_device_id": "system", 00:18:57.982 "dma_device_type": 1 00:18:57.982 }, 00:18:57.982 { 00:18:57.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:57.982 "dma_device_type": 2 00:18:57.982 } 00:18:57.982 ], 00:18:57.982 "driver_specific": { 00:18:57.982 "passthru": { 00:18:57.982 "name": "pt3", 00:18:57.982 "base_bdev_name": "malloc3" 00:18:57.982 } 00:18:57.982 } 00:18:57.982 }' 00:18:57.982 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.982 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:57.982 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:18:57.982 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.241 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:58.241 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:18:58.241 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.241 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:58.241 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:18:58.241 12:00:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:58.241 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:18:58.241 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:18:58.241 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:58.241 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:58.499 [2024-07-21 12:00:57.357612] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:58.758 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=489c2775-e151-441b-8325-17eb52728351 00:18:58.758 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 489c2775-e151-441b-8325-17eb52728351 ']' 00:18:58.758 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:58.758 [2024-07-21 12:00:57.577426] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:58.758 [2024-07-21 12:00:57.577469] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:58.758 [2024-07-21 12:00:57.577599] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:58.758 [2024-07-21 12:00:57.577690] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:58.758 [2024-07-21 12:00:57.577704] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:18:58.758 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.758 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:59.017 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:59.017 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:59.017 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.017 12:00:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:59.276 12:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.276 12:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:59.535 12:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:59.535 12:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:59.793 12:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:59.793 12:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:00.053 12:00:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:19:00.312 [2024-07-21 12:00:59.125701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:00.312 [2024-07-21 12:00:59.128034] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:00.312 [2024-07-21 12:00:59.128135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:19:00.312 [2024-07-21 12:00:59.128201] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:00.312 [2024-07-21 12:00:59.128297] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:00.312 [2024-07-21 12:00:59.128339] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:19:00.312 [2024-07-21 12:00:59.128394] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.312 [2024-07-21 12:00:59.128407] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:19:00.312 request: 00:19:00.312 { 00:19:00.312 "name": "raid_bdev1", 00:19:00.312 "raid_level": "concat", 00:19:00.312 "base_bdevs": [ 00:19:00.312 "malloc1", 00:19:00.312 "malloc2", 00:19:00.312 "malloc3" 00:19:00.312 ], 00:19:00.312 "superblock": false, 00:19:00.312 "strip_size_kb": 64, 00:19:00.312 "method": "bdev_raid_create", 00:19:00.312 "req_id": 1 00:19:00.312 } 00:19:00.312 Got JSON-RPC error response 00:19:00.312 response: 00:19:00.312 { 00:19:00.312 "code": -17, 00:19:00.312 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:00.312 } 00:19:00.312 12:00:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # es=1 00:19:00.312 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:00.312 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:00.312 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:00.312 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.312 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:19:00.571 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:19:00.571 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:19:00.571 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:00.830 [2024-07-21 12:00:59.637709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:00.830 [2024-07-21 12:00:59.637821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.830 [2024-07-21 12:00:59.637865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:00.830 [2024-07-21 12:00:59.637889] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.830 [2024-07-21 12:00:59.640474] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.830 [2024-07-21 12:00:59.640543] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:00.830 [2024-07-21 12:00:59.640658] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:00.830 [2024-07-21 12:00:59.640727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:00.830 pt1 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.830 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.088 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:19:01.088 "name": "raid_bdev1", 00:19:01.088 "uuid": "489c2775-e151-441b-8325-17eb52728351", 00:19:01.088 "strip_size_kb": 64, 00:19:01.088 "state": "configuring", 00:19:01.088 "raid_level": "concat", 00:19:01.088 "superblock": true, 00:19:01.088 "num_base_bdevs": 3, 00:19:01.088 "num_base_bdevs_discovered": 1, 00:19:01.088 "num_base_bdevs_operational": 3, 00:19:01.088 "base_bdevs_list": [ 00:19:01.088 { 00:19:01.088 "name": "pt1", 00:19:01.088 "uuid": "b96e080e-8217-5618-8a51-d0fa6cedc963", 00:19:01.088 "is_configured": true, 00:19:01.088 "data_offset": 2048, 00:19:01.088 "data_size": 63488 00:19:01.088 }, 00:19:01.088 { 00:19:01.089 "name": null, 00:19:01.089 "uuid": "8c027b83-aa5c-5bfb-aef1-7d6db159580d", 00:19:01.089 "is_configured": false, 00:19:01.089 "data_offset": 2048, 00:19:01.089 "data_size": 63488 00:19:01.089 }, 00:19:01.089 { 00:19:01.089 "name": null, 00:19:01.089 "uuid": "d7065454-ae57-5d51-ad99-629eb697d198", 00:19:01.089 "is_configured": false, 00:19:01.089 "data_offset": 2048, 00:19:01.089 "data_size": 63488 00:19:01.089 } 00:19:01.089 ] 00:19:01.089 }' 00:19:01.089 12:00:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:01.089 12:00:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.656 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:19:01.656 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:01.915 [2024-07-21 12:01:00.733984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:01.915 [2024-07-21 12:01:00.734106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.915 [2024-07-21 12:01:00.734152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:19:01.915 [2024-07-21 12:01:00.734175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:01.915 [2024-07-21 12:01:00.734671] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.915 [2024-07-21 12:01:00.734705] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:01.915 [2024-07-21 12:01:00.734815] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:01.915 [2024-07-21 12:01:00.734842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:01.915 pt2 00:19:01.915 12:01:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:02.174 [2024-07-21 12:01:01.006079] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:02.174 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:02.433 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:02.433 "name": "raid_bdev1", 00:19:02.433 "uuid": "489c2775-e151-441b-8325-17eb52728351", 00:19:02.433 "strip_size_kb": 64, 00:19:02.433 "state": "configuring", 00:19:02.433 "raid_level": "concat", 00:19:02.433 "superblock": true, 00:19:02.433 "num_base_bdevs": 3, 00:19:02.433 "num_base_bdevs_discovered": 1, 00:19:02.433 "num_base_bdevs_operational": 3, 00:19:02.433 "base_bdevs_list": [ 00:19:02.433 { 00:19:02.433 "name": "pt1", 00:19:02.433 "uuid": "b96e080e-8217-5618-8a51-d0fa6cedc963", 00:19:02.433 "is_configured": true, 00:19:02.433 "data_offset": 2048, 00:19:02.433 "data_size": 63488 00:19:02.433 }, 00:19:02.433 { 00:19:02.433 "name": null, 00:19:02.433 "uuid": "8c027b83-aa5c-5bfb-aef1-7d6db159580d", 00:19:02.433 "is_configured": false, 00:19:02.433 "data_offset": 2048, 00:19:02.433 "data_size": 63488 00:19:02.433 }, 00:19:02.433 { 00:19:02.433 "name": null, 00:19:02.433 "uuid": "d7065454-ae57-5d51-ad99-629eb697d198", 00:19:02.433 "is_configured": false, 00:19:02.433 "data_offset": 2048, 00:19:02.433 "data_size": 63488 00:19:02.433 } 00:19:02.433 ] 00:19:02.433 }' 00:19:02.433 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:02.433 12:01:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:03.370 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:19:03.370 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:03.370 12:01:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:03.370 [2024-07-21 12:01:02.190280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:03.370 [2024-07-21 12:01:02.190425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.370 [2024-07-21 12:01:02.190467] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:03.370 [2024-07-21 12:01:02.190496] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.370 [2024-07-21 12:01:02.191022] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.370 [2024-07-21 12:01:02.191073] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:03.370 [2024-07-21 12:01:02.191183] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:03.370 [2024-07-21 12:01:02.191211] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt2 is claimed 00:19:03.370 pt2 00:19:03.370 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:03.370 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:03.370 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:03.628 [2024-07-21 12:01:02.478387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:03.628 [2024-07-21 12:01:02.478532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.628 [2024-07-21 12:01:02.478572] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:03.628 [2024-07-21 12:01:02.478630] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.628 [2024-07-21 12:01:02.479135] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.628 [2024-07-21 12:01:02.479186] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:03.628 [2024-07-21 12:01:02.479300] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:19:03.628 [2024-07-21 12:01:02.479329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:03.628 [2024-07-21 12:01:02.479475] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:19:03.628 [2024-07-21 12:01:02.479500] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:03.628 [2024-07-21 12:01:02.479592] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:03.628 [2024-07-21 12:01:02.479928] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:19:03.628 [2024-07-21 12:01:02.479953] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:19:03.628 [2024-07-21 12:01:02.480087] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.628 pt3 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.886 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:04.144 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:04.144 "name": "raid_bdev1", 00:19:04.144 "uuid": "489c2775-e151-441b-8325-17eb52728351", 00:19:04.144 "strip_size_kb": 64, 00:19:04.144 "state": "online", 00:19:04.144 "raid_level": "concat", 00:19:04.144 "superblock": true, 00:19:04.144 "num_base_bdevs": 3, 00:19:04.144 "num_base_bdevs_discovered": 3, 00:19:04.144 "num_base_bdevs_operational": 3, 00:19:04.144 "base_bdevs_list": [ 00:19:04.144 { 00:19:04.145 "name": "pt1", 00:19:04.145 "uuid": "b96e080e-8217-5618-8a51-d0fa6cedc963", 00:19:04.145 "is_configured": true, 00:19:04.145 "data_offset": 2048, 00:19:04.145 "data_size": 63488 00:19:04.145 }, 00:19:04.145 { 00:19:04.145 "name": "pt2", 00:19:04.145 "uuid": "8c027b83-aa5c-5bfb-aef1-7d6db159580d", 00:19:04.145 "is_configured": true, 00:19:04.145 "data_offset": 2048, 00:19:04.145 "data_size": 63488 00:19:04.145 }, 00:19:04.145 { 00:19:04.145 "name": "pt3", 00:19:04.145 "uuid": "d7065454-ae57-5d51-ad99-629eb697d198", 00:19:04.145 "is_configured": true, 00:19:04.145 "data_offset": 2048, 00:19:04.145 "data_size": 63488 00:19:04.145 } 00:19:04.145 ] 00:19:04.145 }' 00:19:04.145 12:01:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:04.145 12:01:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.710 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:19:04.710 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:19:04.710 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:04.710 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:04.710 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:04.710 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:04.710 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:04.710 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:04.968 [2024-07-21 12:01:03.690042] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:04.968 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:04.968 "name": "raid_bdev1", 00:19:04.968 "aliases": [ 00:19:04.968 "489c2775-e151-441b-8325-17eb52728351" 00:19:04.968 ], 00:19:04.968 "product_name": "Raid Volume", 00:19:04.968 "block_size": 512, 00:19:04.968 "num_blocks": 190464, 00:19:04.968 "uuid": "489c2775-e151-441b-8325-17eb52728351", 00:19:04.968 "assigned_rate_limits": { 00:19:04.968 "rw_ios_per_sec": 0, 00:19:04.968 "rw_mbytes_per_sec": 0, 00:19:04.968 "r_mbytes_per_sec": 0, 00:19:04.968 "w_mbytes_per_sec": 0 00:19:04.968 }, 00:19:04.968 "claimed": false, 00:19:04.968 "zoned": false, 00:19:04.968 "supported_io_types": { 00:19:04.968 "read": true, 00:19:04.968 "write": true, 00:19:04.968 "unmap": true, 00:19:04.968 "write_zeroes": true, 00:19:04.968 
"flush": true, 00:19:04.968 "reset": true, 00:19:04.968 "compare": false, 00:19:04.968 "compare_and_write": false, 00:19:04.968 "abort": false, 00:19:04.968 "nvme_admin": false, 00:19:04.968 "nvme_io": false 00:19:04.968 }, 00:19:04.968 "memory_domains": [ 00:19:04.968 { 00:19:04.968 "dma_device_id": "system", 00:19:04.968 "dma_device_type": 1 00:19:04.968 }, 00:19:04.968 { 00:19:04.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.968 "dma_device_type": 2 00:19:04.968 }, 00:19:04.968 { 00:19:04.968 "dma_device_id": "system", 00:19:04.968 "dma_device_type": 1 00:19:04.968 }, 00:19:04.968 { 00:19:04.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.968 "dma_device_type": 2 00:19:04.968 }, 00:19:04.968 { 00:19:04.968 "dma_device_id": "system", 00:19:04.968 "dma_device_type": 1 00:19:04.968 }, 00:19:04.968 { 00:19:04.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:04.968 "dma_device_type": 2 00:19:04.968 } 00:19:04.968 ], 00:19:04.968 "driver_specific": { 00:19:04.968 "raid": { 00:19:04.968 "uuid": "489c2775-e151-441b-8325-17eb52728351", 00:19:04.968 "strip_size_kb": 64, 00:19:04.968 "state": "online", 00:19:04.968 "raid_level": "concat", 00:19:04.968 "superblock": true, 00:19:04.968 "num_base_bdevs": 3, 00:19:04.968 "num_base_bdevs_discovered": 3, 00:19:04.968 "num_base_bdevs_operational": 3, 00:19:04.968 "base_bdevs_list": [ 00:19:04.968 { 00:19:04.968 "name": "pt1", 00:19:04.968 "uuid": "b96e080e-8217-5618-8a51-d0fa6cedc963", 00:19:04.968 "is_configured": true, 00:19:04.968 "data_offset": 2048, 00:19:04.968 "data_size": 63488 00:19:04.968 }, 00:19:04.968 { 00:19:04.968 "name": "pt2", 00:19:04.968 "uuid": "8c027b83-aa5c-5bfb-aef1-7d6db159580d", 00:19:04.968 "is_configured": true, 00:19:04.968 "data_offset": 2048, 00:19:04.969 "data_size": 63488 00:19:04.969 }, 00:19:04.969 { 00:19:04.969 "name": "pt3", 00:19:04.969 "uuid": "d7065454-ae57-5d51-ad99-629eb697d198", 00:19:04.969 "is_configured": true, 00:19:04.969 "data_offset": 2048, 00:19:04.969 "data_size": 63488 00:19:04.969 } 00:19:04.969 ] 00:19:04.969 } 00:19:04.969 } 00:19:04.969 }' 00:19:04.969 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:04.969 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:19:04.969 pt2 00:19:04.969 pt3' 00:19:04.969 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:04.969 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:19:04.969 12:01:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:05.227 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.227 "name": "pt1", 00:19:05.227 "aliases": [ 00:19:05.227 "b96e080e-8217-5618-8a51-d0fa6cedc963" 00:19:05.227 ], 00:19:05.227 "product_name": "passthru", 00:19:05.227 "block_size": 512, 00:19:05.227 "num_blocks": 65536, 00:19:05.227 "uuid": "b96e080e-8217-5618-8a51-d0fa6cedc963", 00:19:05.227 "assigned_rate_limits": { 00:19:05.227 "rw_ios_per_sec": 0, 00:19:05.227 "rw_mbytes_per_sec": 0, 00:19:05.227 "r_mbytes_per_sec": 0, 00:19:05.227 "w_mbytes_per_sec": 0 00:19:05.227 }, 00:19:05.227 "claimed": true, 00:19:05.227 "claim_type": "exclusive_write", 00:19:05.227 "zoned": false, 00:19:05.227 "supported_io_types": { 00:19:05.227 "read": true, 00:19:05.227 "write": 
true, 00:19:05.227 "unmap": true, 00:19:05.227 "write_zeroes": true, 00:19:05.227 "flush": true, 00:19:05.227 "reset": true, 00:19:05.227 "compare": false, 00:19:05.227 "compare_and_write": false, 00:19:05.227 "abort": true, 00:19:05.227 "nvme_admin": false, 00:19:05.227 "nvme_io": false 00:19:05.227 }, 00:19:05.227 "memory_domains": [ 00:19:05.227 { 00:19:05.227 "dma_device_id": "system", 00:19:05.227 "dma_device_type": 1 00:19:05.227 }, 00:19:05.227 { 00:19:05.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.227 "dma_device_type": 2 00:19:05.227 } 00:19:05.227 ], 00:19:05.227 "driver_specific": { 00:19:05.227 "passthru": { 00:19:05.227 "name": "pt1", 00:19:05.227 "base_bdev_name": "malloc1" 00:19:05.227 } 00:19:05.227 } 00:19:05.227 }' 00:19:05.227 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.227 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.485 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.485 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.485 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.485 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.485 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.485 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:05.485 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:05.485 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.742 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:05.742 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:05.742 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:05.742 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:19:05.742 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:05.999 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:05.999 "name": "pt2", 00:19:05.999 "aliases": [ 00:19:05.999 "8c027b83-aa5c-5bfb-aef1-7d6db159580d" 00:19:05.999 ], 00:19:05.999 "product_name": "passthru", 00:19:05.999 "block_size": 512, 00:19:05.999 "num_blocks": 65536, 00:19:05.999 "uuid": "8c027b83-aa5c-5bfb-aef1-7d6db159580d", 00:19:05.999 "assigned_rate_limits": { 00:19:05.999 "rw_ios_per_sec": 0, 00:19:05.999 "rw_mbytes_per_sec": 0, 00:19:05.999 "r_mbytes_per_sec": 0, 00:19:05.999 "w_mbytes_per_sec": 0 00:19:05.999 }, 00:19:05.999 "claimed": true, 00:19:05.999 "claim_type": "exclusive_write", 00:19:05.999 "zoned": false, 00:19:05.999 "supported_io_types": { 00:19:05.999 "read": true, 00:19:05.999 "write": true, 00:19:05.999 "unmap": true, 00:19:05.999 "write_zeroes": true, 00:19:05.999 "flush": true, 00:19:05.999 "reset": true, 00:19:05.999 "compare": false, 00:19:05.999 "compare_and_write": false, 00:19:05.999 "abort": true, 00:19:05.999 "nvme_admin": false, 00:19:05.999 "nvme_io": false 00:19:05.999 }, 00:19:05.999 "memory_domains": [ 00:19:05.999 { 00:19:05.999 "dma_device_id": "system", 00:19:05.999 "dma_device_type": 1 00:19:05.999 }, 00:19:05.999 
{ 00:19:05.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:05.999 "dma_device_type": 2 00:19:05.999 } 00:19:05.999 ], 00:19:05.999 "driver_specific": { 00:19:05.999 "passthru": { 00:19:05.999 "name": "pt2", 00:19:05.999 "base_bdev_name": "malloc2" 00:19:05.999 } 00:19:05.999 } 00:19:05.999 }' 00:19:05.999 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.999 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:05.999 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:05.999 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.999 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:05.999 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:05.999 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.257 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.257 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:06.257 12:01:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.257 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:06.257 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:06.257 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:06.257 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:19:06.257 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:06.515 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:06.515 "name": "pt3", 00:19:06.515 "aliases": [ 00:19:06.515 "d7065454-ae57-5d51-ad99-629eb697d198" 00:19:06.515 ], 00:19:06.515 "product_name": "passthru", 00:19:06.515 "block_size": 512, 00:19:06.515 "num_blocks": 65536, 00:19:06.515 "uuid": "d7065454-ae57-5d51-ad99-629eb697d198", 00:19:06.515 "assigned_rate_limits": { 00:19:06.515 "rw_ios_per_sec": 0, 00:19:06.515 "rw_mbytes_per_sec": 0, 00:19:06.515 "r_mbytes_per_sec": 0, 00:19:06.515 "w_mbytes_per_sec": 0 00:19:06.515 }, 00:19:06.515 "claimed": true, 00:19:06.515 "claim_type": "exclusive_write", 00:19:06.515 "zoned": false, 00:19:06.515 "supported_io_types": { 00:19:06.515 "read": true, 00:19:06.515 "write": true, 00:19:06.515 "unmap": true, 00:19:06.515 "write_zeroes": true, 00:19:06.515 "flush": true, 00:19:06.515 "reset": true, 00:19:06.515 "compare": false, 00:19:06.515 "compare_and_write": false, 00:19:06.515 "abort": true, 00:19:06.515 "nvme_admin": false, 00:19:06.515 "nvme_io": false 00:19:06.515 }, 00:19:06.515 "memory_domains": [ 00:19:06.515 { 00:19:06.515 "dma_device_id": "system", 00:19:06.515 "dma_device_type": 1 00:19:06.515 }, 00:19:06.515 { 00:19:06.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:06.515 "dma_device_type": 2 00:19:06.515 } 00:19:06.515 ], 00:19:06.515 "driver_specific": { 00:19:06.515 "passthru": { 00:19:06.515 "name": "pt3", 00:19:06.515 "base_bdev_name": "malloc3" 00:19:06.515 } 00:19:06.515 } 00:19:06.515 }' 00:19:06.515 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.515 12:01:05 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:06.773 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:06.773 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.773 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:06.773 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:06.773 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.773 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:06.773 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:06.773 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.032 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:07.032 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:07.032 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:07.032 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:19:07.291 [2024-07-21 12:01:05.939103] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 489c2775-e151-441b-8325-17eb52728351 '!=' 489c2775-e151-441b-8325-17eb52728351 ']' 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 140640 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 140640 ']' 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 140640 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 140640 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 140640' 00:19:07.291 killing process with pid 140640 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 140640 00:19:07.291 [2024-07-21 12:01:05.983814] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:07.291 [2024-07-21 12:01:05.983928] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:07.291 [2024-07-21 12:01:05.984002] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:07.291 [2024-07-21 12:01:05.984021] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000009980 name raid_bdev1, state offline 00:19:07.291 12:01:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 140640 00:19:07.291 [2024-07-21 12:01:06.021225] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:07.550 ************************************ 00:19:07.550 END TEST raid_superblock_test 00:19:07.550 ************************************ 00:19:07.550 12:01:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:19:07.550 00:19:07.550 real 0m15.028s 00:19:07.550 user 0m27.985s 00:19:07.550 sys 0m1.886s 00:19:07.550 12:01:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:07.550 12:01:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.550 12:01:06 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:19:07.550 12:01:06 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:07.550 12:01:06 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:07.550 12:01:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:07.550 ************************************ 00:19:07.550 START TEST raid_read_error_test 00:19:07.550 ************************************ 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 3 read 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.ATQu3QeY8P 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=141123 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 141123 /var/tmp/spdk-raid.sock 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 141123 ']' 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:07.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:07.550 12:01:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:07.550 [2024-07-21 12:01:06.386422] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:19:07.550 [2024-07-21 12:01:06.386693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141123 ] 00:19:07.808 [2024-07-21 12:01:06.556901] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.808 [2024-07-21 12:01:06.648561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.066 [2024-07-21 12:01:06.707060] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:08.632 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:08.632 12:01:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:19:08.632 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:08.632 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:08.889 BaseBdev1_malloc 00:19:08.889 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:09.147 true 00:19:09.147 12:01:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:09.405 [2024-07-21 12:01:08.087029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:09.405 [2024-07-21 12:01:08.087188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:09.405 [2024-07-21 12:01:08.087242] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:09.405 [2024-07-21 12:01:08.087297] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:09.405 [2024-07-21 12:01:08.090043] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:09.405 [2024-07-21 12:01:08.090124] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:09.405 BaseBdev1 00:19:09.405 12:01:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:09.405 12:01:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:09.663 BaseBdev2_malloc 00:19:09.663 12:01:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:09.923 true 00:19:09.923 12:01:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:10.182 [2024-07-21 12:01:08.861747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:10.182 [2024-07-21 12:01:08.861887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.182 [2024-07-21 12:01:08.861956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:10.182 [2024-07-21 12:01:08.861997] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.182 [2024-07-21 12:01:08.864704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.182 [2024-07-21 12:01:08.864783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:10.182 BaseBdev2 00:19:10.182 12:01:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:10.182 12:01:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:10.441 BaseBdev3_malloc 00:19:10.441 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:10.698 true 00:19:10.698 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:10.698 [2024-07-21 12:01:09.561115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:10.698 [2024-07-21 12:01:09.561250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:10.698 [2024-07-21 12:01:09.561301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:10.698 [2024-07-21 12:01:09.561353] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:10.698 [2024-07-21 12:01:09.563942] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:10.698 [2024-07-21 12:01:09.564016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:10.957 BaseBdev3 00:19:10.957 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:11.215 [2024-07-21 12:01:09.825325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:11.215 [2024-07-21 12:01:09.827700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:11.215 [2024-07-21 12:01:09.827805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:11.215 [2024-07-21 12:01:09.828094] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:19:11.215 [2024-07-21 12:01:09.828122] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:11.215 [2024-07-21 12:01:09.828303] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:11.215 [2024-07-21 12:01:09.828776] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:19:11.215 [2024-07-21 12:01:09.828802] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:19:11.215 [2024-07-21 12:01:09.828980] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:11.215 12:01:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.473 12:01:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:11.473 "name": "raid_bdev1", 00:19:11.473 "uuid": "a07aa7d6-ac61-41d5-a379-30bd390b2d5d", 00:19:11.473 "strip_size_kb": 64, 00:19:11.473 "state": "online", 00:19:11.473 "raid_level": "concat", 00:19:11.473 "superblock": true, 00:19:11.473 "num_base_bdevs": 3, 00:19:11.473 "num_base_bdevs_discovered": 3, 00:19:11.473 "num_base_bdevs_operational": 3, 00:19:11.473 "base_bdevs_list": [ 00:19:11.473 { 00:19:11.473 "name": "BaseBdev1", 00:19:11.473 "uuid": "d973d867-d352-5a12-bfd1-0dcfd9eb32b7", 00:19:11.473 "is_configured": true, 00:19:11.473 "data_offset": 2048, 00:19:11.473 "data_size": 63488 00:19:11.473 }, 00:19:11.473 { 00:19:11.473 "name": "BaseBdev2", 00:19:11.473 "uuid": "429a8749-1c9c-52a7-a1a8-f25228acc686", 00:19:11.473 "is_configured": true, 00:19:11.473 "data_offset": 2048, 00:19:11.473 "data_size": 63488 00:19:11.473 }, 00:19:11.473 { 00:19:11.473 "name": "BaseBdev3", 00:19:11.473 "uuid": "d619c776-116e-5909-ab70-67d10e39592f", 00:19:11.473 "is_configured": true, 00:19:11.473 "data_offset": 2048, 00:19:11.473 "data_size": 63488 00:19:11.473 } 00:19:11.473 ] 00:19:11.473 }' 00:19:11.473 12:01:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:11.473 12:01:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:12.040 12:01:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:12.040 12:01:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:12.040 [2024-07-21 12:01:10.770103] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:12.975 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.233 12:01:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.508 12:01:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:13.508 "name": "raid_bdev1", 00:19:13.508 "uuid": "a07aa7d6-ac61-41d5-a379-30bd390b2d5d", 00:19:13.508 "strip_size_kb": 64, 00:19:13.508 "state": "online", 00:19:13.508 "raid_level": "concat", 00:19:13.508 "superblock": true, 00:19:13.508 "num_base_bdevs": 3, 00:19:13.508 "num_base_bdevs_discovered": 3, 00:19:13.508 "num_base_bdevs_operational": 3, 00:19:13.508 "base_bdevs_list": [ 00:19:13.508 { 00:19:13.508 "name": "BaseBdev1", 00:19:13.508 "uuid": "d973d867-d352-5a12-bfd1-0dcfd9eb32b7", 00:19:13.508 "is_configured": true, 00:19:13.508 "data_offset": 2048, 00:19:13.508 "data_size": 63488 00:19:13.508 }, 00:19:13.508 { 00:19:13.508 "name": "BaseBdev2", 00:19:13.508 "uuid": "429a8749-1c9c-52a7-a1a8-f25228acc686", 00:19:13.508 "is_configured": true, 00:19:13.508 "data_offset": 2048, 00:19:13.508 "data_size": 63488 00:19:13.508 }, 00:19:13.508 { 00:19:13.508 "name": "BaseBdev3", 00:19:13.508 "uuid": "d619c776-116e-5909-ab70-67d10e39592f", 00:19:13.508 "is_configured": true, 00:19:13.508 "data_offset": 2048, 00:19:13.508 "data_size": 63488 00:19:13.508 } 00:19:13.508 ] 00:19:13.508 }' 00:19:13.508 12:01:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:13.508 12:01:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.086 12:01:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:14.344 [2024-07-21 12:01:13.128784] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:14.344 [2024-07-21 12:01:13.128834] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.344 [2024-07-21 12:01:13.131937] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.344 [2024-07-21 12:01:13.132027] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.344 [2024-07-21 12:01:13.132084] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:14.344 [2024-07-21 12:01:13.132098] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:19:14.344 0 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 141123 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 141123 ']' 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 141123 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 141123 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 141123' 00:19:14.344 killing process with pid 141123 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 141123 00:19:14.344 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 141123 00:19:14.344 [2024-07-21 12:01:13.174273] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:14.344 [2024-07-21 12:01:13.199701] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.ATQu3QeY8P 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.42 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.42 != \0\.\0\0 ]] 00:19:14.601 00:19:14.601 real 0m7.143s 00:19:14.601 user 0m11.667s 00:19:14.601 sys 0m0.835s 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:14.601 12:01:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.601 ************************************ 00:19:14.601 END TEST raid_read_error_test 00:19:14.601 ************************************ 00:19:14.859 12:01:13 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:19:14.859 12:01:13 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:14.859 12:01:13 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:14.859 12:01:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:14.859 ************************************ 00:19:14.859 START TEST raid_write_error_test 00:19:14.859 ************************************ 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 3 write 
00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.N9lsgy2mqj 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=141316 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 141316 /var/tmp/spdk-raid.sock 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 141316 ']' 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:14.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:14.859 12:01:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:14.859 [2024-07-21 12:01:13.587729] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:14.859 [2024-07-21 12:01:13.588710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141316 ] 00:19:15.116 [2024-07-21 12:01:13.750768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.116 [2024-07-21 12:01:13.837379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.116 [2024-07-21 12:01:13.891527] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.076 12:01:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:16.076 12:01:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:19:16.076 12:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:16.076 12:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:16.076 BaseBdev1_malloc 00:19:16.076 12:01:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:19:16.335 true 00:19:16.335 12:01:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:19:16.594 [2024-07-21 12:01:15.435881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:19:16.594 [2024-07-21 12:01:15.436006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:16.594 [2024-07-21 12:01:15.436110] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:19:16.594 [2024-07-21 12:01:15.436168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:16.594 [2024-07-21 12:01:15.438932] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:16.594 [2024-07-21 12:01:15.439039] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:16.594 BaseBdev1 00:19:16.594 12:01:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:16.594 12:01:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:17.161 BaseBdev2_malloc 00:19:17.161 12:01:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:19:17.161 true 00:19:17.421 
12:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:19:17.421 [2024-07-21 12:01:16.239309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:19:17.421 [2024-07-21 12:01:16.239451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.421 [2024-07-21 12:01:16.239519] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:19:17.421 [2024-07-21 12:01:16.239562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.421 [2024-07-21 12:01:16.242225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.421 [2024-07-21 12:01:16.242303] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:17.421 BaseBdev2 00:19:17.421 12:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:19:17.421 12:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:17.678 BaseBdev3_malloc 00:19:17.678 12:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:19:17.936 true 00:19:17.936 12:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:19:18.200 [2024-07-21 12:01:16.947830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:19:18.200 [2024-07-21 12:01:16.948220] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.200 [2024-07-21 12:01:16.948394] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:18.200 [2024-07-21 12:01:16.948553] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.200 [2024-07-21 12:01:16.951308] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.200 [2024-07-21 12:01:16.951499] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:18.200 BaseBdev3 00:19:18.200 12:01:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:19:18.460 [2024-07-21 12:01:17.216027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:18.460 [2024-07-21 12:01:17.218852] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:18.460 [2024-07-21 12:01:17.219088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:18.460 [2024-07-21 12:01:17.219492] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:19:18.460 [2024-07-21 12:01:17.219625] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:18.460 [2024-07-21 12:01:17.219847] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:19:18.460 [2024-07-21 12:01:17.220464] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:19:18.460 [2024-07-21 12:01:17.220597] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:19:18.460 [2024-07-21 12:01:17.220932] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:18.460 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.719 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:18.719 "name": "raid_bdev1", 00:19:18.719 "uuid": "5799c6fa-aacb-46b5-afdd-3ee3e0b6eb1d", 00:19:18.719 "strip_size_kb": 64, 00:19:18.719 "state": "online", 00:19:18.719 "raid_level": "concat", 00:19:18.719 "superblock": true, 00:19:18.719 "num_base_bdevs": 3, 00:19:18.719 "num_base_bdevs_discovered": 3, 00:19:18.719 "num_base_bdevs_operational": 3, 00:19:18.719 "base_bdevs_list": [ 00:19:18.719 { 00:19:18.720 "name": "BaseBdev1", 00:19:18.720 "uuid": "3caa5947-757b-5212-81be-a4311c05ad7d", 00:19:18.720 "is_configured": true, 00:19:18.720 "data_offset": 2048, 00:19:18.720 "data_size": 63488 00:19:18.720 }, 00:19:18.720 { 00:19:18.720 "name": "BaseBdev2", 00:19:18.720 "uuid": "785e5a50-9e60-5837-b05f-10ee629ef384", 00:19:18.720 "is_configured": true, 00:19:18.720 "data_offset": 2048, 00:19:18.720 "data_size": 63488 00:19:18.720 }, 00:19:18.720 { 00:19:18.720 "name": "BaseBdev3", 00:19:18.720 "uuid": "f18718b0-bbe4-5cfb-aaa6-15f352c8daa9", 00:19:18.720 "is_configured": true, 00:19:18.720 "data_offset": 2048, 00:19:18.720 "data_size": 63488 00:19:18.720 } 00:19:18.720 ] 00:19:18.720 }' 00:19:18.720 12:01:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:18.720 12:01:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:19.288 12:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:19:19.288 12:01:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:19.547 [2024-07-21 12:01:18.165536] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005ad0 00:19:20.483 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:20.742 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.001 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:21.001 "name": "raid_bdev1", 00:19:21.001 "uuid": "5799c6fa-aacb-46b5-afdd-3ee3e0b6eb1d", 00:19:21.001 "strip_size_kb": 64, 00:19:21.001 "state": "online", 00:19:21.001 "raid_level": "concat", 00:19:21.001 "superblock": true, 00:19:21.001 "num_base_bdevs": 3, 00:19:21.001 "num_base_bdevs_discovered": 3, 00:19:21.001 "num_base_bdevs_operational": 3, 00:19:21.001 "base_bdevs_list": [ 00:19:21.001 { 00:19:21.001 "name": "BaseBdev1", 00:19:21.001 "uuid": "3caa5947-757b-5212-81be-a4311c05ad7d", 00:19:21.001 "is_configured": true, 00:19:21.001 "data_offset": 2048, 00:19:21.001 "data_size": 63488 00:19:21.001 }, 00:19:21.001 { 00:19:21.001 "name": "BaseBdev2", 00:19:21.001 "uuid": "785e5a50-9e60-5837-b05f-10ee629ef384", 00:19:21.001 "is_configured": true, 00:19:21.001 "data_offset": 2048, 00:19:21.001 "data_size": 63488 00:19:21.001 }, 00:19:21.001 { 00:19:21.001 "name": "BaseBdev3", 00:19:21.001 "uuid": "f18718b0-bbe4-5cfb-aaa6-15f352c8daa9", 00:19:21.001 "is_configured": true, 00:19:21.001 "data_offset": 2048, 00:19:21.001 "data_size": 63488 00:19:21.001 } 00:19:21.001 ] 00:19:21.001 }' 00:19:21.001 12:01:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:21.001 12:01:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:21.569 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:21.828 [2024-07-21 12:01:20.492675] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:21.828 [2024-07-21 12:01:20.493828] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:21.828 [2024-07-21 12:01:20.496772] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:21.828 [2024-07-21 12:01:20.497037] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.828 [2024-07-21 12:01:20.497240] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:21.828 [2024-07-21 12:01:20.497389] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:19:21.828 0 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 141316 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 141316 ']' 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 141316 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 141316 00:19:21.828 killing process with pid 141316 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 141316' 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 141316 00:19:21.828 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 141316 00:19:21.828 [2024-07-21 12:01:20.536388] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:21.828 [2024-07-21 12:01:20.561652] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:22.086 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.N9lsgy2mqj 00:19:22.086 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:19:22.086 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:19:22.086 ************************************ 00:19:22.086 END TEST raid_write_error_test 00:19:22.086 ************************************ 00:19:22.086 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:19:22.086 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:19:22.086 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:22.086 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:19:22.086 12:01:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:19:22.086 00:19:22.086 real 0m7.304s 00:19:22.086 user 0m12.026s 00:19:22.086 sys 0m0.812s 00:19:22.086 12:01:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:22.086 12:01:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.086 12:01:20 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:19:22.086 12:01:20 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:19:22.086 12:01:20 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:22.086 12:01:20 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:22.086 12:01:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.086 ************************************ 00:19:22.086 START TEST raid_state_function_test 00:19:22.086 ************************************ 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 false 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 
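For reference, a minimal sketch of the sequence the raid_write_error_test run above records, using only RPC calls visible in this log. The rpc.py path, socket, bdev names and pid 141316 are the ones from this particular run; killprocess is a helper from autotest_common.sh, not an SPDK RPC.
# inject write failures into the error bdev wrapping BaseBdev1's malloc device
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_error_inject_error EE_BaseBdev1_malloc write failure
# dump the concat volume and confirm it is still reported online with all 3 base bdevs
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1")'
# tear the volume down and stop the target process started for this test
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
killprocess 141316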
00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=141509 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:22.086 Process raid pid: 141509 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 141509' 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 141509 /var/tmp/spdk-raid.sock 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 141509 ']' 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:22.086 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:22.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:22.087 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:22.087 12:01:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:22.087 [2024-07-21 12:01:20.937730] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:22.087 [2024-07-21 12:01:20.938179] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.345 [2024-07-21 12:01:21.091313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.345 [2024-07-21 12:01:21.173902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.603 [2024-07-21 12:01:21.228772] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.169 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:23.169 12:01:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:19:23.169 12:01:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:23.428 [2024-07-21 12:01:22.187810] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:23.428 [2024-07-21 12:01:22.188268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:23.428 [2024-07-21 12:01:22.188399] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:23.428 [2024-07-21 12:01:22.188473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:23.428 [2024-07-21 12:01:22.188678] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:23.428 [2024-07-21 12:01:22.188778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:23.428 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 
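A rough sketch of how the state-function test brings up its RPC target, as the lines above record it. The binary, socket and flags are taken verbatim from the log; the backgrounding and pid capture are assumptions, since the launch wrapper itself is not shown in this excerpt, and waitforlisten is a test-framework helper.
# start the bare bdev service with the bdev_raid debug log flag enabled
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# block until the UNIX-domain RPC socket is accepting connections
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock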
00:19:23.428 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:23.428 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:23.428 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:23.429 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:23.429 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:23.429 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:23.429 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:23.429 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:23.429 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:23.429 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:23.429 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:23.686 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:23.686 "name": "Existed_Raid", 00:19:23.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.686 "strip_size_kb": 0, 00:19:23.686 "state": "configuring", 00:19:23.686 "raid_level": "raid1", 00:19:23.686 "superblock": false, 00:19:23.686 "num_base_bdevs": 3, 00:19:23.686 "num_base_bdevs_discovered": 0, 00:19:23.686 "num_base_bdevs_operational": 3, 00:19:23.686 "base_bdevs_list": [ 00:19:23.686 { 00:19:23.686 "name": "BaseBdev1", 00:19:23.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.686 "is_configured": false, 00:19:23.686 "data_offset": 0, 00:19:23.686 "data_size": 0 00:19:23.687 }, 00:19:23.687 { 00:19:23.687 "name": "BaseBdev2", 00:19:23.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.687 "is_configured": false, 00:19:23.687 "data_offset": 0, 00:19:23.687 "data_size": 0 00:19:23.687 }, 00:19:23.687 { 00:19:23.687 "name": "BaseBdev3", 00:19:23.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.687 "is_configured": false, 00:19:23.687 "data_offset": 0, 00:19:23.687 "data_size": 0 00:19:23.687 } 00:19:23.687 ] 00:19:23.687 }' 00:19:23.687 12:01:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:23.687 12:01:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:24.619 12:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:24.619 [2024-07-21 12:01:23.367902] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:24.619 [2024-07-21 12:01:23.368303] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:24.619 12:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:24.877 [2024-07-21 12:01:23.583940] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
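The preceding entries show bdev_raid_create being accepted even though none of the named base bdevs exist yet. A condensed sketch of that check, assuming the same rpc.py path and socket as above; the dumped JSON should report state "configuring" with num_base_bdevs_discovered 0.
# create a 3-disk raid1 volume whose base bdevs have not been created yet
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
# dump the raid bdev and inspect its state
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'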
00:19:24.878 [2024-07-21 12:01:23.584255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:24.878 [2024-07-21 12:01:23.584368] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:24.878 [2024-07-21 12:01:23.584497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:24.878 [2024-07-21 12:01:23.584642] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:24.878 [2024-07-21 12:01:23.584714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:24.878 12:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:25.135 [2024-07-21 12:01:23.863109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:25.136 BaseBdev1 00:19:25.136 12:01:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:25.136 12:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:25.136 12:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:25.136 12:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:25.136 12:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:25.136 12:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:25.136 12:01:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:25.394 12:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:25.652 [ 00:19:25.652 { 00:19:25.652 "name": "BaseBdev1", 00:19:25.652 "aliases": [ 00:19:25.652 "b1a69507-3b1d-484b-9a26-be6dc0b9dd61" 00:19:25.652 ], 00:19:25.652 "product_name": "Malloc disk", 00:19:25.652 "block_size": 512, 00:19:25.652 "num_blocks": 65536, 00:19:25.652 "uuid": "b1a69507-3b1d-484b-9a26-be6dc0b9dd61", 00:19:25.652 "assigned_rate_limits": { 00:19:25.652 "rw_ios_per_sec": 0, 00:19:25.652 "rw_mbytes_per_sec": 0, 00:19:25.652 "r_mbytes_per_sec": 0, 00:19:25.652 "w_mbytes_per_sec": 0 00:19:25.652 }, 00:19:25.652 "claimed": true, 00:19:25.652 "claim_type": "exclusive_write", 00:19:25.652 "zoned": false, 00:19:25.652 "supported_io_types": { 00:19:25.652 "read": true, 00:19:25.652 "write": true, 00:19:25.652 "unmap": true, 00:19:25.652 "write_zeroes": true, 00:19:25.652 "flush": true, 00:19:25.652 "reset": true, 00:19:25.652 "compare": false, 00:19:25.652 "compare_and_write": false, 00:19:25.652 "abort": true, 00:19:25.652 "nvme_admin": false, 00:19:25.652 "nvme_io": false 00:19:25.652 }, 00:19:25.652 "memory_domains": [ 00:19:25.652 { 00:19:25.652 "dma_device_id": "system", 00:19:25.652 "dma_device_type": 1 00:19:25.652 }, 00:19:25.652 { 00:19:25.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:25.652 "dma_device_type": 2 00:19:25.652 } 00:19:25.652 ], 00:19:25.652 "driver_specific": {} 00:19:25.652 } 00:19:25.652 ] 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:25.652 12:01:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:25.652 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:25.911 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:25.911 "name": "Existed_Raid", 00:19:25.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.911 "strip_size_kb": 0, 00:19:25.911 "state": "configuring", 00:19:25.911 "raid_level": "raid1", 00:19:25.911 "superblock": false, 00:19:25.911 "num_base_bdevs": 3, 00:19:25.911 "num_base_bdevs_discovered": 1, 00:19:25.911 "num_base_bdevs_operational": 3, 00:19:25.911 "base_bdevs_list": [ 00:19:25.911 { 00:19:25.911 "name": "BaseBdev1", 00:19:25.911 "uuid": "b1a69507-3b1d-484b-9a26-be6dc0b9dd61", 00:19:25.911 "is_configured": true, 00:19:25.911 "data_offset": 0, 00:19:25.911 "data_size": 65536 00:19:25.911 }, 00:19:25.911 { 00:19:25.911 "name": "BaseBdev2", 00:19:25.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.911 "is_configured": false, 00:19:25.911 "data_offset": 0, 00:19:25.912 "data_size": 0 00:19:25.912 }, 00:19:25.912 { 00:19:25.912 "name": "BaseBdev3", 00:19:25.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.912 "is_configured": false, 00:19:25.912 "data_offset": 0, 00:19:25.912 "data_size": 0 00:19:25.912 } 00:19:25.912 ] 00:19:25.912 }' 00:19:25.912 12:01:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:25.912 12:01:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:26.846 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:26.846 [2024-07-21 12:01:25.591593] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:26.846 [2024-07-21 12:01:25.591964] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:26.846 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 
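A sketch of the step recorded just above: backing one raid slot with a real malloc bdev and waiting for the raid module to claim it. The sizes match the 65536 x 512-byte blocks shown in the bdev dump, and the two wait calls are what the waitforbdev helper expands to in this log.
# create a 32 MiB malloc disk with 512-byte blocks to act as BaseBdev1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
# let the examine callbacks run so the raid module can claim the new bdev
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
# poll for the bdev with a 2000 ms timeout
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000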
00:19:27.103 [2024-07-21 12:01:25.871719] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:27.103 [2024-07-21 12:01:25.874150] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:27.103 [2024-07-21 12:01:25.874369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:27.103 [2024-07-21 12:01:25.874533] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:27.103 [2024-07-21 12:01:25.874749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:27.103 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:27.103 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:27.103 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:27.103 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:27.103 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:27.103 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:27.103 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:27.104 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:27.104 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:27.104 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:27.104 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:27.104 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:27.104 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:27.104 12:01:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:27.361 12:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:27.361 "name": "Existed_Raid", 00:19:27.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.361 "strip_size_kb": 0, 00:19:27.361 "state": "configuring", 00:19:27.361 "raid_level": "raid1", 00:19:27.361 "superblock": false, 00:19:27.361 "num_base_bdevs": 3, 00:19:27.361 "num_base_bdevs_discovered": 1, 00:19:27.361 "num_base_bdevs_operational": 3, 00:19:27.361 "base_bdevs_list": [ 00:19:27.361 { 00:19:27.361 "name": "BaseBdev1", 00:19:27.361 "uuid": "b1a69507-3b1d-484b-9a26-be6dc0b9dd61", 00:19:27.361 "is_configured": true, 00:19:27.361 "data_offset": 0, 00:19:27.361 "data_size": 65536 00:19:27.361 }, 00:19:27.361 { 00:19:27.361 "name": "BaseBdev2", 00:19:27.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.361 "is_configured": false, 00:19:27.361 "data_offset": 0, 00:19:27.361 "data_size": 0 00:19:27.361 }, 00:19:27.361 { 00:19:27.361 "name": "BaseBdev3", 00:19:27.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:27.361 "is_configured": false, 00:19:27.361 "data_offset": 0, 00:19:27.362 "data_size": 0 00:19:27.362 } 00:19:27.362 ] 00:19:27.362 }' 00:19:27.362 12:01:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:27.362 12:01:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:28.297 12:01:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:28.297 [2024-07-21 12:01:27.058760] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:28.297 BaseBdev2 00:19:28.297 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:28.297 12:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:28.297 12:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:28.297 12:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:28.297 12:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:28.297 12:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:28.297 12:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:28.556 12:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:28.815 [ 00:19:28.815 { 00:19:28.815 "name": "BaseBdev2", 00:19:28.815 "aliases": [ 00:19:28.815 "90241fca-2729-47f1-98e3-d583ba7a98e8" 00:19:28.815 ], 00:19:28.815 "product_name": "Malloc disk", 00:19:28.815 "block_size": 512, 00:19:28.815 "num_blocks": 65536, 00:19:28.815 "uuid": "90241fca-2729-47f1-98e3-d583ba7a98e8", 00:19:28.815 "assigned_rate_limits": { 00:19:28.815 "rw_ios_per_sec": 0, 00:19:28.815 "rw_mbytes_per_sec": 0, 00:19:28.815 "r_mbytes_per_sec": 0, 00:19:28.815 "w_mbytes_per_sec": 0 00:19:28.815 }, 00:19:28.815 "claimed": true, 00:19:28.815 "claim_type": "exclusive_write", 00:19:28.815 "zoned": false, 00:19:28.815 "supported_io_types": { 00:19:28.815 "read": true, 00:19:28.815 "write": true, 00:19:28.815 "unmap": true, 00:19:28.815 "write_zeroes": true, 00:19:28.815 "flush": true, 00:19:28.815 "reset": true, 00:19:28.816 "compare": false, 00:19:28.816 "compare_and_write": false, 00:19:28.816 "abort": true, 00:19:28.816 "nvme_admin": false, 00:19:28.816 "nvme_io": false 00:19:28.816 }, 00:19:28.816 "memory_domains": [ 00:19:28.816 { 00:19:28.816 "dma_device_id": "system", 00:19:28.816 "dma_device_type": 1 00:19:28.816 }, 00:19:28.816 { 00:19:28.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:28.816 "dma_device_type": 2 00:19:28.816 } 00:19:28.816 ], 00:19:28.816 "driver_specific": {} 00:19:28.816 } 00:19:28.816 ] 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:28.816 12:01:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:28.816 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:29.075 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:29.075 "name": "Existed_Raid", 00:19:29.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.075 "strip_size_kb": 0, 00:19:29.075 "state": "configuring", 00:19:29.075 "raid_level": "raid1", 00:19:29.075 "superblock": false, 00:19:29.075 "num_base_bdevs": 3, 00:19:29.075 "num_base_bdevs_discovered": 2, 00:19:29.075 "num_base_bdevs_operational": 3, 00:19:29.075 "base_bdevs_list": [ 00:19:29.075 { 00:19:29.075 "name": "BaseBdev1", 00:19:29.075 "uuid": "b1a69507-3b1d-484b-9a26-be6dc0b9dd61", 00:19:29.075 "is_configured": true, 00:19:29.075 "data_offset": 0, 00:19:29.075 "data_size": 65536 00:19:29.075 }, 00:19:29.075 { 00:19:29.075 "name": "BaseBdev2", 00:19:29.075 "uuid": "90241fca-2729-47f1-98e3-d583ba7a98e8", 00:19:29.075 "is_configured": true, 00:19:29.075 "data_offset": 0, 00:19:29.075 "data_size": 65536 00:19:29.075 }, 00:19:29.075 { 00:19:29.075 "name": "BaseBdev3", 00:19:29.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.075 "is_configured": false, 00:19:29.075 "data_offset": 0, 00:19:29.075 "data_size": 0 00:19:29.075 } 00:19:29.075 ] 00:19:29.075 }' 00:19:29.075 12:01:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:29.075 12:01:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:29.643 12:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:29.902 [2024-07-21 12:01:28.752174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:29.902 [2024-07-21 12:01:28.752552] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:19:29.902 [2024-07-21 12:01:28.752673] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:29.902 [2024-07-21 12:01:28.752873] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:29.902 [2024-07-21 12:01:28.753486] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:19:29.902 [2024-07-21 12:01:28.753662] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x616000006f80 00:19:29.902 [2024-07-21 12:01:28.754022] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.902 BaseBdev3 00:19:30.161 12:01:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:19:30.161 12:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:19:30.161 12:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:30.161 12:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:30.161 12:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:30.161 12:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:30.161 12:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:30.161 12:01:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:30.430 [ 00:19:30.430 { 00:19:30.430 "name": "BaseBdev3", 00:19:30.430 "aliases": [ 00:19:30.430 "6470c021-fa1c-46f2-80f7-ef42a48e1d22" 00:19:30.430 ], 00:19:30.430 "product_name": "Malloc disk", 00:19:30.430 "block_size": 512, 00:19:30.430 "num_blocks": 65536, 00:19:30.430 "uuid": "6470c021-fa1c-46f2-80f7-ef42a48e1d22", 00:19:30.430 "assigned_rate_limits": { 00:19:30.430 "rw_ios_per_sec": 0, 00:19:30.430 "rw_mbytes_per_sec": 0, 00:19:30.430 "r_mbytes_per_sec": 0, 00:19:30.430 "w_mbytes_per_sec": 0 00:19:30.430 }, 00:19:30.430 "claimed": true, 00:19:30.430 "claim_type": "exclusive_write", 00:19:30.430 "zoned": false, 00:19:30.430 "supported_io_types": { 00:19:30.430 "read": true, 00:19:30.430 "write": true, 00:19:30.430 "unmap": true, 00:19:30.430 "write_zeroes": true, 00:19:30.430 "flush": true, 00:19:30.430 "reset": true, 00:19:30.430 "compare": false, 00:19:30.430 "compare_and_write": false, 00:19:30.430 "abort": true, 00:19:30.430 "nvme_admin": false, 00:19:30.430 "nvme_io": false 00:19:30.430 }, 00:19:30.430 "memory_domains": [ 00:19:30.430 { 00:19:30.430 "dma_device_id": "system", 00:19:30.430 "dma_device_type": 1 00:19:30.430 }, 00:19:30.430 { 00:19:30.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:30.430 "dma_device_type": 2 00:19:30.430 } 00:19:30.430 ], 00:19:30.430 "driver_specific": {} 00:19:30.430 } 00:19:30.430 ] 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:30.430 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:30.703 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:30.703 "name": "Existed_Raid", 00:19:30.703 "uuid": "68b78e07-d84f-41b5-a610-177457eada3c", 00:19:30.703 "strip_size_kb": 0, 00:19:30.703 "state": "online", 00:19:30.703 "raid_level": "raid1", 00:19:30.703 "superblock": false, 00:19:30.703 "num_base_bdevs": 3, 00:19:30.703 "num_base_bdevs_discovered": 3, 00:19:30.703 "num_base_bdevs_operational": 3, 00:19:30.703 "base_bdevs_list": [ 00:19:30.703 { 00:19:30.703 "name": "BaseBdev1", 00:19:30.703 "uuid": "b1a69507-3b1d-484b-9a26-be6dc0b9dd61", 00:19:30.703 "is_configured": true, 00:19:30.703 "data_offset": 0, 00:19:30.703 "data_size": 65536 00:19:30.703 }, 00:19:30.703 { 00:19:30.703 "name": "BaseBdev2", 00:19:30.703 "uuid": "90241fca-2729-47f1-98e3-d583ba7a98e8", 00:19:30.703 "is_configured": true, 00:19:30.703 "data_offset": 0, 00:19:30.703 "data_size": 65536 00:19:30.703 }, 00:19:30.703 { 00:19:30.703 "name": "BaseBdev3", 00:19:30.703 "uuid": "6470c021-fa1c-46f2-80f7-ef42a48e1d22", 00:19:30.703 "is_configured": true, 00:19:30.703 "data_offset": 0, 00:19:30.703 "data_size": 65536 00:19:30.703 } 00:19:30.703 ] 00:19:30.703 }' 00:19:30.703 12:01:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:30.703 12:01:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:31.638 [2024-07-21 12:01:30.400415] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:31.638 "name": "Existed_Raid", 00:19:31.638 "aliases": [ 00:19:31.638 "68b78e07-d84f-41b5-a610-177457eada3c" 00:19:31.638 ], 00:19:31.638 
"product_name": "Raid Volume", 00:19:31.638 "block_size": 512, 00:19:31.638 "num_blocks": 65536, 00:19:31.638 "uuid": "68b78e07-d84f-41b5-a610-177457eada3c", 00:19:31.638 "assigned_rate_limits": { 00:19:31.638 "rw_ios_per_sec": 0, 00:19:31.638 "rw_mbytes_per_sec": 0, 00:19:31.638 "r_mbytes_per_sec": 0, 00:19:31.638 "w_mbytes_per_sec": 0 00:19:31.638 }, 00:19:31.638 "claimed": false, 00:19:31.638 "zoned": false, 00:19:31.638 "supported_io_types": { 00:19:31.638 "read": true, 00:19:31.638 "write": true, 00:19:31.638 "unmap": false, 00:19:31.638 "write_zeroes": true, 00:19:31.638 "flush": false, 00:19:31.638 "reset": true, 00:19:31.638 "compare": false, 00:19:31.638 "compare_and_write": false, 00:19:31.638 "abort": false, 00:19:31.638 "nvme_admin": false, 00:19:31.638 "nvme_io": false 00:19:31.638 }, 00:19:31.638 "memory_domains": [ 00:19:31.638 { 00:19:31.638 "dma_device_id": "system", 00:19:31.638 "dma_device_type": 1 00:19:31.638 }, 00:19:31.638 { 00:19:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.638 "dma_device_type": 2 00:19:31.638 }, 00:19:31.638 { 00:19:31.638 "dma_device_id": "system", 00:19:31.638 "dma_device_type": 1 00:19:31.638 }, 00:19:31.638 { 00:19:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.638 "dma_device_type": 2 00:19:31.638 }, 00:19:31.638 { 00:19:31.638 "dma_device_id": "system", 00:19:31.638 "dma_device_type": 1 00:19:31.638 }, 00:19:31.638 { 00:19:31.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.638 "dma_device_type": 2 00:19:31.638 } 00:19:31.638 ], 00:19:31.638 "driver_specific": { 00:19:31.638 "raid": { 00:19:31.638 "uuid": "68b78e07-d84f-41b5-a610-177457eada3c", 00:19:31.638 "strip_size_kb": 0, 00:19:31.638 "state": "online", 00:19:31.638 "raid_level": "raid1", 00:19:31.638 "superblock": false, 00:19:31.638 "num_base_bdevs": 3, 00:19:31.638 "num_base_bdevs_discovered": 3, 00:19:31.638 "num_base_bdevs_operational": 3, 00:19:31.638 "base_bdevs_list": [ 00:19:31.638 { 00:19:31.638 "name": "BaseBdev1", 00:19:31.638 "uuid": "b1a69507-3b1d-484b-9a26-be6dc0b9dd61", 00:19:31.638 "is_configured": true, 00:19:31.638 "data_offset": 0, 00:19:31.638 "data_size": 65536 00:19:31.638 }, 00:19:31.638 { 00:19:31.638 "name": "BaseBdev2", 00:19:31.638 "uuid": "90241fca-2729-47f1-98e3-d583ba7a98e8", 00:19:31.638 "is_configured": true, 00:19:31.638 "data_offset": 0, 00:19:31.638 "data_size": 65536 00:19:31.638 }, 00:19:31.638 { 00:19:31.638 "name": "BaseBdev3", 00:19:31.638 "uuid": "6470c021-fa1c-46f2-80f7-ef42a48e1d22", 00:19:31.638 "is_configured": true, 00:19:31.638 "data_offset": 0, 00:19:31.638 "data_size": 65536 00:19:31.638 } 00:19:31.638 ] 00:19:31.638 } 00:19:31.638 } 00:19:31.638 }' 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:19:31.638 BaseBdev2 00:19:31.638 BaseBdev3' 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:19:31.638 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:31.896 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:31.896 "name": "BaseBdev1", 
00:19:31.896 "aliases": [ 00:19:31.896 "b1a69507-3b1d-484b-9a26-be6dc0b9dd61" 00:19:31.896 ], 00:19:31.896 "product_name": "Malloc disk", 00:19:31.896 "block_size": 512, 00:19:31.896 "num_blocks": 65536, 00:19:31.896 "uuid": "b1a69507-3b1d-484b-9a26-be6dc0b9dd61", 00:19:31.896 "assigned_rate_limits": { 00:19:31.896 "rw_ios_per_sec": 0, 00:19:31.896 "rw_mbytes_per_sec": 0, 00:19:31.896 "r_mbytes_per_sec": 0, 00:19:31.896 "w_mbytes_per_sec": 0 00:19:31.896 }, 00:19:31.896 "claimed": true, 00:19:31.896 "claim_type": "exclusive_write", 00:19:31.896 "zoned": false, 00:19:31.896 "supported_io_types": { 00:19:31.896 "read": true, 00:19:31.896 "write": true, 00:19:31.896 "unmap": true, 00:19:31.896 "write_zeroes": true, 00:19:31.896 "flush": true, 00:19:31.896 "reset": true, 00:19:31.896 "compare": false, 00:19:31.896 "compare_and_write": false, 00:19:31.896 "abort": true, 00:19:31.896 "nvme_admin": false, 00:19:31.896 "nvme_io": false 00:19:31.896 }, 00:19:31.896 "memory_domains": [ 00:19:31.896 { 00:19:31.896 "dma_device_id": "system", 00:19:31.896 "dma_device_type": 1 00:19:31.896 }, 00:19:31.896 { 00:19:31.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:31.896 "dma_device_type": 2 00:19:31.896 } 00:19:31.896 ], 00:19:31.896 "driver_specific": {} 00:19:31.896 }' 00:19:31.896 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.155 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.155 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.155 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.155 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.155 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.155 12:01:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.155 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.413 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.413 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.413 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:32.413 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:32.413 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:32.413 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:32.413 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:32.672 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:32.672 "name": "BaseBdev2", 00:19:32.672 "aliases": [ 00:19:32.672 "90241fca-2729-47f1-98e3-d583ba7a98e8" 00:19:32.672 ], 00:19:32.672 "product_name": "Malloc disk", 00:19:32.672 "block_size": 512, 00:19:32.672 "num_blocks": 65536, 00:19:32.672 "uuid": "90241fca-2729-47f1-98e3-d583ba7a98e8", 00:19:32.672 "assigned_rate_limits": { 00:19:32.672 "rw_ios_per_sec": 0, 00:19:32.672 "rw_mbytes_per_sec": 0, 00:19:32.672 "r_mbytes_per_sec": 0, 00:19:32.672 "w_mbytes_per_sec": 0 00:19:32.672 }, 00:19:32.672 "claimed": true, 
00:19:32.672 "claim_type": "exclusive_write", 00:19:32.672 "zoned": false, 00:19:32.672 "supported_io_types": { 00:19:32.672 "read": true, 00:19:32.672 "write": true, 00:19:32.672 "unmap": true, 00:19:32.672 "write_zeroes": true, 00:19:32.672 "flush": true, 00:19:32.672 "reset": true, 00:19:32.672 "compare": false, 00:19:32.672 "compare_and_write": false, 00:19:32.672 "abort": true, 00:19:32.672 "nvme_admin": false, 00:19:32.672 "nvme_io": false 00:19:32.672 }, 00:19:32.672 "memory_domains": [ 00:19:32.672 { 00:19:32.672 "dma_device_id": "system", 00:19:32.672 "dma_device_type": 1 00:19:32.672 }, 00:19:32.672 { 00:19:32.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.672 "dma_device_type": 2 00:19:32.672 } 00:19:32.672 ], 00:19:32.672 "driver_specific": {} 00:19:32.672 }' 00:19:32.672 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.672 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:32.931 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:32.931 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.931 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:32.931 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:32.931 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.931 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:32.931 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:32.931 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.190 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.190 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.190 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:33.190 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:33.190 12:01:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:33.449 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:33.449 "name": "BaseBdev3", 00:19:33.449 "aliases": [ 00:19:33.449 "6470c021-fa1c-46f2-80f7-ef42a48e1d22" 00:19:33.449 ], 00:19:33.449 "product_name": "Malloc disk", 00:19:33.449 "block_size": 512, 00:19:33.449 "num_blocks": 65536, 00:19:33.449 "uuid": "6470c021-fa1c-46f2-80f7-ef42a48e1d22", 00:19:33.449 "assigned_rate_limits": { 00:19:33.449 "rw_ios_per_sec": 0, 00:19:33.449 "rw_mbytes_per_sec": 0, 00:19:33.449 "r_mbytes_per_sec": 0, 00:19:33.449 "w_mbytes_per_sec": 0 00:19:33.449 }, 00:19:33.449 "claimed": true, 00:19:33.449 "claim_type": "exclusive_write", 00:19:33.449 "zoned": false, 00:19:33.449 "supported_io_types": { 00:19:33.449 "read": true, 00:19:33.449 "write": true, 00:19:33.449 "unmap": true, 00:19:33.449 "write_zeroes": true, 00:19:33.449 "flush": true, 00:19:33.449 "reset": true, 00:19:33.449 "compare": false, 00:19:33.449 "compare_and_write": false, 00:19:33.449 "abort": true, 00:19:33.449 "nvme_admin": false, 00:19:33.449 "nvme_io": false 00:19:33.449 }, 00:19:33.449 "memory_domains": [ 
00:19:33.449 { 00:19:33.449 "dma_device_id": "system", 00:19:33.449 "dma_device_type": 1 00:19:33.449 }, 00:19:33.449 { 00:19:33.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:33.449 "dma_device_type": 2 00:19:33.449 } 00:19:33.449 ], 00:19:33.449 "driver_specific": {} 00:19:33.449 }' 00:19:33.449 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.449 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:33.449 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:33.449 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.449 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:33.708 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:33.708 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.708 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:33.708 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:33.708 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.708 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:33.708 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:33.708 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:33.966 [2024-07-21 12:01:32.807366] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.224 12:01:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.483 12:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:34.483 "name": "Existed_Raid", 00:19:34.483 "uuid": "68b78e07-d84f-41b5-a610-177457eada3c", 00:19:34.483 "strip_size_kb": 0, 00:19:34.483 "state": "online", 00:19:34.483 "raid_level": "raid1", 00:19:34.483 "superblock": false, 00:19:34.483 "num_base_bdevs": 3, 00:19:34.483 "num_base_bdevs_discovered": 2, 00:19:34.483 "num_base_bdevs_operational": 2, 00:19:34.483 "base_bdevs_list": [ 00:19:34.483 { 00:19:34.483 "name": null, 00:19:34.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.483 "is_configured": false, 00:19:34.483 "data_offset": 0, 00:19:34.483 "data_size": 65536 00:19:34.483 }, 00:19:34.483 { 00:19:34.483 "name": "BaseBdev2", 00:19:34.483 "uuid": "90241fca-2729-47f1-98e3-d583ba7a98e8", 00:19:34.483 "is_configured": true, 00:19:34.483 "data_offset": 0, 00:19:34.483 "data_size": 65536 00:19:34.483 }, 00:19:34.483 { 00:19:34.483 "name": "BaseBdev3", 00:19:34.483 "uuid": "6470c021-fa1c-46f2-80f7-ef42a48e1d22", 00:19:34.483 "is_configured": true, 00:19:34.483 "data_offset": 0, 00:19:34.483 "data_size": 65536 00:19:34.483 } 00:19:34.483 ] 00:19:34.483 }' 00:19:34.483 12:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:34.483 12:01:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:35.048 12:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:19:35.048 12:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:35.048 12:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.048 12:01:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:35.310 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:35.310 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:35.310 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:19:35.567 [2024-07-21 12:01:34.288201] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:35.567 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:35.567 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:35.567 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.567 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:19:35.824 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:19:35.824 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:35.824 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:19:36.080 [2024-07-21 12:01:34.804153] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:36.080 [2024-07-21 12:01:34.804428] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:36.080 [2024-07-21 12:01:34.816767] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:36.080 [2024-07-21 12:01:34.817105] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:36.080 [2024-07-21 12:01:34.817222] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:19:36.080 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:19:36.080 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:19:36.080 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.080 12:01:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:19:36.368 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:19:36.368 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:19:36.368 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:19:36.368 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:19:36.368 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:36.368 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:36.625 BaseBdev2 00:19:36.625 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:19:36.625 12:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:36.625 12:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:36.625 12:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:36.625 12:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:36.625 12:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:36.625 12:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:36.883 12:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:37.141 [ 00:19:37.141 { 00:19:37.141 "name": "BaseBdev2", 00:19:37.141 "aliases": [ 00:19:37.141 "9bddfdfa-683c-4fb4-8a62-b333972d42ac" 00:19:37.141 ], 00:19:37.141 "product_name": "Malloc disk", 00:19:37.141 "block_size": 512, 00:19:37.141 "num_blocks": 65536, 00:19:37.141 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:37.141 "assigned_rate_limits": { 00:19:37.141 "rw_ios_per_sec": 0, 00:19:37.141 "rw_mbytes_per_sec": 0, 00:19:37.141 "r_mbytes_per_sec": 0, 00:19:37.141 "w_mbytes_per_sec": 0 00:19:37.141 }, 
00:19:37.141 "claimed": false, 00:19:37.141 "zoned": false, 00:19:37.141 "supported_io_types": { 00:19:37.141 "read": true, 00:19:37.141 "write": true, 00:19:37.141 "unmap": true, 00:19:37.141 "write_zeroes": true, 00:19:37.141 "flush": true, 00:19:37.141 "reset": true, 00:19:37.141 "compare": false, 00:19:37.141 "compare_and_write": false, 00:19:37.141 "abort": true, 00:19:37.141 "nvme_admin": false, 00:19:37.141 "nvme_io": false 00:19:37.141 }, 00:19:37.141 "memory_domains": [ 00:19:37.141 { 00:19:37.141 "dma_device_id": "system", 00:19:37.141 "dma_device_type": 1 00:19:37.141 }, 00:19:37.141 { 00:19:37.141 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.141 "dma_device_type": 2 00:19:37.141 } 00:19:37.141 ], 00:19:37.141 "driver_specific": {} 00:19:37.141 } 00:19:37.141 ] 00:19:37.141 12:01:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:37.141 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:37.141 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:37.141 12:01:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:19:37.399 BaseBdev3 00:19:37.399 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:19:37.399 12:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:19:37.399 12:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:37.399 12:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:37.399 12:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:37.399 12:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:37.399 12:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:37.657 12:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:19:37.657 [ 00:19:37.657 { 00:19:37.657 "name": "BaseBdev3", 00:19:37.657 "aliases": [ 00:19:37.657 "4e691382-0f9b-4955-8230-9774bd2fac1d" 00:19:37.657 ], 00:19:37.657 "product_name": "Malloc disk", 00:19:37.657 "block_size": 512, 00:19:37.657 "num_blocks": 65536, 00:19:37.657 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:37.657 "assigned_rate_limits": { 00:19:37.657 "rw_ios_per_sec": 0, 00:19:37.657 "rw_mbytes_per_sec": 0, 00:19:37.657 "r_mbytes_per_sec": 0, 00:19:37.657 "w_mbytes_per_sec": 0 00:19:37.657 }, 00:19:37.657 "claimed": false, 00:19:37.657 "zoned": false, 00:19:37.657 "supported_io_types": { 00:19:37.657 "read": true, 00:19:37.657 "write": true, 00:19:37.657 "unmap": true, 00:19:37.657 "write_zeroes": true, 00:19:37.657 "flush": true, 00:19:37.657 "reset": true, 00:19:37.657 "compare": false, 00:19:37.657 "compare_and_write": false, 00:19:37.657 "abort": true, 00:19:37.657 "nvme_admin": false, 00:19:37.657 "nvme_io": false 00:19:37.657 }, 00:19:37.657 "memory_domains": [ 00:19:37.657 { 00:19:37.657 "dma_device_id": "system", 00:19:37.657 "dma_device_type": 1 00:19:37.657 }, 00:19:37.657 { 00:19:37.657 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:37.657 "dma_device_type": 2 00:19:37.657 } 00:19:37.657 ], 00:19:37.657 "driver_specific": {} 00:19:37.657 } 00:19:37.657 ] 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:37.919 [2024-07-21 12:01:36.739274] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:37.919 [2024-07-21 12:01:36.739578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:37.919 [2024-07-21 12:01:36.739723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:37.919 [2024-07-21 12:01:36.741933] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.919 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:38.177 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:38.177 "name": "Existed_Raid", 00:19:38.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.177 "strip_size_kb": 0, 00:19:38.177 "state": "configuring", 00:19:38.177 "raid_level": "raid1", 00:19:38.177 "superblock": false, 00:19:38.177 "num_base_bdevs": 3, 00:19:38.177 "num_base_bdevs_discovered": 2, 00:19:38.177 "num_base_bdevs_operational": 3, 00:19:38.177 "base_bdevs_list": [ 00:19:38.177 { 00:19:38.177 "name": "BaseBdev1", 00:19:38.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.177 "is_configured": false, 00:19:38.177 "data_offset": 0, 00:19:38.177 "data_size": 0 00:19:38.177 }, 00:19:38.177 { 00:19:38.177 "name": "BaseBdev2", 00:19:38.177 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:38.177 "is_configured": 
true, 00:19:38.177 "data_offset": 0, 00:19:38.177 "data_size": 65536 00:19:38.177 }, 00:19:38.177 { 00:19:38.177 "name": "BaseBdev3", 00:19:38.177 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:38.177 "is_configured": true, 00:19:38.177 "data_offset": 0, 00:19:38.177 "data_size": 65536 00:19:38.177 } 00:19:38.177 ] 00:19:38.177 }' 00:19:38.177 12:01:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:38.177 12:01:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:19:39.112 [2024-07-21 12:01:37.875571] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:39.112 12:01:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:39.370 12:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:39.370 "name": "Existed_Raid", 00:19:39.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.370 "strip_size_kb": 0, 00:19:39.370 "state": "configuring", 00:19:39.370 "raid_level": "raid1", 00:19:39.370 "superblock": false, 00:19:39.370 "num_base_bdevs": 3, 00:19:39.370 "num_base_bdevs_discovered": 1, 00:19:39.370 "num_base_bdevs_operational": 3, 00:19:39.370 "base_bdevs_list": [ 00:19:39.370 { 00:19:39.370 "name": "BaseBdev1", 00:19:39.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:39.370 "is_configured": false, 00:19:39.370 "data_offset": 0, 00:19:39.370 "data_size": 0 00:19:39.370 }, 00:19:39.370 { 00:19:39.370 "name": null, 00:19:39.370 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:39.370 "is_configured": false, 00:19:39.370 "data_offset": 0, 00:19:39.370 "data_size": 65536 00:19:39.370 }, 00:19:39.370 { 00:19:39.370 "name": "BaseBdev3", 00:19:39.370 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:39.370 "is_configured": true, 00:19:39.370 "data_offset": 0, 00:19:39.370 "data_size": 65536 00:19:39.370 } 00:19:39.370 ] 00:19:39.370 }' 00:19:39.370 12:01:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:39.370 12:01:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:40.304 12:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:40.304 12:01:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:40.304 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:19:40.304 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:40.562 [2024-07-21 12:01:39.336464] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.562 BaseBdev1 00:19:40.562 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:19:40.562 12:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:40.562 12:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:40.562 12:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:40.562 12:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:40.562 12:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:40.562 12:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:40.820 12:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:41.078 [ 00:19:41.078 { 00:19:41.078 "name": "BaseBdev1", 00:19:41.078 "aliases": [ 00:19:41.078 "ce06d74c-7be7-424e-84ae-6225de387208" 00:19:41.078 ], 00:19:41.078 "product_name": "Malloc disk", 00:19:41.078 "block_size": 512, 00:19:41.078 "num_blocks": 65536, 00:19:41.078 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:41.078 "assigned_rate_limits": { 00:19:41.078 "rw_ios_per_sec": 0, 00:19:41.078 "rw_mbytes_per_sec": 0, 00:19:41.078 "r_mbytes_per_sec": 0, 00:19:41.078 "w_mbytes_per_sec": 0 00:19:41.078 }, 00:19:41.078 "claimed": true, 00:19:41.078 "claim_type": "exclusive_write", 00:19:41.078 "zoned": false, 00:19:41.078 "supported_io_types": { 00:19:41.078 "read": true, 00:19:41.078 "write": true, 00:19:41.078 "unmap": true, 00:19:41.078 "write_zeroes": true, 00:19:41.078 "flush": true, 00:19:41.078 "reset": true, 00:19:41.078 "compare": false, 00:19:41.078 "compare_and_write": false, 00:19:41.078 "abort": true, 00:19:41.078 "nvme_admin": false, 00:19:41.078 "nvme_io": false 00:19:41.078 }, 00:19:41.078 "memory_domains": [ 00:19:41.078 { 00:19:41.078 "dma_device_id": "system", 00:19:41.078 "dma_device_type": 1 00:19:41.078 }, 00:19:41.078 { 00:19:41.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.078 "dma_device_type": 2 00:19:41.078 } 00:19:41.078 ], 00:19:41.078 "driver_specific": {} 00:19:41.078 } 00:19:41.078 ] 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 
-- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.078 12:01:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:41.338 12:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:41.338 "name": "Existed_Raid", 00:19:41.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:41.338 "strip_size_kb": 0, 00:19:41.338 "state": "configuring", 00:19:41.338 "raid_level": "raid1", 00:19:41.338 "superblock": false, 00:19:41.338 "num_base_bdevs": 3, 00:19:41.338 "num_base_bdevs_discovered": 2, 00:19:41.338 "num_base_bdevs_operational": 3, 00:19:41.338 "base_bdevs_list": [ 00:19:41.338 { 00:19:41.338 "name": "BaseBdev1", 00:19:41.338 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:41.338 "is_configured": true, 00:19:41.338 "data_offset": 0, 00:19:41.338 "data_size": 65536 00:19:41.338 }, 00:19:41.338 { 00:19:41.338 "name": null, 00:19:41.338 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:41.338 "is_configured": false, 00:19:41.338 "data_offset": 0, 00:19:41.338 "data_size": 65536 00:19:41.338 }, 00:19:41.338 { 00:19:41.338 "name": "BaseBdev3", 00:19:41.338 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:41.338 "is_configured": true, 00:19:41.338 "data_offset": 0, 00:19:41.338 "data_size": 65536 00:19:41.338 } 00:19:41.338 ] 00:19:41.338 }' 00:19:41.338 12:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:41.338 12:01:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:41.905 12:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.905 12:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:42.162 12:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:19:42.162 12:01:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:19:42.420 [2024-07-21 12:01:41.209017] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:19:42.420 12:01:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:42.420 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:42.420 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:42.420 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:42.420 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:42.420 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:42.420 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:42.421 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:42.421 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:42.421 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:42.421 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.421 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:42.679 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:42.679 "name": "Existed_Raid", 00:19:42.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:42.679 "strip_size_kb": 0, 00:19:42.679 "state": "configuring", 00:19:42.679 "raid_level": "raid1", 00:19:42.679 "superblock": false, 00:19:42.679 "num_base_bdevs": 3, 00:19:42.679 "num_base_bdevs_discovered": 1, 00:19:42.679 "num_base_bdevs_operational": 3, 00:19:42.679 "base_bdevs_list": [ 00:19:42.679 { 00:19:42.679 "name": "BaseBdev1", 00:19:42.679 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:42.679 "is_configured": true, 00:19:42.679 "data_offset": 0, 00:19:42.679 "data_size": 65536 00:19:42.679 }, 00:19:42.679 { 00:19:42.679 "name": null, 00:19:42.679 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:42.679 "is_configured": false, 00:19:42.679 "data_offset": 0, 00:19:42.679 "data_size": 65536 00:19:42.679 }, 00:19:42.679 { 00:19:42.679 "name": null, 00:19:42.679 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:42.679 "is_configured": false, 00:19:42.679 "data_offset": 0, 00:19:42.679 "data_size": 65536 00:19:42.679 } 00:19:42.679 ] 00:19:42.679 }' 00:19:42.679 12:01:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:42.679 12:01:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:43.613 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.613 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:43.613 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:19:43.613 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:19:43.872 [2024-07-21 12:01:42.669413] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.872 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:44.131 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:44.131 "name": "Existed_Raid", 00:19:44.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.131 "strip_size_kb": 0, 00:19:44.131 "state": "configuring", 00:19:44.131 "raid_level": "raid1", 00:19:44.131 "superblock": false, 00:19:44.131 "num_base_bdevs": 3, 00:19:44.131 "num_base_bdevs_discovered": 2, 00:19:44.131 "num_base_bdevs_operational": 3, 00:19:44.131 "base_bdevs_list": [ 00:19:44.131 { 00:19:44.131 "name": "BaseBdev1", 00:19:44.131 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:44.131 "is_configured": true, 00:19:44.131 "data_offset": 0, 00:19:44.131 "data_size": 65536 00:19:44.131 }, 00:19:44.131 { 00:19:44.131 "name": null, 00:19:44.131 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:44.131 "is_configured": false, 00:19:44.131 "data_offset": 0, 00:19:44.131 "data_size": 65536 00:19:44.131 }, 00:19:44.131 { 00:19:44.131 "name": "BaseBdev3", 00:19:44.131 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:44.131 "is_configured": true, 00:19:44.131 "data_offset": 0, 00:19:44.131 "data_size": 65536 00:19:44.131 } 00:19:44.131 ] 00:19:44.131 }' 00:19:44.131 12:01:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:44.131 12:01:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:45.066 12:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.066 12:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:19:45.066 12:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:19:45.066 12:01:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:19:45.323 [2024-07-21 12:01:44.069754] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:45.323 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:45.581 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:45.581 "name": "Existed_Raid", 00:19:45.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:45.581 "strip_size_kb": 0, 00:19:45.581 "state": "configuring", 00:19:45.581 "raid_level": "raid1", 00:19:45.581 "superblock": false, 00:19:45.581 "num_base_bdevs": 3, 00:19:45.581 "num_base_bdevs_discovered": 1, 00:19:45.581 "num_base_bdevs_operational": 3, 00:19:45.581 "base_bdevs_list": [ 00:19:45.581 { 00:19:45.581 "name": null, 00:19:45.581 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:45.581 "is_configured": false, 00:19:45.581 "data_offset": 0, 00:19:45.581 "data_size": 65536 00:19:45.581 }, 00:19:45.581 { 00:19:45.581 "name": null, 00:19:45.581 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:45.581 "is_configured": false, 00:19:45.581 "data_offset": 0, 00:19:45.581 "data_size": 65536 00:19:45.581 }, 00:19:45.581 { 00:19:45.581 "name": "BaseBdev3", 00:19:45.581 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:45.581 "is_configured": true, 00:19:45.581 "data_offset": 0, 00:19:45.581 "data_size": 65536 00:19:45.581 } 00:19:45.581 ] 00:19:45.581 }' 00:19:45.581 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:45.581 12:01:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:46.146 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.146 12:01:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:19:46.404 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:19:46.404 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:19:46.662 [2024-07-21 12:01:45.392225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.662 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:46.921 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:46.921 "name": "Existed_Raid", 00:19:46.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:46.921 "strip_size_kb": 0, 00:19:46.921 "state": "configuring", 00:19:46.921 "raid_level": "raid1", 00:19:46.921 "superblock": false, 00:19:46.921 "num_base_bdevs": 3, 00:19:46.921 "num_base_bdevs_discovered": 2, 00:19:46.921 "num_base_bdevs_operational": 3, 00:19:46.921 "base_bdevs_list": [ 00:19:46.921 { 00:19:46.921 "name": null, 00:19:46.921 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:46.921 "is_configured": false, 00:19:46.921 "data_offset": 0, 00:19:46.921 "data_size": 65536 00:19:46.921 }, 00:19:46.921 { 00:19:46.921 "name": "BaseBdev2", 00:19:46.921 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:46.921 "is_configured": true, 00:19:46.921 "data_offset": 0, 00:19:46.921 "data_size": 65536 00:19:46.921 }, 00:19:46.921 { 00:19:46.921 "name": "BaseBdev3", 00:19:46.921 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:46.921 "is_configured": true, 00:19:46.921 "data_offset": 0, 00:19:46.921 "data_size": 65536 00:19:46.921 } 00:19:46.921 ] 00:19:46.921 }' 00:19:46.921 12:01:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:46.921 12:01:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:47.557 12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.557 12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:19:47.825 12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:19:47.825 
12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.825 12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:19:48.083 12:01:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ce06d74c-7be7-424e-84ae-6225de387208 00:19:48.342 [2024-07-21 12:01:47.133243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:19:48.342 [2024-07-21 12:01:47.133585] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:19:48.342 [2024-07-21 12:01:47.133634] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:48.342 [2024-07-21 12:01:47.133835] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:19:48.342 [2024-07-21 12:01:47.134303] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:19:48.342 [2024-07-21 12:01:47.134471] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008780 00:19:48.342 [2024-07-21 12:01:47.134825] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:48.342 NewBaseBdev 00:19:48.342 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:19:48.342 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:19:48.342 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:48.342 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:19:48.342 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:48.342 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:48.342 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:48.601 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:19:48.859 [ 00:19:48.859 { 00:19:48.859 "name": "NewBaseBdev", 00:19:48.859 "aliases": [ 00:19:48.859 "ce06d74c-7be7-424e-84ae-6225de387208" 00:19:48.859 ], 00:19:48.859 "product_name": "Malloc disk", 00:19:48.859 "block_size": 512, 00:19:48.859 "num_blocks": 65536, 00:19:48.859 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:48.859 "assigned_rate_limits": { 00:19:48.859 "rw_ios_per_sec": 0, 00:19:48.859 "rw_mbytes_per_sec": 0, 00:19:48.859 "r_mbytes_per_sec": 0, 00:19:48.859 "w_mbytes_per_sec": 0 00:19:48.859 }, 00:19:48.859 "claimed": true, 00:19:48.859 "claim_type": "exclusive_write", 00:19:48.859 "zoned": false, 00:19:48.859 "supported_io_types": { 00:19:48.859 "read": true, 00:19:48.859 "write": true, 00:19:48.859 "unmap": true, 00:19:48.859 "write_zeroes": true, 00:19:48.859 "flush": true, 00:19:48.859 "reset": true, 00:19:48.859 "compare": false, 00:19:48.859 "compare_and_write": false, 00:19:48.859 "abort": true, 00:19:48.859 "nvme_admin": false, 00:19:48.859 "nvme_io": false 00:19:48.859 }, 
00:19:48.859 "memory_domains": [ 00:19:48.859 { 00:19:48.859 "dma_device_id": "system", 00:19:48.859 "dma_device_type": 1 00:19:48.859 }, 00:19:48.859 { 00:19:48.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:48.859 "dma_device_type": 2 00:19:48.859 } 00:19:48.859 ], 00:19:48.859 "driver_specific": {} 00:19:48.859 } 00:19:48.859 ] 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:48.859 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.117 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:49.117 "name": "Existed_Raid", 00:19:49.117 "uuid": "50806c97-ab4e-492f-a595-4fdff0f7a4af", 00:19:49.117 "strip_size_kb": 0, 00:19:49.117 "state": "online", 00:19:49.117 "raid_level": "raid1", 00:19:49.117 "superblock": false, 00:19:49.117 "num_base_bdevs": 3, 00:19:49.117 "num_base_bdevs_discovered": 3, 00:19:49.117 "num_base_bdevs_operational": 3, 00:19:49.117 "base_bdevs_list": [ 00:19:49.117 { 00:19:49.117 "name": "NewBaseBdev", 00:19:49.117 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:49.117 "is_configured": true, 00:19:49.117 "data_offset": 0, 00:19:49.117 "data_size": 65536 00:19:49.117 }, 00:19:49.117 { 00:19:49.117 "name": "BaseBdev2", 00:19:49.117 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:49.117 "is_configured": true, 00:19:49.117 "data_offset": 0, 00:19:49.117 "data_size": 65536 00:19:49.117 }, 00:19:49.117 { 00:19:49.117 "name": "BaseBdev3", 00:19:49.117 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:49.117 "is_configured": true, 00:19:49.117 "data_offset": 0, 00:19:49.117 "data_size": 65536 00:19:49.117 } 00:19:49.117 ] 00:19:49.117 }' 00:19:49.117 12:01:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:49.117 12:01:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:50.051 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:19:50.051 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=Existed_Raid 00:19:50.051 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:19:50.051 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:19:50.051 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:19:50.051 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:19:50.051 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:19:50.051 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:19:50.051 [2024-07-21 12:01:48.914004] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:50.308 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:19:50.308 "name": "Existed_Raid", 00:19:50.308 "aliases": [ 00:19:50.308 "50806c97-ab4e-492f-a595-4fdff0f7a4af" 00:19:50.308 ], 00:19:50.308 "product_name": "Raid Volume", 00:19:50.308 "block_size": 512, 00:19:50.308 "num_blocks": 65536, 00:19:50.308 "uuid": "50806c97-ab4e-492f-a595-4fdff0f7a4af", 00:19:50.308 "assigned_rate_limits": { 00:19:50.308 "rw_ios_per_sec": 0, 00:19:50.308 "rw_mbytes_per_sec": 0, 00:19:50.308 "r_mbytes_per_sec": 0, 00:19:50.308 "w_mbytes_per_sec": 0 00:19:50.308 }, 00:19:50.308 "claimed": false, 00:19:50.308 "zoned": false, 00:19:50.308 "supported_io_types": { 00:19:50.308 "read": true, 00:19:50.308 "write": true, 00:19:50.308 "unmap": false, 00:19:50.308 "write_zeroes": true, 00:19:50.308 "flush": false, 00:19:50.308 "reset": true, 00:19:50.308 "compare": false, 00:19:50.308 "compare_and_write": false, 00:19:50.308 "abort": false, 00:19:50.308 "nvme_admin": false, 00:19:50.308 "nvme_io": false 00:19:50.308 }, 00:19:50.308 "memory_domains": [ 00:19:50.308 { 00:19:50.308 "dma_device_id": "system", 00:19:50.308 "dma_device_type": 1 00:19:50.308 }, 00:19:50.308 { 00:19:50.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.309 "dma_device_type": 2 00:19:50.309 }, 00:19:50.309 { 00:19:50.309 "dma_device_id": "system", 00:19:50.309 "dma_device_type": 1 00:19:50.309 }, 00:19:50.309 { 00:19:50.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.309 "dma_device_type": 2 00:19:50.309 }, 00:19:50.309 { 00:19:50.309 "dma_device_id": "system", 00:19:50.309 "dma_device_type": 1 00:19:50.309 }, 00:19:50.309 { 00:19:50.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.309 "dma_device_type": 2 00:19:50.309 } 00:19:50.309 ], 00:19:50.309 "driver_specific": { 00:19:50.309 "raid": { 00:19:50.309 "uuid": "50806c97-ab4e-492f-a595-4fdff0f7a4af", 00:19:50.309 "strip_size_kb": 0, 00:19:50.309 "state": "online", 00:19:50.309 "raid_level": "raid1", 00:19:50.309 "superblock": false, 00:19:50.309 "num_base_bdevs": 3, 00:19:50.309 "num_base_bdevs_discovered": 3, 00:19:50.309 "num_base_bdevs_operational": 3, 00:19:50.309 "base_bdevs_list": [ 00:19:50.309 { 00:19:50.309 "name": "NewBaseBdev", 00:19:50.309 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:50.309 "is_configured": true, 00:19:50.309 "data_offset": 0, 00:19:50.309 "data_size": 65536 00:19:50.309 }, 00:19:50.309 { 00:19:50.309 "name": "BaseBdev2", 00:19:50.309 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:50.309 "is_configured": true, 00:19:50.309 "data_offset": 0, 00:19:50.309 "data_size": 65536 00:19:50.309 }, 00:19:50.309 { 00:19:50.309 "name": "BaseBdev3", 
00:19:50.309 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:50.309 "is_configured": true, 00:19:50.309 "data_offset": 0, 00:19:50.309 "data_size": 65536 00:19:50.309 } 00:19:50.309 ] 00:19:50.309 } 00:19:50.309 } 00:19:50.309 }' 00:19:50.309 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:50.309 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:19:50.309 BaseBdev2 00:19:50.309 BaseBdev3' 00:19:50.309 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:50.309 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:19:50.309 12:01:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:50.566 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:50.566 "name": "NewBaseBdev", 00:19:50.566 "aliases": [ 00:19:50.566 "ce06d74c-7be7-424e-84ae-6225de387208" 00:19:50.566 ], 00:19:50.566 "product_name": "Malloc disk", 00:19:50.566 "block_size": 512, 00:19:50.566 "num_blocks": 65536, 00:19:50.566 "uuid": "ce06d74c-7be7-424e-84ae-6225de387208", 00:19:50.566 "assigned_rate_limits": { 00:19:50.566 "rw_ios_per_sec": 0, 00:19:50.566 "rw_mbytes_per_sec": 0, 00:19:50.566 "r_mbytes_per_sec": 0, 00:19:50.566 "w_mbytes_per_sec": 0 00:19:50.566 }, 00:19:50.566 "claimed": true, 00:19:50.566 "claim_type": "exclusive_write", 00:19:50.566 "zoned": false, 00:19:50.566 "supported_io_types": { 00:19:50.566 "read": true, 00:19:50.566 "write": true, 00:19:50.566 "unmap": true, 00:19:50.566 "write_zeroes": true, 00:19:50.566 "flush": true, 00:19:50.566 "reset": true, 00:19:50.566 "compare": false, 00:19:50.566 "compare_and_write": false, 00:19:50.566 "abort": true, 00:19:50.566 "nvme_admin": false, 00:19:50.566 "nvme_io": false 00:19:50.566 }, 00:19:50.566 "memory_domains": [ 00:19:50.566 { 00:19:50.566 "dma_device_id": "system", 00:19:50.566 "dma_device_type": 1 00:19:50.566 }, 00:19:50.566 { 00:19:50.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.566 "dma_device_type": 2 00:19:50.566 } 00:19:50.566 ], 00:19:50.566 "driver_specific": {} 00:19:50.566 }' 00:19:50.566 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.566 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:50.566 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:50.566 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.566 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:50.824 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:50.824 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.824 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:50.824 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:50.824 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.824 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:50.824 12:01:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:50.824 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:50.824 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:19:50.824 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:51.112 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:51.112 "name": "BaseBdev2", 00:19:51.112 "aliases": [ 00:19:51.112 "9bddfdfa-683c-4fb4-8a62-b333972d42ac" 00:19:51.112 ], 00:19:51.112 "product_name": "Malloc disk", 00:19:51.112 "block_size": 512, 00:19:51.112 "num_blocks": 65536, 00:19:51.112 "uuid": "9bddfdfa-683c-4fb4-8a62-b333972d42ac", 00:19:51.112 "assigned_rate_limits": { 00:19:51.112 "rw_ios_per_sec": 0, 00:19:51.112 "rw_mbytes_per_sec": 0, 00:19:51.112 "r_mbytes_per_sec": 0, 00:19:51.112 "w_mbytes_per_sec": 0 00:19:51.112 }, 00:19:51.112 "claimed": true, 00:19:51.112 "claim_type": "exclusive_write", 00:19:51.112 "zoned": false, 00:19:51.112 "supported_io_types": { 00:19:51.112 "read": true, 00:19:51.112 "write": true, 00:19:51.112 "unmap": true, 00:19:51.112 "write_zeroes": true, 00:19:51.112 "flush": true, 00:19:51.112 "reset": true, 00:19:51.112 "compare": false, 00:19:51.112 "compare_and_write": false, 00:19:51.112 "abort": true, 00:19:51.112 "nvme_admin": false, 00:19:51.112 "nvme_io": false 00:19:51.112 }, 00:19:51.112 "memory_domains": [ 00:19:51.112 { 00:19:51.112 "dma_device_id": "system", 00:19:51.112 "dma_device_type": 1 00:19:51.112 }, 00:19:51.112 { 00:19:51.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.112 "dma_device_type": 2 00:19:51.112 } 00:19:51.112 ], 00:19:51.112 "driver_specific": {} 00:19:51.112 }' 00:19:51.112 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.112 12:01:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.370 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:51.370 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.370 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.370 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:51.370 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.370 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:51.370 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:51.370 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.628 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:51.628 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:51.628 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:19:51.628 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:19:51.628 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:19:51.886 
12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:19:51.886 "name": "BaseBdev3", 00:19:51.886 "aliases": [ 00:19:51.886 "4e691382-0f9b-4955-8230-9774bd2fac1d" 00:19:51.886 ], 00:19:51.886 "product_name": "Malloc disk", 00:19:51.886 "block_size": 512, 00:19:51.886 "num_blocks": 65536, 00:19:51.886 "uuid": "4e691382-0f9b-4955-8230-9774bd2fac1d", 00:19:51.886 "assigned_rate_limits": { 00:19:51.886 "rw_ios_per_sec": 0, 00:19:51.886 "rw_mbytes_per_sec": 0, 00:19:51.886 "r_mbytes_per_sec": 0, 00:19:51.886 "w_mbytes_per_sec": 0 00:19:51.886 }, 00:19:51.886 "claimed": true, 00:19:51.886 "claim_type": "exclusive_write", 00:19:51.886 "zoned": false, 00:19:51.886 "supported_io_types": { 00:19:51.886 "read": true, 00:19:51.886 "write": true, 00:19:51.886 "unmap": true, 00:19:51.886 "write_zeroes": true, 00:19:51.886 "flush": true, 00:19:51.886 "reset": true, 00:19:51.886 "compare": false, 00:19:51.886 "compare_and_write": false, 00:19:51.886 "abort": true, 00:19:51.886 "nvme_admin": false, 00:19:51.886 "nvme_io": false 00:19:51.886 }, 00:19:51.886 "memory_domains": [ 00:19:51.886 { 00:19:51.886 "dma_device_id": "system", 00:19:51.886 "dma_device_type": 1 00:19:51.886 }, 00:19:51.886 { 00:19:51.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:51.886 "dma_device_type": 2 00:19:51.886 } 00:19:51.886 ], 00:19:51.886 "driver_specific": {} 00:19:51.886 }' 00:19:51.886 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.886 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:19:51.886 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:19:51.886 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:51.886 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:19:52.144 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:19:52.144 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:52.144 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:19:52.144 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:19:52.144 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:52.144 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:19:52.144 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:19:52.144 12:01:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:52.402 [2024-07-21 12:01:51.234192] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:52.402 [2024-07-21 12:01:51.234481] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:52.402 [2024-07-21 12:01:51.234704] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:52.402 [2024-07-21 12:01:51.235081] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:52.402 [2024-07-21 12:01:51.235230] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name Existed_Raid, state offline 00:19:52.402 12:01:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 141509 00:19:52.402 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 141509 ']' 00:19:52.403 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 141509 00:19:52.403 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:19:52.403 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:52.403 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 141509 00:19:52.661 killing process with pid 141509 00:19:52.661 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:52.661 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:52.661 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 141509' 00:19:52.661 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 141509 00:19:52.661 [2024-07-21 12:01:51.279200] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:52.661 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 141509 00:19:52.661 [2024-07-21 12:01:51.308759] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:52.920 ************************************ 00:19:52.920 END TEST raid_state_function_test 00:19:52.920 ************************************ 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:19:52.920 00:19:52.920 real 0m30.676s 00:19:52.920 user 0m58.493s 00:19:52.920 sys 0m3.588s 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:19:52.920 12:01:51 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:19:52.920 12:01:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:19:52.920 12:01:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:52.920 12:01:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:52.920 ************************************ 00:19:52.920 START TEST raid_state_function_test_sb 00:19:52.920 ************************************ 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 3 true 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=142500 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 142500' 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:52.920 Process raid pid: 142500 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 142500 /var/tmp/spdk-raid.sock 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 142500 ']' 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:52.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
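The startup traced above (bdev_svc launched against /var/tmp/spdk-raid.sock, then waiting for the RPC socket) can be replayed by hand roughly as follows. This is a condensed, hypothetical sketch: the binary path, socket path, sizes and bdev names are taken verbatim from the trace, while the wait loop and the creation loop are assumed glue, not the test script itself.

    # Condensed, illustrative sketch of what this trace drives over the RPC socket.
    rpc_sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Start the minimal bdev application with bdev_raid debug logging enabled.
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$rpc_sock" -i 0 -L bdev_raid &

    # Wait for the RPC UNIX domain socket to appear before issuing commands.
    while [ ! -S "$rpc_sock" ]; do sleep 0.1; done

    # Create the raid1 bdev with on-disk superblocks (-s) while its base bdevs
    # do not exist yet, so it stays in the "configuring" state (as the log shows),
    # then add the 32 MiB / 512-byte-block malloc base bdevs (65536 blocks each).
    "$rpc" -s "$rpc_sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        "$rpc" -s "$rpc_sock" bdev_malloc_create 32 512 -b "$b"
    done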
00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:52.920 12:01:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:52.920 [2024-07-21 12:01:51.682797] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:52.920 [2024-07-21 12:01:51.684009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.178 [2024-07-21 12:01:51.854284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.178 [2024-07-21 12:01:51.951353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.178 [2024-07-21 12:01:52.009547] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:53.746 12:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:53.746 12:01:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:19:53.746 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:54.005 [2024-07-21 12:01:52.851970] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:54.005 [2024-07-21 12:01:52.852432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:54.005 [2024-07-21 12:01:52.852574] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:54.005 [2024-07-21 12:01:52.852654] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:54.005 [2024-07-21 12:01:52.852866] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:54.005 [2024-07-21 12:01:52.852972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
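The verify_raid_bdev_state checks that recur throughout this trace boil down to pulling one raid bdev out of bdev_raid_get_bdevs all with jq and comparing fields such as state, raid_level and num_base_bdevs. A minimal hypothetical helper along those lines is sketched below; the field names come from the JSON dumps in this log, everything else (the function, its arguments) is assumed and is not the test's actual implementation.

    # Minimal sketch of the kind of check verify_raid_bdev_state performs here.
    check_raid_state() {
        local name=$1 expected_state=$2 expected_level=$3
        local info
        # Fetch all raid bdevs and keep only the one we are interested in.
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        # Compare the reported state and raid level against the expectations.
        [[ $(jq -r .state <<< "$info") == "$expected_state" ]] || return 1
        [[ $(jq -r .raid_level <<< "$info") == "$expected_level" ]] || return 1
    }
    # Example matching the calls in this trace:
    #   check_raid_state Existed_Raid configuring raid1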
00:19:54.264 12:01:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:54.529 12:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:54.529 "name": "Existed_Raid", 00:19:54.529 "uuid": "00b3d2eb-fd6e-4c95-bf61-12598f0df999", 00:19:54.529 "strip_size_kb": 0, 00:19:54.529 "state": "configuring", 00:19:54.529 "raid_level": "raid1", 00:19:54.529 "superblock": true, 00:19:54.529 "num_base_bdevs": 3, 00:19:54.529 "num_base_bdevs_discovered": 0, 00:19:54.529 "num_base_bdevs_operational": 3, 00:19:54.529 "base_bdevs_list": [ 00:19:54.529 { 00:19:54.529 "name": "BaseBdev1", 00:19:54.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.529 "is_configured": false, 00:19:54.529 "data_offset": 0, 00:19:54.529 "data_size": 0 00:19:54.529 }, 00:19:54.529 { 00:19:54.529 "name": "BaseBdev2", 00:19:54.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.529 "is_configured": false, 00:19:54.529 "data_offset": 0, 00:19:54.529 "data_size": 0 00:19:54.529 }, 00:19:54.529 { 00:19:54.529 "name": "BaseBdev3", 00:19:54.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:54.529 "is_configured": false, 00:19:54.529 "data_offset": 0, 00:19:54.529 "data_size": 0 00:19:54.529 } 00:19:54.529 ] 00:19:54.529 }' 00:19:54.529 12:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:54.529 12:01:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:55.095 12:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:55.095 [2024-07-21 12:01:53.952063] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:55.095 [2024-07-21 12:01:53.952350] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:19:55.353 12:01:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:55.353 [2024-07-21 12:01:54.168148] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:55.353 [2024-07-21 12:01:54.168491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:55.353 [2024-07-21 12:01:54.168675] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:55.353 [2024-07-21 12:01:54.168838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:55.353 [2024-07-21 12:01:54.168967] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:55.353 [2024-07-21 12:01:54.169039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:55.353 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:55.611 [2024-07-21 12:01:54.403293] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:55.611 BaseBdev1 00:19:55.611 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:19:55.611 12:01:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:19:55.611 12:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:55.611 12:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:19:55.611 12:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:55.611 12:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:55.611 12:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:55.869 12:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:56.127 [ 00:19:56.127 { 00:19:56.127 "name": "BaseBdev1", 00:19:56.127 "aliases": [ 00:19:56.127 "e513ee0b-dd7f-4266-a0d7-be0a53e89584" 00:19:56.127 ], 00:19:56.127 "product_name": "Malloc disk", 00:19:56.127 "block_size": 512, 00:19:56.127 "num_blocks": 65536, 00:19:56.127 "uuid": "e513ee0b-dd7f-4266-a0d7-be0a53e89584", 00:19:56.127 "assigned_rate_limits": { 00:19:56.127 "rw_ios_per_sec": 0, 00:19:56.127 "rw_mbytes_per_sec": 0, 00:19:56.127 "r_mbytes_per_sec": 0, 00:19:56.127 "w_mbytes_per_sec": 0 00:19:56.127 }, 00:19:56.127 "claimed": true, 00:19:56.127 "claim_type": "exclusive_write", 00:19:56.127 "zoned": false, 00:19:56.127 "supported_io_types": { 00:19:56.127 "read": true, 00:19:56.127 "write": true, 00:19:56.127 "unmap": true, 00:19:56.127 "write_zeroes": true, 00:19:56.127 "flush": true, 00:19:56.127 "reset": true, 00:19:56.127 "compare": false, 00:19:56.127 "compare_and_write": false, 00:19:56.127 "abort": true, 00:19:56.127 "nvme_admin": false, 00:19:56.127 "nvme_io": false 00:19:56.127 }, 00:19:56.127 "memory_domains": [ 00:19:56.127 { 00:19:56.127 "dma_device_id": "system", 00:19:56.127 "dma_device_type": 1 00:19:56.127 }, 00:19:56.127 { 00:19:56.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:56.127 "dma_device_type": 2 00:19:56.127 } 00:19:56.127 ], 00:19:56.127 "driver_specific": {} 00:19:56.127 } 00:19:56.127 ] 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:56.127 12:01:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:56.127 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.128 12:01:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:56.385 12:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:56.385 "name": "Existed_Raid", 00:19:56.385 "uuid": "bf7396b1-0079-4abb-99e6-2700923185ab", 00:19:56.385 "strip_size_kb": 0, 00:19:56.385 "state": "configuring", 00:19:56.385 "raid_level": "raid1", 00:19:56.385 "superblock": true, 00:19:56.385 "num_base_bdevs": 3, 00:19:56.385 "num_base_bdevs_discovered": 1, 00:19:56.385 "num_base_bdevs_operational": 3, 00:19:56.385 "base_bdevs_list": [ 00:19:56.385 { 00:19:56.385 "name": "BaseBdev1", 00:19:56.385 "uuid": "e513ee0b-dd7f-4266-a0d7-be0a53e89584", 00:19:56.385 "is_configured": true, 00:19:56.385 "data_offset": 2048, 00:19:56.385 "data_size": 63488 00:19:56.385 }, 00:19:56.385 { 00:19:56.385 "name": "BaseBdev2", 00:19:56.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.385 "is_configured": false, 00:19:56.385 "data_offset": 0, 00:19:56.385 "data_size": 0 00:19:56.385 }, 00:19:56.385 { 00:19:56.385 "name": "BaseBdev3", 00:19:56.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:56.385 "is_configured": false, 00:19:56.385 "data_offset": 0, 00:19:56.385 "data_size": 0 00:19:56.385 } 00:19:56.385 ] 00:19:56.385 }' 00:19:56.385 12:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:56.385 12:01:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:56.951 12:01:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:57.209 [2024-07-21 12:01:56.035777] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:57.209 [2024-07-21 12:01:56.036118] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:57.209 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:19:57.467 [2024-07-21 12:01:56.303886] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:57.467 [2024-07-21 12:01:56.306263] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:57.467 [2024-07-21 12:01:56.306472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:57.467 [2024-07-21 12:01:56.306653] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:57.467 [2024-07-21 12:01:56.306748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:57.467 
12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.467 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:58.031 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:58.031 "name": "Existed_Raid", 00:19:58.031 "uuid": "36106845-a3ea-4743-832b-132dcd3a0a97", 00:19:58.031 "strip_size_kb": 0, 00:19:58.031 "state": "configuring", 00:19:58.031 "raid_level": "raid1", 00:19:58.031 "superblock": true, 00:19:58.031 "num_base_bdevs": 3, 00:19:58.031 "num_base_bdevs_discovered": 1, 00:19:58.031 "num_base_bdevs_operational": 3, 00:19:58.031 "base_bdevs_list": [ 00:19:58.031 { 00:19:58.031 "name": "BaseBdev1", 00:19:58.031 "uuid": "e513ee0b-dd7f-4266-a0d7-be0a53e89584", 00:19:58.031 "is_configured": true, 00:19:58.031 "data_offset": 2048, 00:19:58.031 "data_size": 63488 00:19:58.031 }, 00:19:58.031 { 00:19:58.031 "name": "BaseBdev2", 00:19:58.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.031 "is_configured": false, 00:19:58.031 "data_offset": 0, 00:19:58.031 "data_size": 0 00:19:58.031 }, 00:19:58.031 { 00:19:58.031 "name": "BaseBdev3", 00:19:58.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:58.032 "is_configured": false, 00:19:58.032 "data_offset": 0, 00:19:58.032 "data_size": 0 00:19:58.032 } 00:19:58.032 ] 00:19:58.032 }' 00:19:58.032 12:01:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:58.032 12:01:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:58.597 12:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:58.854 [2024-07-21 12:01:57.542531] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:58.854 BaseBdev2 00:19:58.854 12:01:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:19:58.854 12:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:19:58.854 12:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:19:58.854 12:01:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@897 -- # local i 00:19:58.854 12:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:19:58.854 12:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:19:58.854 12:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:59.112 12:01:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:59.370 [ 00:19:59.370 { 00:19:59.370 "name": "BaseBdev2", 00:19:59.370 "aliases": [ 00:19:59.370 "a81f259a-f85a-41ca-adb2-36fcf5bd761e" 00:19:59.370 ], 00:19:59.370 "product_name": "Malloc disk", 00:19:59.370 "block_size": 512, 00:19:59.370 "num_blocks": 65536, 00:19:59.370 "uuid": "a81f259a-f85a-41ca-adb2-36fcf5bd761e", 00:19:59.370 "assigned_rate_limits": { 00:19:59.370 "rw_ios_per_sec": 0, 00:19:59.370 "rw_mbytes_per_sec": 0, 00:19:59.370 "r_mbytes_per_sec": 0, 00:19:59.370 "w_mbytes_per_sec": 0 00:19:59.370 }, 00:19:59.370 "claimed": true, 00:19:59.370 "claim_type": "exclusive_write", 00:19:59.370 "zoned": false, 00:19:59.370 "supported_io_types": { 00:19:59.370 "read": true, 00:19:59.370 "write": true, 00:19:59.370 "unmap": true, 00:19:59.370 "write_zeroes": true, 00:19:59.370 "flush": true, 00:19:59.370 "reset": true, 00:19:59.370 "compare": false, 00:19:59.370 "compare_and_write": false, 00:19:59.370 "abort": true, 00:19:59.370 "nvme_admin": false, 00:19:59.370 "nvme_io": false 00:19:59.370 }, 00:19:59.370 "memory_domains": [ 00:19:59.370 { 00:19:59.370 "dma_device_id": "system", 00:19:59.370 "dma_device_type": 1 00:19:59.370 }, 00:19:59.370 { 00:19:59.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:59.370 "dma_device_type": 2 00:19:59.370 } 00:19:59.370 ], 00:19:59.370 "driver_specific": {} 00:19:59.370 } 00:19:59.370 ] 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:19:59.370 12:01:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.370 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:59.628 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:19:59.628 "name": "Existed_Raid", 00:19:59.628 "uuid": "36106845-a3ea-4743-832b-132dcd3a0a97", 00:19:59.628 "strip_size_kb": 0, 00:19:59.628 "state": "configuring", 00:19:59.628 "raid_level": "raid1", 00:19:59.628 "superblock": true, 00:19:59.628 "num_base_bdevs": 3, 00:19:59.628 "num_base_bdevs_discovered": 2, 00:19:59.628 "num_base_bdevs_operational": 3, 00:19:59.628 "base_bdevs_list": [ 00:19:59.628 { 00:19:59.628 "name": "BaseBdev1", 00:19:59.628 "uuid": "e513ee0b-dd7f-4266-a0d7-be0a53e89584", 00:19:59.628 "is_configured": true, 00:19:59.628 "data_offset": 2048, 00:19:59.628 "data_size": 63488 00:19:59.628 }, 00:19:59.628 { 00:19:59.628 "name": "BaseBdev2", 00:19:59.628 "uuid": "a81f259a-f85a-41ca-adb2-36fcf5bd761e", 00:19:59.628 "is_configured": true, 00:19:59.628 "data_offset": 2048, 00:19:59.628 "data_size": 63488 00:19:59.628 }, 00:19:59.628 { 00:19:59.628 "name": "BaseBdev3", 00:19:59.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.628 "is_configured": false, 00:19:59.628 "data_offset": 0, 00:19:59.628 "data_size": 0 00:19:59.628 } 00:19:59.628 ] 00:19:59.628 }' 00:19:59.628 12:01:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:19:59.628 12:01:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:00.191 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:00.449 [2024-07-21 12:01:59.244047] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:00.449 [2024-07-21 12:01:59.244626] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:20:00.449 [2024-07-21 12:01:59.244773] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:00.449 [2024-07-21 12:01:59.245062] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:00.449 BaseBdev3 00:20:00.449 [2024-07-21 12:01:59.245708] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:20:00.449 [2024-07-21 12:01:59.245731] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:20:00.449 [2024-07-21 12:01:59.245898] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.449 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:20:00.449 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:00.449 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:00.449 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:00.449 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:00.449 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:00.449 12:01:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:00.707 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:00.964 [ 00:20:00.964 { 00:20:00.964 "name": "BaseBdev3", 00:20:00.964 "aliases": [ 00:20:00.964 "f03868a1-cf23-4a5c-921f-cfceb0699d81" 00:20:00.964 ], 00:20:00.964 "product_name": "Malloc disk", 00:20:00.964 "block_size": 512, 00:20:00.964 "num_blocks": 65536, 00:20:00.964 "uuid": "f03868a1-cf23-4a5c-921f-cfceb0699d81", 00:20:00.964 "assigned_rate_limits": { 00:20:00.964 "rw_ios_per_sec": 0, 00:20:00.964 "rw_mbytes_per_sec": 0, 00:20:00.964 "r_mbytes_per_sec": 0, 00:20:00.964 "w_mbytes_per_sec": 0 00:20:00.964 }, 00:20:00.964 "claimed": true, 00:20:00.964 "claim_type": "exclusive_write", 00:20:00.964 "zoned": false, 00:20:00.964 "supported_io_types": { 00:20:00.964 "read": true, 00:20:00.964 "write": true, 00:20:00.964 "unmap": true, 00:20:00.964 "write_zeroes": true, 00:20:00.964 "flush": true, 00:20:00.964 "reset": true, 00:20:00.964 "compare": false, 00:20:00.964 "compare_and_write": false, 00:20:00.964 "abort": true, 00:20:00.964 "nvme_admin": false, 00:20:00.964 "nvme_io": false 00:20:00.964 }, 00:20:00.964 "memory_domains": [ 00:20:00.964 { 00:20:00.964 "dma_device_id": "system", 00:20:00.964 "dma_device_type": 1 00:20:00.964 }, 00:20:00.964 { 00:20:00.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:00.964 "dma_device_type": 2 00:20:00.964 } 00:20:00.964 ], 00:20:00.964 "driver_specific": {} 00:20:00.964 } 00:20:00.964 ] 00:20:00.964 12:01:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:00.964 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:20:00.964 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:20:00.964 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:00.964 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:00.965 12:01:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
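By this point in the trace all three 32 MiB malloc base bdevs have been created and claimed (the loop at bdev_raid.sh@265-268 added BaseBdev2 and BaseBdev3 after BaseBdev1), so the state dump that follows shows Existed_Raid flipping from "configuring" to "online" once num_base_bdevs_discovered reaches num_base_bdevs_operational. A sketch of that repeated per-bdev step, assuming the same 32 MiB / 512-byte geometry the test passes:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    # 32 MiB malloc bdev with 512-byte blocks; the raid module claims it during examine
    $rpc -s $sock bdev_malloc_create 32 512 -b "$b"
    $rpc -s $sock bdev_wait_for_examine
  done

  # with all base bdevs discovered, raid1 should now report state "online"
  $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'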
00:20:01.222 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:01.222 "name": "Existed_Raid", 00:20:01.222 "uuid": "36106845-a3ea-4743-832b-132dcd3a0a97", 00:20:01.222 "strip_size_kb": 0, 00:20:01.222 "state": "online", 00:20:01.222 "raid_level": "raid1", 00:20:01.222 "superblock": true, 00:20:01.222 "num_base_bdevs": 3, 00:20:01.222 "num_base_bdevs_discovered": 3, 00:20:01.222 "num_base_bdevs_operational": 3, 00:20:01.222 "base_bdevs_list": [ 00:20:01.222 { 00:20:01.222 "name": "BaseBdev1", 00:20:01.222 "uuid": "e513ee0b-dd7f-4266-a0d7-be0a53e89584", 00:20:01.222 "is_configured": true, 00:20:01.222 "data_offset": 2048, 00:20:01.222 "data_size": 63488 00:20:01.222 }, 00:20:01.222 { 00:20:01.222 "name": "BaseBdev2", 00:20:01.222 "uuid": "a81f259a-f85a-41ca-adb2-36fcf5bd761e", 00:20:01.222 "is_configured": true, 00:20:01.222 "data_offset": 2048, 00:20:01.222 "data_size": 63488 00:20:01.222 }, 00:20:01.222 { 00:20:01.222 "name": "BaseBdev3", 00:20:01.222 "uuid": "f03868a1-cf23-4a5c-921f-cfceb0699d81", 00:20:01.222 "is_configured": true, 00:20:01.222 "data_offset": 2048, 00:20:01.222 "data_size": 63488 00:20:01.222 } 00:20:01.222 ] 00:20:01.222 }' 00:20:01.222 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:01.222 12:02:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:02.156 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:20:02.156 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:02.156 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:02.156 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:02.156 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:02.156 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:02.156 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:02.156 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:02.156 [2024-07-21 12:02:00.928808] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:02.156 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:02.156 "name": "Existed_Raid", 00:20:02.156 "aliases": [ 00:20:02.156 "36106845-a3ea-4743-832b-132dcd3a0a97" 00:20:02.156 ], 00:20:02.156 "product_name": "Raid Volume", 00:20:02.156 "block_size": 512, 00:20:02.156 "num_blocks": 63488, 00:20:02.156 "uuid": "36106845-a3ea-4743-832b-132dcd3a0a97", 00:20:02.156 "assigned_rate_limits": { 00:20:02.156 "rw_ios_per_sec": 0, 00:20:02.156 "rw_mbytes_per_sec": 0, 00:20:02.156 "r_mbytes_per_sec": 0, 00:20:02.156 "w_mbytes_per_sec": 0 00:20:02.156 }, 00:20:02.156 "claimed": false, 00:20:02.156 "zoned": false, 00:20:02.156 "supported_io_types": { 00:20:02.156 "read": true, 00:20:02.156 "write": true, 00:20:02.156 "unmap": false, 00:20:02.156 "write_zeroes": true, 00:20:02.156 "flush": false, 00:20:02.156 "reset": true, 00:20:02.156 "compare": false, 00:20:02.156 "compare_and_write": false, 00:20:02.156 "abort": false, 00:20:02.156 "nvme_admin": false, 00:20:02.156 
"nvme_io": false 00:20:02.156 }, 00:20:02.156 "memory_domains": [ 00:20:02.156 { 00:20:02.156 "dma_device_id": "system", 00:20:02.156 "dma_device_type": 1 00:20:02.156 }, 00:20:02.156 { 00:20:02.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.156 "dma_device_type": 2 00:20:02.156 }, 00:20:02.156 { 00:20:02.156 "dma_device_id": "system", 00:20:02.156 "dma_device_type": 1 00:20:02.156 }, 00:20:02.156 { 00:20:02.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.156 "dma_device_type": 2 00:20:02.156 }, 00:20:02.156 { 00:20:02.156 "dma_device_id": "system", 00:20:02.156 "dma_device_type": 1 00:20:02.156 }, 00:20:02.156 { 00:20:02.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.156 "dma_device_type": 2 00:20:02.156 } 00:20:02.156 ], 00:20:02.156 "driver_specific": { 00:20:02.156 "raid": { 00:20:02.156 "uuid": "36106845-a3ea-4743-832b-132dcd3a0a97", 00:20:02.156 "strip_size_kb": 0, 00:20:02.157 "state": "online", 00:20:02.157 "raid_level": "raid1", 00:20:02.157 "superblock": true, 00:20:02.157 "num_base_bdevs": 3, 00:20:02.157 "num_base_bdevs_discovered": 3, 00:20:02.157 "num_base_bdevs_operational": 3, 00:20:02.157 "base_bdevs_list": [ 00:20:02.157 { 00:20:02.157 "name": "BaseBdev1", 00:20:02.157 "uuid": "e513ee0b-dd7f-4266-a0d7-be0a53e89584", 00:20:02.157 "is_configured": true, 00:20:02.157 "data_offset": 2048, 00:20:02.157 "data_size": 63488 00:20:02.157 }, 00:20:02.157 { 00:20:02.157 "name": "BaseBdev2", 00:20:02.157 "uuid": "a81f259a-f85a-41ca-adb2-36fcf5bd761e", 00:20:02.157 "is_configured": true, 00:20:02.157 "data_offset": 2048, 00:20:02.157 "data_size": 63488 00:20:02.157 }, 00:20:02.157 { 00:20:02.157 "name": "BaseBdev3", 00:20:02.157 "uuid": "f03868a1-cf23-4a5c-921f-cfceb0699d81", 00:20:02.157 "is_configured": true, 00:20:02.157 "data_offset": 2048, 00:20:02.157 "data_size": 63488 00:20:02.157 } 00:20:02.157 ] 00:20:02.157 } 00:20:02.157 } 00:20:02.157 }' 00:20:02.157 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:02.157 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:20:02.157 BaseBdev2 00:20:02.157 BaseBdev3' 00:20:02.157 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:02.157 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:20:02.157 12:02:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:02.415 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:02.415 "name": "BaseBdev1", 00:20:02.415 "aliases": [ 00:20:02.415 "e513ee0b-dd7f-4266-a0d7-be0a53e89584" 00:20:02.415 ], 00:20:02.415 "product_name": "Malloc disk", 00:20:02.415 "block_size": 512, 00:20:02.415 "num_blocks": 65536, 00:20:02.415 "uuid": "e513ee0b-dd7f-4266-a0d7-be0a53e89584", 00:20:02.415 "assigned_rate_limits": { 00:20:02.415 "rw_ios_per_sec": 0, 00:20:02.415 "rw_mbytes_per_sec": 0, 00:20:02.415 "r_mbytes_per_sec": 0, 00:20:02.415 "w_mbytes_per_sec": 0 00:20:02.415 }, 00:20:02.415 "claimed": true, 00:20:02.415 "claim_type": "exclusive_write", 00:20:02.415 "zoned": false, 00:20:02.415 "supported_io_types": { 00:20:02.415 "read": true, 00:20:02.415 "write": true, 00:20:02.415 "unmap": true, 00:20:02.415 "write_zeroes": true, 00:20:02.415 "flush": true, 
00:20:02.415 "reset": true, 00:20:02.415 "compare": false, 00:20:02.415 "compare_and_write": false, 00:20:02.415 "abort": true, 00:20:02.415 "nvme_admin": false, 00:20:02.415 "nvme_io": false 00:20:02.415 }, 00:20:02.415 "memory_domains": [ 00:20:02.415 { 00:20:02.415 "dma_device_id": "system", 00:20:02.415 "dma_device_type": 1 00:20:02.415 }, 00:20:02.415 { 00:20:02.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:02.415 "dma_device_type": 2 00:20:02.415 } 00:20:02.415 ], 00:20:02.415 "driver_specific": {} 00:20:02.415 }' 00:20:02.415 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.673 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:02.673 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:02.673 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.673 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:02.673 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:02.673 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.673 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:02.931 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:02.931 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.931 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:02.931 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:02.931 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:02.931 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:02.931 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.190 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.190 "name": "BaseBdev2", 00:20:03.190 "aliases": [ 00:20:03.190 "a81f259a-f85a-41ca-adb2-36fcf5bd761e" 00:20:03.190 ], 00:20:03.190 "product_name": "Malloc disk", 00:20:03.190 "block_size": 512, 00:20:03.190 "num_blocks": 65536, 00:20:03.190 "uuid": "a81f259a-f85a-41ca-adb2-36fcf5bd761e", 00:20:03.190 "assigned_rate_limits": { 00:20:03.190 "rw_ios_per_sec": 0, 00:20:03.190 "rw_mbytes_per_sec": 0, 00:20:03.190 "r_mbytes_per_sec": 0, 00:20:03.190 "w_mbytes_per_sec": 0 00:20:03.190 }, 00:20:03.190 "claimed": true, 00:20:03.190 "claim_type": "exclusive_write", 00:20:03.190 "zoned": false, 00:20:03.190 "supported_io_types": { 00:20:03.190 "read": true, 00:20:03.190 "write": true, 00:20:03.190 "unmap": true, 00:20:03.190 "write_zeroes": true, 00:20:03.190 "flush": true, 00:20:03.190 "reset": true, 00:20:03.190 "compare": false, 00:20:03.190 "compare_and_write": false, 00:20:03.190 "abort": true, 00:20:03.190 "nvme_admin": false, 00:20:03.190 "nvme_io": false 00:20:03.190 }, 00:20:03.190 "memory_domains": [ 00:20:03.190 { 00:20:03.190 "dma_device_id": "system", 00:20:03.190 "dma_device_type": 1 00:20:03.190 }, 00:20:03.190 { 00:20:03.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.190 "dma_device_type": 2 
00:20:03.190 } 00:20:03.190 ], 00:20:03.190 "driver_specific": {} 00:20:03.190 }' 00:20:03.190 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.190 12:02:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.190 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.190 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.449 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.449 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.449 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.449 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.449 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:03.449 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.449 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:03.708 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:03.708 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:03.708 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:03.708 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:03.708 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:03.708 "name": "BaseBdev3", 00:20:03.708 "aliases": [ 00:20:03.708 "f03868a1-cf23-4a5c-921f-cfceb0699d81" 00:20:03.708 ], 00:20:03.708 "product_name": "Malloc disk", 00:20:03.708 "block_size": 512, 00:20:03.708 "num_blocks": 65536, 00:20:03.708 "uuid": "f03868a1-cf23-4a5c-921f-cfceb0699d81", 00:20:03.708 "assigned_rate_limits": { 00:20:03.708 "rw_ios_per_sec": 0, 00:20:03.708 "rw_mbytes_per_sec": 0, 00:20:03.708 "r_mbytes_per_sec": 0, 00:20:03.708 "w_mbytes_per_sec": 0 00:20:03.708 }, 00:20:03.708 "claimed": true, 00:20:03.708 "claim_type": "exclusive_write", 00:20:03.708 "zoned": false, 00:20:03.708 "supported_io_types": { 00:20:03.708 "read": true, 00:20:03.708 "write": true, 00:20:03.708 "unmap": true, 00:20:03.708 "write_zeroes": true, 00:20:03.708 "flush": true, 00:20:03.708 "reset": true, 00:20:03.708 "compare": false, 00:20:03.708 "compare_and_write": false, 00:20:03.708 "abort": true, 00:20:03.708 "nvme_admin": false, 00:20:03.708 "nvme_io": false 00:20:03.708 }, 00:20:03.708 "memory_domains": [ 00:20:03.708 { 00:20:03.708 "dma_device_id": "system", 00:20:03.708 "dma_device_type": 1 00:20:03.708 }, 00:20:03.708 { 00:20:03.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:03.708 "dma_device_type": 2 00:20:03.708 } 00:20:03.708 ], 00:20:03.708 "driver_specific": {} 00:20:03.708 }' 00:20:03.708 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.968 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:03.968 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:03.968 12:02:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.968 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:03.968 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:03.968 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:03.968 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:04.227 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:04.227 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.227 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:04.227 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:04.227 12:02:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:04.555 [2024-07-21 12:02:03.221136] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.555 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:04.843 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:04.843 "name": "Existed_Raid", 00:20:04.843 "uuid": "36106845-a3ea-4743-832b-132dcd3a0a97", 00:20:04.843 "strip_size_kb": 0, 00:20:04.843 "state": "online", 00:20:04.843 
"raid_level": "raid1", 00:20:04.843 "superblock": true, 00:20:04.843 "num_base_bdevs": 3, 00:20:04.843 "num_base_bdevs_discovered": 2, 00:20:04.843 "num_base_bdevs_operational": 2, 00:20:04.843 "base_bdevs_list": [ 00:20:04.843 { 00:20:04.843 "name": null, 00:20:04.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.843 "is_configured": false, 00:20:04.843 "data_offset": 2048, 00:20:04.843 "data_size": 63488 00:20:04.843 }, 00:20:04.843 { 00:20:04.843 "name": "BaseBdev2", 00:20:04.843 "uuid": "a81f259a-f85a-41ca-adb2-36fcf5bd761e", 00:20:04.843 "is_configured": true, 00:20:04.843 "data_offset": 2048, 00:20:04.843 "data_size": 63488 00:20:04.843 }, 00:20:04.843 { 00:20:04.843 "name": "BaseBdev3", 00:20:04.843 "uuid": "f03868a1-cf23-4a5c-921f-cfceb0699d81", 00:20:04.843 "is_configured": true, 00:20:04.843 "data_offset": 2048, 00:20:04.843 "data_size": 63488 00:20:04.843 } 00:20:04.843 ] 00:20:04.843 }' 00:20:04.843 12:02:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:04.843 12:02:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:05.407 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:20:05.407 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:05.407 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.408 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:05.665 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:05.665 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:05.665 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:20:05.922 [2024-07-21 12:02:04.609106] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:05.922 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:05.922 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:05.922 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.922 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:20:06.179 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:20:06.179 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:06.179 12:02:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:20:06.436 [2024-07-21 12:02:05.112396] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:06.436 [2024-07-21 12:02:05.112803] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:06.436 [2024-07-21 12:02:05.123459] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.436 [2024-07-21 12:02:05.123749] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.436 [2024-07-21 12:02:05.123872] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:20:06.436 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:20:06.436 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:20:06.436 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.436 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:20:06.694 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:20:06.694 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:20:06.694 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:20:06.694 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:20:06.694 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:06.694 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:06.951 BaseBdev2 00:20:06.951 12:02:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:20:06.951 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:20:06.951 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:06.951 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:06.951 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:06.951 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:06.951 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:07.208 12:02:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:07.208 [ 00:20:07.208 { 00:20:07.208 "name": "BaseBdev2", 00:20:07.208 "aliases": [ 00:20:07.208 "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d" 00:20:07.208 ], 00:20:07.208 "product_name": "Malloc disk", 00:20:07.208 "block_size": 512, 00:20:07.208 "num_blocks": 65536, 00:20:07.208 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:07.208 "assigned_rate_limits": { 00:20:07.208 "rw_ios_per_sec": 0, 00:20:07.208 "rw_mbytes_per_sec": 0, 00:20:07.208 "r_mbytes_per_sec": 0, 00:20:07.208 "w_mbytes_per_sec": 0 00:20:07.208 }, 00:20:07.208 "claimed": false, 00:20:07.208 "zoned": false, 00:20:07.208 "supported_io_types": { 00:20:07.208 "read": true, 00:20:07.208 "write": true, 00:20:07.208 "unmap": true, 00:20:07.208 "write_zeroes": true, 00:20:07.208 "flush": true, 00:20:07.208 "reset": true, 00:20:07.208 "compare": false, 00:20:07.208 "compare_and_write": false, 00:20:07.208 "abort": true, 00:20:07.208 "nvme_admin": false, 
00:20:07.208 "nvme_io": false 00:20:07.208 }, 00:20:07.208 "memory_domains": [ 00:20:07.208 { 00:20:07.208 "dma_device_id": "system", 00:20:07.208 "dma_device_type": 1 00:20:07.208 }, 00:20:07.208 { 00:20:07.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.208 "dma_device_type": 2 00:20:07.208 } 00:20:07.208 ], 00:20:07.208 "driver_specific": {} 00:20:07.208 } 00:20:07.208 ] 00:20:07.208 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:07.208 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:07.208 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:07.208 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:07.466 BaseBdev3 00:20:07.466 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:20:07.466 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:20:07.466 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:07.466 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:07.466 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:07.466 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:07.466 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:07.723 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:20:07.980 [ 00:20:07.980 { 00:20:07.980 "name": "BaseBdev3", 00:20:07.980 "aliases": [ 00:20:07.980 "02e34a29-49e1-4cab-bd7e-07e01d293104" 00:20:07.980 ], 00:20:07.980 "product_name": "Malloc disk", 00:20:07.980 "block_size": 512, 00:20:07.980 "num_blocks": 65536, 00:20:07.980 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:07.980 "assigned_rate_limits": { 00:20:07.980 "rw_ios_per_sec": 0, 00:20:07.980 "rw_mbytes_per_sec": 0, 00:20:07.980 "r_mbytes_per_sec": 0, 00:20:07.980 "w_mbytes_per_sec": 0 00:20:07.980 }, 00:20:07.980 "claimed": false, 00:20:07.980 "zoned": false, 00:20:07.980 "supported_io_types": { 00:20:07.980 "read": true, 00:20:07.980 "write": true, 00:20:07.980 "unmap": true, 00:20:07.980 "write_zeroes": true, 00:20:07.980 "flush": true, 00:20:07.980 "reset": true, 00:20:07.980 "compare": false, 00:20:07.980 "compare_and_write": false, 00:20:07.980 "abort": true, 00:20:07.980 "nvme_admin": false, 00:20:07.980 "nvme_io": false 00:20:07.980 }, 00:20:07.980 "memory_domains": [ 00:20:07.980 { 00:20:07.980 "dma_device_id": "system", 00:20:07.980 "dma_device_type": 1 00:20:07.980 }, 00:20:07.980 { 00:20:07.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:07.980 "dma_device_type": 2 00:20:07.980 } 00:20:07.980 ], 00:20:07.980 "driver_specific": {} 00:20:07.980 } 00:20:07.980 ] 00:20:07.980 12:02:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:07.980 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:20:07.980 12:02:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:20:07.980 12:02:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:20:08.238 [2024-07-21 12:02:06.994866] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:08.238 [2024-07-21 12:02:06.995287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:08.238 [2024-07-21 12:02:06.995479] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:08.238 [2024-07-21 12:02:06.997663] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:08.238 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:08.496 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:08.496 "name": "Existed_Raid", 00:20:08.496 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:08.496 "strip_size_kb": 0, 00:20:08.496 "state": "configuring", 00:20:08.496 "raid_level": "raid1", 00:20:08.496 "superblock": true, 00:20:08.496 "num_base_bdevs": 3, 00:20:08.496 "num_base_bdevs_discovered": 2, 00:20:08.496 "num_base_bdevs_operational": 3, 00:20:08.496 "base_bdevs_list": [ 00:20:08.496 { 00:20:08.496 "name": "BaseBdev1", 00:20:08.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:08.497 "is_configured": false, 00:20:08.497 "data_offset": 0, 00:20:08.497 "data_size": 0 00:20:08.497 }, 00:20:08.497 { 00:20:08.497 "name": "BaseBdev2", 00:20:08.497 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:08.497 "is_configured": true, 00:20:08.497 "data_offset": 2048, 00:20:08.497 "data_size": 63488 00:20:08.497 }, 00:20:08.497 { 00:20:08.497 "name": "BaseBdev3", 00:20:08.497 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:08.497 "is_configured": true, 00:20:08.497 "data_offset": 2048, 00:20:08.497 "data_size": 63488 00:20:08.497 } 00:20:08.497 ] 
00:20:08.497 }' 00:20:08.497 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:08.497 12:02:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:09.064 12:02:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:09.323 [2024-07-21 12:02:08.111163] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.323 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.582 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:09.582 "name": "Existed_Raid", 00:20:09.582 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:09.582 "strip_size_kb": 0, 00:20:09.582 "state": "configuring", 00:20:09.582 "raid_level": "raid1", 00:20:09.582 "superblock": true, 00:20:09.582 "num_base_bdevs": 3, 00:20:09.582 "num_base_bdevs_discovered": 1, 00:20:09.582 "num_base_bdevs_operational": 3, 00:20:09.582 "base_bdevs_list": [ 00:20:09.582 { 00:20:09.582 "name": "BaseBdev1", 00:20:09.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.582 "is_configured": false, 00:20:09.582 "data_offset": 0, 00:20:09.582 "data_size": 0 00:20:09.582 }, 00:20:09.582 { 00:20:09.582 "name": null, 00:20:09.582 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:09.582 "is_configured": false, 00:20:09.582 "data_offset": 2048, 00:20:09.582 "data_size": 63488 00:20:09.582 }, 00:20:09.582 { 00:20:09.582 "name": "BaseBdev3", 00:20:09.582 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:09.582 "is_configured": true, 00:20:09.582 "data_offset": 2048, 00:20:09.582 "data_size": 63488 00:20:09.582 } 00:20:09.582 ] 00:20:09.582 }' 00:20:09.582 12:02:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:09.582 12:02:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:10.517 12:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.517 12:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:10.517 12:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:20:10.517 12:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:10.774 [2024-07-21 12:02:09.568303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.774 BaseBdev1 00:20:10.774 12:02:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:20:10.774 12:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:20:10.774 12:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:10.774 12:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:10.774 12:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:10.774 12:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:10.774 12:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:11.032 12:02:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:11.291 [ 00:20:11.291 { 00:20:11.291 "name": "BaseBdev1", 00:20:11.291 "aliases": [ 00:20:11.291 "7c7d41a4-e913-4cd0-9451-7c609101b550" 00:20:11.291 ], 00:20:11.291 "product_name": "Malloc disk", 00:20:11.291 "block_size": 512, 00:20:11.291 "num_blocks": 65536, 00:20:11.291 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:11.291 "assigned_rate_limits": { 00:20:11.291 "rw_ios_per_sec": 0, 00:20:11.291 "rw_mbytes_per_sec": 0, 00:20:11.291 "r_mbytes_per_sec": 0, 00:20:11.291 "w_mbytes_per_sec": 0 00:20:11.291 }, 00:20:11.291 "claimed": true, 00:20:11.291 "claim_type": "exclusive_write", 00:20:11.291 "zoned": false, 00:20:11.291 "supported_io_types": { 00:20:11.291 "read": true, 00:20:11.291 "write": true, 00:20:11.291 "unmap": true, 00:20:11.291 "write_zeroes": true, 00:20:11.291 "flush": true, 00:20:11.291 "reset": true, 00:20:11.291 "compare": false, 00:20:11.291 "compare_and_write": false, 00:20:11.291 "abort": true, 00:20:11.291 "nvme_admin": false, 00:20:11.291 "nvme_io": false 00:20:11.291 }, 00:20:11.291 "memory_domains": [ 00:20:11.291 { 00:20:11.291 "dma_device_id": "system", 00:20:11.291 "dma_device_type": 1 00:20:11.291 }, 00:20:11.291 { 00:20:11.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.291 "dma_device_type": 2 00:20:11.291 } 00:20:11.291 ], 00:20:11.291 "driver_specific": {} 00:20:11.291 } 00:20:11.291 ] 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:11.291 12:02:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:11.291 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.550 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:11.550 "name": "Existed_Raid", 00:20:11.550 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:11.550 "strip_size_kb": 0, 00:20:11.550 "state": "configuring", 00:20:11.550 "raid_level": "raid1", 00:20:11.550 "superblock": true, 00:20:11.550 "num_base_bdevs": 3, 00:20:11.550 "num_base_bdevs_discovered": 2, 00:20:11.550 "num_base_bdevs_operational": 3, 00:20:11.550 "base_bdevs_list": [ 00:20:11.550 { 00:20:11.550 "name": "BaseBdev1", 00:20:11.550 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:11.550 "is_configured": true, 00:20:11.550 "data_offset": 2048, 00:20:11.550 "data_size": 63488 00:20:11.550 }, 00:20:11.550 { 00:20:11.550 "name": null, 00:20:11.550 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:11.550 "is_configured": false, 00:20:11.550 "data_offset": 2048, 00:20:11.550 "data_size": 63488 00:20:11.550 }, 00:20:11.550 { 00:20:11.550 "name": "BaseBdev3", 00:20:11.550 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:11.550 "is_configured": true, 00:20:11.550 "data_offset": 2048, 00:20:11.550 "data_size": 63488 00:20:11.550 } 00:20:11.550 ] 00:20:11.550 }' 00:20:11.550 12:02:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:11.550 12:02:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:12.483 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.483 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:12.483 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:20:12.483 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:20:12.741 [2024-07-21 12:02:11.540866] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:12.741 12:02:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.741 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:13.000 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:13.000 "name": "Existed_Raid", 00:20:13.000 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:13.000 "strip_size_kb": 0, 00:20:13.000 "state": "configuring", 00:20:13.000 "raid_level": "raid1", 00:20:13.000 "superblock": true, 00:20:13.000 "num_base_bdevs": 3, 00:20:13.000 "num_base_bdevs_discovered": 1, 00:20:13.000 "num_base_bdevs_operational": 3, 00:20:13.000 "base_bdevs_list": [ 00:20:13.000 { 00:20:13.000 "name": "BaseBdev1", 00:20:13.000 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:13.000 "is_configured": true, 00:20:13.000 "data_offset": 2048, 00:20:13.000 "data_size": 63488 00:20:13.000 }, 00:20:13.000 { 00:20:13.000 "name": null, 00:20:13.000 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:13.000 "is_configured": false, 00:20:13.000 "data_offset": 2048, 00:20:13.000 "data_size": 63488 00:20:13.000 }, 00:20:13.000 { 00:20:13.000 "name": null, 00:20:13.000 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:13.000 "is_configured": false, 00:20:13.000 "data_offset": 2048, 00:20:13.000 "data_size": 63488 00:20:13.000 } 00:20:13.000 ] 00:20:13.000 }' 00:20:13.000 12:02:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:13.000 12:02:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:13.934 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.934 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:13.934 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:20:13.934 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:20:14.192 [2024-07-21 12:02:12.945189] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:14.192 12:02:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:14.192 12:02:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.450 12:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:14.450 "name": "Existed_Raid", 00:20:14.450 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:14.450 "strip_size_kb": 0, 00:20:14.450 "state": "configuring", 00:20:14.450 "raid_level": "raid1", 00:20:14.450 "superblock": true, 00:20:14.450 "num_base_bdevs": 3, 00:20:14.450 "num_base_bdevs_discovered": 2, 00:20:14.450 "num_base_bdevs_operational": 3, 00:20:14.450 "base_bdevs_list": [ 00:20:14.450 { 00:20:14.450 "name": "BaseBdev1", 00:20:14.450 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:14.450 "is_configured": true, 00:20:14.450 "data_offset": 2048, 00:20:14.450 "data_size": 63488 00:20:14.450 }, 00:20:14.450 { 00:20:14.450 "name": null, 00:20:14.450 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:14.450 "is_configured": false, 00:20:14.450 "data_offset": 2048, 00:20:14.450 "data_size": 63488 00:20:14.450 }, 00:20:14.450 { 00:20:14.450 "name": "BaseBdev3", 00:20:14.450 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:14.450 "is_configured": true, 00:20:14.450 "data_offset": 2048, 00:20:14.450 "data_size": 63488 00:20:14.450 } 00:20:14.450 ] 00:20:14.450 }' 00:20:14.450 12:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:14.450 12:02:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:15.383 12:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.383 12:02:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:20:15.383 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:20:15.383 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:20:15.641 [2024-07-21 
12:02:14.473614] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:15.641 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:15.641 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:15.641 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:15.641 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:15.641 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:15.641 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:15.641 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:15.641 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:15.641 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:15.898 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:15.898 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.898 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:16.156 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:16.156 "name": "Existed_Raid", 00:20:16.156 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:16.156 "strip_size_kb": 0, 00:20:16.156 "state": "configuring", 00:20:16.156 "raid_level": "raid1", 00:20:16.156 "superblock": true, 00:20:16.156 "num_base_bdevs": 3, 00:20:16.156 "num_base_bdevs_discovered": 1, 00:20:16.156 "num_base_bdevs_operational": 3, 00:20:16.156 "base_bdevs_list": [ 00:20:16.156 { 00:20:16.156 "name": null, 00:20:16.156 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:16.156 "is_configured": false, 00:20:16.156 "data_offset": 2048, 00:20:16.156 "data_size": 63488 00:20:16.156 }, 00:20:16.156 { 00:20:16.156 "name": null, 00:20:16.156 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:16.156 "is_configured": false, 00:20:16.156 "data_offset": 2048, 00:20:16.156 "data_size": 63488 00:20:16.156 }, 00:20:16.156 { 00:20:16.156 "name": "BaseBdev3", 00:20:16.156 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:16.156 "is_configured": true, 00:20:16.156 "data_offset": 2048, 00:20:16.156 "data_size": 63488 00:20:16.156 } 00:20:16.156 ] 00:20:16.156 }' 00:20:16.156 12:02:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:16.156 12:02:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:16.772 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:20:16.772 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.030 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:20:17.030 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:20:17.288 [2024-07-21 12:02:15.956444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:17.288 12:02:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.547 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:17.547 "name": "Existed_Raid", 00:20:17.547 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:17.547 "strip_size_kb": 0, 00:20:17.547 "state": "configuring", 00:20:17.547 "raid_level": "raid1", 00:20:17.547 "superblock": true, 00:20:17.547 "num_base_bdevs": 3, 00:20:17.547 "num_base_bdevs_discovered": 2, 00:20:17.547 "num_base_bdevs_operational": 3, 00:20:17.547 "base_bdevs_list": [ 00:20:17.547 { 00:20:17.547 "name": null, 00:20:17.547 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:17.547 "is_configured": false, 00:20:17.547 "data_offset": 2048, 00:20:17.547 "data_size": 63488 00:20:17.547 }, 00:20:17.547 { 00:20:17.547 "name": "BaseBdev2", 00:20:17.547 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:17.547 "is_configured": true, 00:20:17.547 "data_offset": 2048, 00:20:17.547 "data_size": 63488 00:20:17.547 }, 00:20:17.547 { 00:20:17.547 "name": "BaseBdev3", 00:20:17.547 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:17.547 "is_configured": true, 00:20:17.547 "data_offset": 2048, 00:20:17.547 "data_size": 63488 00:20:17.547 } 00:20:17.547 ] 00:20:17.547 }' 00:20:17.547 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:17.547 12:02:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:18.114 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.114 12:02:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:20:18.372 12:02:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:20:18.372 12:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.372 12:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:20:18.631 12:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7c7d41a4-e913-4cd0-9451-7c609101b550 00:20:18.889 [2024-07-21 12:02:17.596056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:20:18.889 [2024-07-21 12:02:17.596680] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:20:18.889 [2024-07-21 12:02:17.596823] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:18.890 [2024-07-21 12:02:17.596946] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:18.890 [2024-07-21 12:02:17.597416] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:20:18.890 [2024-07-21 12:02:17.597556] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008780 00:20:18.890 NewBaseBdev 00:20:18.890 [2024-07-21 12:02:17.597790] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.890 12:02:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:20:18.890 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:20:18.890 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:20:18.890 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:20:18.890 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:20:18.890 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:20:18.890 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:20:19.148 12:02:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:20:19.430 [ 00:20:19.430 { 00:20:19.430 "name": "NewBaseBdev", 00:20:19.430 "aliases": [ 00:20:19.430 "7c7d41a4-e913-4cd0-9451-7c609101b550" 00:20:19.430 ], 00:20:19.430 "product_name": "Malloc disk", 00:20:19.430 "block_size": 512, 00:20:19.430 "num_blocks": 65536, 00:20:19.430 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:19.430 "assigned_rate_limits": { 00:20:19.430 "rw_ios_per_sec": 0, 00:20:19.430 "rw_mbytes_per_sec": 0, 00:20:19.430 "r_mbytes_per_sec": 0, 00:20:19.430 "w_mbytes_per_sec": 0 00:20:19.430 }, 00:20:19.430 "claimed": true, 00:20:19.430 "claim_type": "exclusive_write", 00:20:19.430 "zoned": false, 00:20:19.430 "supported_io_types": { 00:20:19.430 "read": true, 00:20:19.430 "write": true, 00:20:19.430 "unmap": true, 00:20:19.430 "write_zeroes": true, 00:20:19.430 "flush": true, 00:20:19.430 "reset": true, 00:20:19.430 "compare": false, 00:20:19.430 "compare_and_write": false, 00:20:19.430 
"abort": true, 00:20:19.430 "nvme_admin": false, 00:20:19.430 "nvme_io": false 00:20:19.430 }, 00:20:19.430 "memory_domains": [ 00:20:19.430 { 00:20:19.430 "dma_device_id": "system", 00:20:19.430 "dma_device_type": 1 00:20:19.430 }, 00:20:19.430 { 00:20:19.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:19.430 "dma_device_type": 2 00:20:19.430 } 00:20:19.430 ], 00:20:19.430 "driver_specific": {} 00:20:19.430 } 00:20:19.430 ] 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.430 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:19.699 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:19.699 "name": "Existed_Raid", 00:20:19.699 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:19.699 "strip_size_kb": 0, 00:20:19.699 "state": "online", 00:20:19.699 "raid_level": "raid1", 00:20:19.699 "superblock": true, 00:20:19.699 "num_base_bdevs": 3, 00:20:19.699 "num_base_bdevs_discovered": 3, 00:20:19.699 "num_base_bdevs_operational": 3, 00:20:19.699 "base_bdevs_list": [ 00:20:19.699 { 00:20:19.699 "name": "NewBaseBdev", 00:20:19.699 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:19.699 "is_configured": true, 00:20:19.699 "data_offset": 2048, 00:20:19.699 "data_size": 63488 00:20:19.699 }, 00:20:19.699 { 00:20:19.699 "name": "BaseBdev2", 00:20:19.699 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:19.699 "is_configured": true, 00:20:19.699 "data_offset": 2048, 00:20:19.699 "data_size": 63488 00:20:19.699 }, 00:20:19.699 { 00:20:19.699 "name": "BaseBdev3", 00:20:19.699 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:19.699 "is_configured": true, 00:20:19.699 "data_offset": 2048, 00:20:19.699 "data_size": 63488 00:20:19.699 } 00:20:19.699 ] 00:20:19.699 }' 00:20:19.699 12:02:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:19.699 12:02:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:20.265 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # 
verify_raid_bdev_properties Existed_Raid 00:20:20.265 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:20:20.265 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:20.265 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:20.265 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:20.265 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:20:20.265 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:20:20.265 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:20.522 [2024-07-21 12:02:19.248821] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:20.522 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:20.522 "name": "Existed_Raid", 00:20:20.522 "aliases": [ 00:20:20.522 "6d453479-99d1-406d-b5b1-d27850528264" 00:20:20.522 ], 00:20:20.522 "product_name": "Raid Volume", 00:20:20.522 "block_size": 512, 00:20:20.522 "num_blocks": 63488, 00:20:20.522 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:20.522 "assigned_rate_limits": { 00:20:20.522 "rw_ios_per_sec": 0, 00:20:20.522 "rw_mbytes_per_sec": 0, 00:20:20.522 "r_mbytes_per_sec": 0, 00:20:20.522 "w_mbytes_per_sec": 0 00:20:20.522 }, 00:20:20.522 "claimed": false, 00:20:20.522 "zoned": false, 00:20:20.523 "supported_io_types": { 00:20:20.523 "read": true, 00:20:20.523 "write": true, 00:20:20.523 "unmap": false, 00:20:20.523 "write_zeroes": true, 00:20:20.523 "flush": false, 00:20:20.523 "reset": true, 00:20:20.523 "compare": false, 00:20:20.523 "compare_and_write": false, 00:20:20.523 "abort": false, 00:20:20.523 "nvme_admin": false, 00:20:20.523 "nvme_io": false 00:20:20.523 }, 00:20:20.523 "memory_domains": [ 00:20:20.523 { 00:20:20.523 "dma_device_id": "system", 00:20:20.523 "dma_device_type": 1 00:20:20.523 }, 00:20:20.523 { 00:20:20.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.523 "dma_device_type": 2 00:20:20.523 }, 00:20:20.523 { 00:20:20.523 "dma_device_id": "system", 00:20:20.523 "dma_device_type": 1 00:20:20.523 }, 00:20:20.523 { 00:20:20.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.523 "dma_device_type": 2 00:20:20.523 }, 00:20:20.523 { 00:20:20.523 "dma_device_id": "system", 00:20:20.523 "dma_device_type": 1 00:20:20.523 }, 00:20:20.523 { 00:20:20.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.523 "dma_device_type": 2 00:20:20.523 } 00:20:20.523 ], 00:20:20.523 "driver_specific": { 00:20:20.523 "raid": { 00:20:20.523 "uuid": "6d453479-99d1-406d-b5b1-d27850528264", 00:20:20.523 "strip_size_kb": 0, 00:20:20.523 "state": "online", 00:20:20.523 "raid_level": "raid1", 00:20:20.523 "superblock": true, 00:20:20.523 "num_base_bdevs": 3, 00:20:20.523 "num_base_bdevs_discovered": 3, 00:20:20.523 "num_base_bdevs_operational": 3, 00:20:20.523 "base_bdevs_list": [ 00:20:20.523 { 00:20:20.523 "name": "NewBaseBdev", 00:20:20.523 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:20.523 "is_configured": true, 00:20:20.523 "data_offset": 2048, 00:20:20.523 "data_size": 63488 00:20:20.523 }, 00:20:20.523 { 00:20:20.523 "name": "BaseBdev2", 00:20:20.523 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 
00:20:20.523 "is_configured": true, 00:20:20.523 "data_offset": 2048, 00:20:20.523 "data_size": 63488 00:20:20.523 }, 00:20:20.523 { 00:20:20.523 "name": "BaseBdev3", 00:20:20.523 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:20.523 "is_configured": true, 00:20:20.523 "data_offset": 2048, 00:20:20.523 "data_size": 63488 00:20:20.523 } 00:20:20.523 ] 00:20:20.523 } 00:20:20.523 } 00:20:20.523 }' 00:20:20.523 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:20.523 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:20:20.523 BaseBdev2 00:20:20.523 BaseBdev3' 00:20:20.523 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:20.523 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:20:20.523 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:20.797 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:20.797 "name": "NewBaseBdev", 00:20:20.797 "aliases": [ 00:20:20.797 "7c7d41a4-e913-4cd0-9451-7c609101b550" 00:20:20.797 ], 00:20:20.797 "product_name": "Malloc disk", 00:20:20.797 "block_size": 512, 00:20:20.797 "num_blocks": 65536, 00:20:20.797 "uuid": "7c7d41a4-e913-4cd0-9451-7c609101b550", 00:20:20.797 "assigned_rate_limits": { 00:20:20.797 "rw_ios_per_sec": 0, 00:20:20.797 "rw_mbytes_per_sec": 0, 00:20:20.797 "r_mbytes_per_sec": 0, 00:20:20.797 "w_mbytes_per_sec": 0 00:20:20.797 }, 00:20:20.797 "claimed": true, 00:20:20.797 "claim_type": "exclusive_write", 00:20:20.797 "zoned": false, 00:20:20.797 "supported_io_types": { 00:20:20.797 "read": true, 00:20:20.797 "write": true, 00:20:20.797 "unmap": true, 00:20:20.797 "write_zeroes": true, 00:20:20.797 "flush": true, 00:20:20.797 "reset": true, 00:20:20.797 "compare": false, 00:20:20.797 "compare_and_write": false, 00:20:20.797 "abort": true, 00:20:20.797 "nvme_admin": false, 00:20:20.797 "nvme_io": false 00:20:20.797 }, 00:20:20.797 "memory_domains": [ 00:20:20.797 { 00:20:20.797 "dma_device_id": "system", 00:20:20.797 "dma_device_type": 1 00:20:20.797 }, 00:20:20.797 { 00:20:20.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:20.797 "dma_device_type": 2 00:20:20.797 } 00:20:20.797 ], 00:20:20.797 "driver_specific": {} 00:20:20.797 }' 00:20:20.797 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:20.797 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.056 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:21.056 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.056 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.056 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:21.056 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.056 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.056 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.056 
12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.314 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.314 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:21.314 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:21.314 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:20:21.314 12:02:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:21.572 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:21.572 "name": "BaseBdev2", 00:20:21.572 "aliases": [ 00:20:21.572 "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d" 00:20:21.572 ], 00:20:21.572 "product_name": "Malloc disk", 00:20:21.572 "block_size": 512, 00:20:21.572 "num_blocks": 65536, 00:20:21.572 "uuid": "c1f5e8e5-cb2d-46ba-8952-aefefe08d63d", 00:20:21.572 "assigned_rate_limits": { 00:20:21.572 "rw_ios_per_sec": 0, 00:20:21.572 "rw_mbytes_per_sec": 0, 00:20:21.572 "r_mbytes_per_sec": 0, 00:20:21.572 "w_mbytes_per_sec": 0 00:20:21.572 }, 00:20:21.572 "claimed": true, 00:20:21.572 "claim_type": "exclusive_write", 00:20:21.572 "zoned": false, 00:20:21.572 "supported_io_types": { 00:20:21.572 "read": true, 00:20:21.572 "write": true, 00:20:21.572 "unmap": true, 00:20:21.572 "write_zeroes": true, 00:20:21.572 "flush": true, 00:20:21.572 "reset": true, 00:20:21.572 "compare": false, 00:20:21.572 "compare_and_write": false, 00:20:21.572 "abort": true, 00:20:21.572 "nvme_admin": false, 00:20:21.572 "nvme_io": false 00:20:21.572 }, 00:20:21.572 "memory_domains": [ 00:20:21.572 { 00:20:21.572 "dma_device_id": "system", 00:20:21.572 "dma_device_type": 1 00:20:21.572 }, 00:20:21.572 { 00:20:21.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:21.572 "dma_device_type": 2 00:20:21.572 } 00:20:21.572 ], 00:20:21.572 "driver_specific": {} 00:20:21.572 }' 00:20:21.572 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.572 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:21.572 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:21.572 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.572 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:21.572 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:21.572 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.829 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:21.829 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:21.829 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.829 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:21.829 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:21.829 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:21.829 12:02:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:20:21.829 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:22.088 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:22.088 "name": "BaseBdev3", 00:20:22.088 "aliases": [ 00:20:22.088 "02e34a29-49e1-4cab-bd7e-07e01d293104" 00:20:22.088 ], 00:20:22.088 "product_name": "Malloc disk", 00:20:22.088 "block_size": 512, 00:20:22.088 "num_blocks": 65536, 00:20:22.088 "uuid": "02e34a29-49e1-4cab-bd7e-07e01d293104", 00:20:22.088 "assigned_rate_limits": { 00:20:22.088 "rw_ios_per_sec": 0, 00:20:22.088 "rw_mbytes_per_sec": 0, 00:20:22.088 "r_mbytes_per_sec": 0, 00:20:22.088 "w_mbytes_per_sec": 0 00:20:22.088 }, 00:20:22.088 "claimed": true, 00:20:22.088 "claim_type": "exclusive_write", 00:20:22.088 "zoned": false, 00:20:22.088 "supported_io_types": { 00:20:22.088 "read": true, 00:20:22.088 "write": true, 00:20:22.088 "unmap": true, 00:20:22.088 "write_zeroes": true, 00:20:22.088 "flush": true, 00:20:22.088 "reset": true, 00:20:22.088 "compare": false, 00:20:22.088 "compare_and_write": false, 00:20:22.088 "abort": true, 00:20:22.088 "nvme_admin": false, 00:20:22.088 "nvme_io": false 00:20:22.088 }, 00:20:22.088 "memory_domains": [ 00:20:22.088 { 00:20:22.088 "dma_device_id": "system", 00:20:22.088 "dma_device_type": 1 00:20:22.088 }, 00:20:22.088 { 00:20:22.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:22.088 "dma_device_type": 2 00:20:22.088 } 00:20:22.088 ], 00:20:22.088 "driver_specific": {} 00:20:22.088 }' 00:20:22.088 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:22.088 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:22.345 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:22.345 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.345 12:02:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:22.345 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:22.345 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.345 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:22.345 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:22.345 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.345 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:22.603 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:22.603 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:20:22.862 [2024-07-21 12:02:21.537057] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:22.862 [2024-07-21 12:02:21.537418] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.862 [2024-07-21 12:02:21.537663] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.862 [2024-07-21 
12:02:21.538059] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.862 [2024-07-21 12:02:21.538192] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name Existed_Raid, state offline 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 142500 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 142500 ']' 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 142500 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 142500 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 142500' 00:20:22.862 killing process with pid 142500 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 142500 00:20:22.862 [2024-07-21 12:02:21.582836] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:22.862 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 142500 00:20:22.862 [2024-07-21 12:02:21.614482] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:23.120 12:02:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:20:23.120 00:20:23.120 real 0m30.248s 00:20:23.120 user 0m57.575s 00:20:23.120 sys 0m3.584s 00:20:23.120 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:23.120 12:02:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:20:23.120 ************************************ 00:20:23.120 END TEST raid_state_function_test_sb 00:20:23.120 ************************************ 00:20:23.120 12:02:21 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:20:23.120 12:02:21 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:20:23.120 12:02:21 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:23.120 12:02:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:23.120 ************************************ 00:20:23.120 START TEST raid_superblock_test 00:20:23.120 ************************************ 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 3 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # 
base_bdevs_pt=() 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:20:23.120 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=143486 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 143486 /var/tmp/spdk-raid.sock 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 143486 ']' 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:23.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:23.121 12:02:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:23.379 [2024-07-21 12:02:21.987280] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:20:23.379 [2024-07-21 12:02:21.987825] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143486 ] 00:20:23.379 [2024-07-21 12:02:22.155800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.637 [2024-07-21 12:02:22.250830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.637 [2024-07-21 12:02:22.304990] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:24.202 12:02:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:20:24.460 malloc1 00:20:24.460 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:24.717 [2024-07-21 12:02:23.479765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:24.717 [2024-07-21 12:02:23.480082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.717 [2024-07-21 12:02:23.480274] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:20:24.717 [2024-07-21 12:02:23.480448] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.717 [2024-07-21 12:02:23.483421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.718 [2024-07-21 12:02:23.483614] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:24.718 pt1 00:20:24.718 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:24.718 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:24.718 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:20:24.718 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:20:24.718 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:24.718 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:20:24.718 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:24.718 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:24.718 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:20:24.983 malloc2 00:20:24.983 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:25.240 [2024-07-21 12:02:23.983127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:25.240 [2024-07-21 12:02:23.983555] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.240 [2024-07-21 12:02:23.983748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:20:25.240 [2024-07-21 12:02:23.983917] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.240 [2024-07-21 12:02:23.986554] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.240 [2024-07-21 12:02:23.986797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:25.240 pt2 00:20:25.240 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:25.240 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:25.240 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:20:25.240 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:20:25.240 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:20:25.240 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:25.240 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:20:25.240 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:25.240 12:02:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:20:25.498 malloc3 00:20:25.498 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:25.755 [2024-07-21 12:02:24.457989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:25.755 [2024-07-21 12:02:24.458405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:25.755 [2024-07-21 12:02:24.458603] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:20:25.755 [2024-07-21 12:02:24.458785] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.755 [2024-07-21 12:02:24.461425] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.755 [2024-07-21 12:02:24.461617] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:25.755 pt3 00:20:25.755 12:02:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:20:25.755 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:20:25.755 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:20:26.013 [2024-07-21 12:02:24.734173] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:26.013 [2024-07-21 12:02:24.736586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:26.013 [2024-07-21 12:02:24.736830] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:26.013 [2024-07-21 12:02:24.737205] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:20:26.013 [2024-07-21 12:02:24.737350] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:26.013 [2024-07-21 12:02:24.737556] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:26.013 [2024-07-21 12:02:24.738140] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:20:26.013 [2024-07-21 12:02:24.738288] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:20:26.013 [2024-07-21 12:02:24.738636] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.013 12:02:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.271 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:26.271 "name": "raid_bdev1", 00:20:26.271 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:26.271 "strip_size_kb": 0, 00:20:26.271 "state": "online", 00:20:26.271 "raid_level": "raid1", 00:20:26.271 "superblock": true, 00:20:26.271 "num_base_bdevs": 3, 00:20:26.271 "num_base_bdevs_discovered": 3, 00:20:26.271 "num_base_bdevs_operational": 3, 00:20:26.271 "base_bdevs_list": [ 00:20:26.271 { 00:20:26.271 "name": "pt1", 00:20:26.271 "uuid": "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92", 00:20:26.271 
"is_configured": true, 00:20:26.271 "data_offset": 2048, 00:20:26.271 "data_size": 63488 00:20:26.271 }, 00:20:26.271 { 00:20:26.271 "name": "pt2", 00:20:26.271 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:26.271 "is_configured": true, 00:20:26.271 "data_offset": 2048, 00:20:26.271 "data_size": 63488 00:20:26.271 }, 00:20:26.271 { 00:20:26.271 "name": "pt3", 00:20:26.271 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:26.271 "is_configured": true, 00:20:26.271 "data_offset": 2048, 00:20:26.271 "data_size": 63488 00:20:26.271 } 00:20:26.271 ] 00:20:26.271 }' 00:20:26.271 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:26.271 12:02:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:26.836 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:20:26.836 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:26.836 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:26.836 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:26.836 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:26.836 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:26.836 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:26.836 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:27.095 [2024-07-21 12:02:25.947324] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:27.353 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:27.353 "name": "raid_bdev1", 00:20:27.353 "aliases": [ 00:20:27.353 "57a1000b-3109-4ee3-aa36-8b02d2e53424" 00:20:27.353 ], 00:20:27.353 "product_name": "Raid Volume", 00:20:27.353 "block_size": 512, 00:20:27.353 "num_blocks": 63488, 00:20:27.353 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:27.353 "assigned_rate_limits": { 00:20:27.353 "rw_ios_per_sec": 0, 00:20:27.353 "rw_mbytes_per_sec": 0, 00:20:27.353 "r_mbytes_per_sec": 0, 00:20:27.353 "w_mbytes_per_sec": 0 00:20:27.353 }, 00:20:27.353 "claimed": false, 00:20:27.353 "zoned": false, 00:20:27.353 "supported_io_types": { 00:20:27.353 "read": true, 00:20:27.353 "write": true, 00:20:27.353 "unmap": false, 00:20:27.353 "write_zeroes": true, 00:20:27.353 "flush": false, 00:20:27.353 "reset": true, 00:20:27.353 "compare": false, 00:20:27.353 "compare_and_write": false, 00:20:27.353 "abort": false, 00:20:27.353 "nvme_admin": false, 00:20:27.353 "nvme_io": false 00:20:27.353 }, 00:20:27.353 "memory_domains": [ 00:20:27.353 { 00:20:27.353 "dma_device_id": "system", 00:20:27.353 "dma_device_type": 1 00:20:27.353 }, 00:20:27.353 { 00:20:27.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.353 "dma_device_type": 2 00:20:27.353 }, 00:20:27.353 { 00:20:27.353 "dma_device_id": "system", 00:20:27.353 "dma_device_type": 1 00:20:27.353 }, 00:20:27.353 { 00:20:27.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.353 "dma_device_type": 2 00:20:27.353 }, 00:20:27.353 { 00:20:27.353 "dma_device_id": "system", 00:20:27.353 "dma_device_type": 1 00:20:27.353 }, 00:20:27.353 { 00:20:27.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.353 
"dma_device_type": 2 00:20:27.353 } 00:20:27.353 ], 00:20:27.353 "driver_specific": { 00:20:27.353 "raid": { 00:20:27.353 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:27.353 "strip_size_kb": 0, 00:20:27.353 "state": "online", 00:20:27.353 "raid_level": "raid1", 00:20:27.353 "superblock": true, 00:20:27.353 "num_base_bdevs": 3, 00:20:27.353 "num_base_bdevs_discovered": 3, 00:20:27.353 "num_base_bdevs_operational": 3, 00:20:27.353 "base_bdevs_list": [ 00:20:27.353 { 00:20:27.353 "name": "pt1", 00:20:27.353 "uuid": "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92", 00:20:27.353 "is_configured": true, 00:20:27.353 "data_offset": 2048, 00:20:27.353 "data_size": 63488 00:20:27.353 }, 00:20:27.353 { 00:20:27.353 "name": "pt2", 00:20:27.353 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:27.353 "is_configured": true, 00:20:27.353 "data_offset": 2048, 00:20:27.353 "data_size": 63488 00:20:27.353 }, 00:20:27.353 { 00:20:27.353 "name": "pt3", 00:20:27.353 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:27.353 "is_configured": true, 00:20:27.353 "data_offset": 2048, 00:20:27.353 "data_size": 63488 00:20:27.353 } 00:20:27.353 ] 00:20:27.353 } 00:20:27.353 } 00:20:27.353 }' 00:20:27.353 12:02:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:27.353 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:27.353 pt2 00:20:27.353 pt3' 00:20:27.353 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.353 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:27.353 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:27.612 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:27.612 "name": "pt1", 00:20:27.612 "aliases": [ 00:20:27.612 "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92" 00:20:27.612 ], 00:20:27.612 "product_name": "passthru", 00:20:27.612 "block_size": 512, 00:20:27.612 "num_blocks": 65536, 00:20:27.612 "uuid": "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92", 00:20:27.612 "assigned_rate_limits": { 00:20:27.612 "rw_ios_per_sec": 0, 00:20:27.612 "rw_mbytes_per_sec": 0, 00:20:27.612 "r_mbytes_per_sec": 0, 00:20:27.612 "w_mbytes_per_sec": 0 00:20:27.612 }, 00:20:27.612 "claimed": true, 00:20:27.612 "claim_type": "exclusive_write", 00:20:27.612 "zoned": false, 00:20:27.612 "supported_io_types": { 00:20:27.612 "read": true, 00:20:27.612 "write": true, 00:20:27.612 "unmap": true, 00:20:27.612 "write_zeroes": true, 00:20:27.612 "flush": true, 00:20:27.612 "reset": true, 00:20:27.612 "compare": false, 00:20:27.612 "compare_and_write": false, 00:20:27.612 "abort": true, 00:20:27.612 "nvme_admin": false, 00:20:27.612 "nvme_io": false 00:20:27.612 }, 00:20:27.612 "memory_domains": [ 00:20:27.612 { 00:20:27.612 "dma_device_id": "system", 00:20:27.612 "dma_device_type": 1 00:20:27.612 }, 00:20:27.612 { 00:20:27.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:27.612 "dma_device_type": 2 00:20:27.612 } 00:20:27.612 ], 00:20:27.612 "driver_specific": { 00:20:27.612 "passthru": { 00:20:27.612 "name": "pt1", 00:20:27.612 "base_bdev_name": "malloc1" 00:20:27.612 } 00:20:27.612 } 00:20:27.612 }' 00:20:27.612 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.612 12:02:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:27.612 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:27.612 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.612 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:27.612 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:27.612 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.871 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:27.871 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:27.871 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.871 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:27.871 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:27.871 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:27.871 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:27.871 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:28.129 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:28.129 "name": "pt2", 00:20:28.129 "aliases": [ 00:20:28.129 "b3e97558-8088-5b22-b883-4b9c7bc8cd62" 00:20:28.129 ], 00:20:28.129 "product_name": "passthru", 00:20:28.129 "block_size": 512, 00:20:28.129 "num_blocks": 65536, 00:20:28.129 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:28.129 "assigned_rate_limits": { 00:20:28.129 "rw_ios_per_sec": 0, 00:20:28.129 "rw_mbytes_per_sec": 0, 00:20:28.129 "r_mbytes_per_sec": 0, 00:20:28.129 "w_mbytes_per_sec": 0 00:20:28.129 }, 00:20:28.129 "claimed": true, 00:20:28.129 "claim_type": "exclusive_write", 00:20:28.129 "zoned": false, 00:20:28.129 "supported_io_types": { 00:20:28.129 "read": true, 00:20:28.129 "write": true, 00:20:28.129 "unmap": true, 00:20:28.129 "write_zeroes": true, 00:20:28.129 "flush": true, 00:20:28.129 "reset": true, 00:20:28.129 "compare": false, 00:20:28.129 "compare_and_write": false, 00:20:28.129 "abort": true, 00:20:28.129 "nvme_admin": false, 00:20:28.129 "nvme_io": false 00:20:28.129 }, 00:20:28.129 "memory_domains": [ 00:20:28.129 { 00:20:28.129 "dma_device_id": "system", 00:20:28.129 "dma_device_type": 1 00:20:28.129 }, 00:20:28.129 { 00:20:28.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.129 "dma_device_type": 2 00:20:28.129 } 00:20:28.129 ], 00:20:28.129 "driver_specific": { 00:20:28.129 "passthru": { 00:20:28.129 "name": "pt2", 00:20:28.129 "base_bdev_name": "malloc2" 00:20:28.129 } 00:20:28.129 } 00:20:28.129 }' 00:20:28.129 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.129 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.129 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:28.129 12:02:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.388 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.388 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ 
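The Raid Volume dump a few lines above also feeds verify_raid_bdev_properties: the configured member names are pulled out of driver_specific.raid.base_bdevs_list. The query below uses the same jq expression as the trace, only adapted to read straight from the RPC output rather than from the shell variable the script keeps; for this run it should print pt1, pt2 and pt3.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # full descriptor of the raid volume (supported_io_types, memory_domains, driver_specific.raid, ...)
  $rpc bdev_get_bdevs -b raid_bdev1 | jq '.[]'

  # names of the members that are currently configured
  $rpc bdev_get_bdevs -b raid_bdev1 \
    | jq -r '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
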
null == null ]] 00:20:28.388 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.388 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:28.388 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:28.388 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.388 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:28.646 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:28.646 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:28.646 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:28.646 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:28.904 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:28.904 "name": "pt3", 00:20:28.904 "aliases": [ 00:20:28.904 "055a2605-dcc8-5947-810f-f468f11f85b0" 00:20:28.904 ], 00:20:28.904 "product_name": "passthru", 00:20:28.904 "block_size": 512, 00:20:28.904 "num_blocks": 65536, 00:20:28.904 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:28.904 "assigned_rate_limits": { 00:20:28.904 "rw_ios_per_sec": 0, 00:20:28.904 "rw_mbytes_per_sec": 0, 00:20:28.904 "r_mbytes_per_sec": 0, 00:20:28.904 "w_mbytes_per_sec": 0 00:20:28.904 }, 00:20:28.904 "claimed": true, 00:20:28.904 "claim_type": "exclusive_write", 00:20:28.904 "zoned": false, 00:20:28.904 "supported_io_types": { 00:20:28.904 "read": true, 00:20:28.904 "write": true, 00:20:28.904 "unmap": true, 00:20:28.904 "write_zeroes": true, 00:20:28.904 "flush": true, 00:20:28.904 "reset": true, 00:20:28.904 "compare": false, 00:20:28.904 "compare_and_write": false, 00:20:28.904 "abort": true, 00:20:28.904 "nvme_admin": false, 00:20:28.904 "nvme_io": false 00:20:28.904 }, 00:20:28.904 "memory_domains": [ 00:20:28.904 { 00:20:28.904 "dma_device_id": "system", 00:20:28.904 "dma_device_type": 1 00:20:28.904 }, 00:20:28.904 { 00:20:28.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:28.904 "dma_device_type": 2 00:20:28.904 } 00:20:28.904 ], 00:20:28.904 "driver_specific": { 00:20:28.904 "passthru": { 00:20:28.904 "name": "pt3", 00:20:28.904 "base_bdev_name": "malloc3" 00:20:28.904 } 00:20:28.904 } 00:20:28.904 }' 00:20:28.904 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.904 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:28.904 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:28.904 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.904 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:28.904 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:28.904 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:29.162 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:29.162 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:29.162 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:29.162 12:02:27 
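For every member the script then asserts that block_size is 512 and that md_size, md_interleave and dif_type are all null, i.e. the passthru bdevs expose no metadata or DIF. The loop below is an illustrative condensation of those four checks; the individual jq filters come from the trace, while the loop and the error messages are mine, and bash plus jq next to rpc.py are assumed.

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  for name in pt1 pt2 pt3; do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size    <<< "$info") == 512  ]] || echo "$name: unexpected block_size"
      [[ $(jq .md_size       <<< "$info") == null ]] || echo "$name: unexpected md_size"
      [[ $(jq .md_interleave <<< "$info") == null ]] || echo "$name: unexpected md_interleave"
      [[ $(jq .dif_type      <<< "$info") == null ]] || echo "$name: unexpected dif_type"
  done
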
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:29.162 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:29.162 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:29.162 12:02:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:20:29.420 [2024-07-21 12:02:28.263792] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:29.420 12:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=57a1000b-3109-4ee3-aa36-8b02d2e53424 00:20:29.420 12:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 57a1000b-3109-4ee3-aa36-8b02d2e53424 ']' 00:20:29.420 12:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:29.987 [2024-07-21 12:02:28.551596] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:29.987 [2024-07-21 12:02:28.551902] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:29.987 [2024-07-21 12:02:28.552230] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:29.987 [2024-07-21 12:02:28.552517] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:29.987 [2024-07-21 12:02:28.552652] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:20:29.987 12:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.987 12:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:20:29.987 12:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:20:29.987 12:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:20:29.987 12:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:29.987 12:02:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:30.245 12:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:30.245 12:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:30.504 12:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:20:30.504 12:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:30.763 12:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:20:30.763 12:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT 
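After the property checks the test records the array UUID, deletes the raid bdev, removes the three passthru members and confirms that no passthru bdev is left behind. Replayed by hand, the teardown would be roughly the following sketch:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  raid_bdev_uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')

  $rpc bdev_raid_delete raid_bdev1          # state goes online -> offline, then the bdev is freed
  for pt in pt1 pt2 pt3; do
      $rpc bdev_passthru_delete "$pt"
  done

  # prints "false" once every passthru bdev is gone
  $rpc bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any'
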
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:20:31.022 12:02:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:20:31.282 [2024-07-21 12:02:30.047908] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:31.282 [2024-07-21 12:02:30.050404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:31.282 [2024-07-21 12:02:30.050642] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:20:31.282 [2024-07-21 12:02:30.050847] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:31.282 [2024-07-21 12:02:30.051126] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:31.282 [2024-07-21 12:02:30.051303] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:20:31.282 [2024-07-21 12:02:30.051506] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:31.282 [2024-07-21 12:02:30.051635] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:20:31.282 request: 00:20:31.282 { 00:20:31.282 "name": "raid_bdev1", 00:20:31.282 "raid_level": "raid1", 00:20:31.282 "base_bdevs": [ 00:20:31.282 "malloc1", 00:20:31.282 "malloc2", 00:20:31.282 "malloc3" 00:20:31.282 ], 00:20:31.282 "superblock": false, 00:20:31.282 "method": "bdev_raid_create", 00:20:31.282 "req_id": 1 00:20:31.282 } 00:20:31.282 Got JSON-RPC error response 00:20:31.282 response: 00:20:31.282 { 00:20:31.282 "code": -17, 00:20:31.282 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:31.282 } 00:20:31.282 12:02:30 bdev_raid.raid_superblock_test -- 
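The failing call above is intentional: malloc1..malloc3 still carry the superblock written through the passthru layer, so creating a new array directly on them is rejected with JSON-RPC error -17, "Failed to create RAID bdev raid_bdev1: File exists". Run manually, the negative test is just:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # expected to fail with code -17 while the old superblock is still on the malloc bdevs
  $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 \
      || echo "bdev_raid_create failed as expected"
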
common/autotest_common.sh@651 -- # es=1 00:20:31.282 12:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:31.282 12:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:31.282 12:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:31.283 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.283 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:20:31.543 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:20:31.543 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:20:31.543 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:31.804 [2024-07-21 12:02:30.543029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:31.805 [2024-07-21 12:02:30.543407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:31.805 [2024-07-21 12:02:30.543582] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:31.805 [2024-07-21 12:02:30.543725] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:31.805 [2024-07-21 12:02:30.546653] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:31.805 [2024-07-21 12:02:30.546843] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:31.805 [2024-07-21 12:02:30.547098] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:31.805 [2024-07-21 12:02:30.547290] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:31.805 pt1 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.805 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.064 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:20:32.064 "name": "raid_bdev1", 00:20:32.064 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:32.064 "strip_size_kb": 0, 00:20:32.064 "state": "configuring", 00:20:32.064 "raid_level": "raid1", 00:20:32.064 "superblock": true, 00:20:32.064 "num_base_bdevs": 3, 00:20:32.064 "num_base_bdevs_discovered": 1, 00:20:32.064 "num_base_bdevs_operational": 3, 00:20:32.064 "base_bdevs_list": [ 00:20:32.064 { 00:20:32.064 "name": "pt1", 00:20:32.064 "uuid": "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92", 00:20:32.064 "is_configured": true, 00:20:32.064 "data_offset": 2048, 00:20:32.064 "data_size": 63488 00:20:32.064 }, 00:20:32.064 { 00:20:32.064 "name": null, 00:20:32.064 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:32.064 "is_configured": false, 00:20:32.064 "data_offset": 2048, 00:20:32.064 "data_size": 63488 00:20:32.064 }, 00:20:32.064 { 00:20:32.064 "name": null, 00:20:32.064 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:32.064 "is_configured": false, 00:20:32.064 "data_offset": 2048, 00:20:32.064 "data_size": 63488 00:20:32.064 } 00:20:32.064 ] 00:20:32.064 }' 00:20:32.064 12:02:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:32.064 12:02:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:32.630 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:20:32.630 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:32.889 [2024-07-21 12:02:31.687480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:32.889 [2024-07-21 12:02:31.687890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.889 [2024-07-21 12:02:31.688091] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:20:32.889 [2024-07-21 12:02:31.688283] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.889 [2024-07-21 12:02:31.688914] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.889 [2024-07-21 12:02:31.689084] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:32.889 [2024-07-21 12:02:31.689318] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:32.889 [2024-07-21 12:02:31.689463] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:32.889 pt2 00:20:32.889 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:33.147 [2024-07-21 12:02:31.967551] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- 
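Recreating a passthru bdev on top of a superblocked malloc bdev is enough for bdev examine to find the raid superblock and claim the member again: no bdev_raid_create is issued, yet raid_bdev1 reappears in the "configuring" state with one of three members discovered. A minimal reproduction of that step, under the same assumptions as the earlier sketches:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001

  # the array resurfaces from the superblock alone; expected output here: "configuring"
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
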
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:33.147 12:02:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:33.713 12:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:33.713 "name": "raid_bdev1", 00:20:33.713 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:33.713 "strip_size_kb": 0, 00:20:33.713 "state": "configuring", 00:20:33.713 "raid_level": "raid1", 00:20:33.713 "superblock": true, 00:20:33.713 "num_base_bdevs": 3, 00:20:33.713 "num_base_bdevs_discovered": 1, 00:20:33.713 "num_base_bdevs_operational": 3, 00:20:33.713 "base_bdevs_list": [ 00:20:33.713 { 00:20:33.713 "name": "pt1", 00:20:33.713 "uuid": "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92", 00:20:33.713 "is_configured": true, 00:20:33.713 "data_offset": 2048, 00:20:33.713 "data_size": 63488 00:20:33.713 }, 00:20:33.713 { 00:20:33.713 "name": null, 00:20:33.713 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:33.713 "is_configured": false, 00:20:33.713 "data_offset": 2048, 00:20:33.713 "data_size": 63488 00:20:33.713 }, 00:20:33.713 { 00:20:33.713 "name": null, 00:20:33.713 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:33.713 "is_configured": false, 00:20:33.714 "data_offset": 2048, 00:20:33.714 "data_size": 63488 00:20:33.714 } 00:20:33.714 ] 00:20:33.714 }' 00:20:33.714 12:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:33.714 12:02:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:34.307 12:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:20:34.307 12:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:34.307 12:02:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:34.307 [2024-07-21 12:02:33.171753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:34.307 [2024-07-21 12:02:33.172082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.307 [2024-07-21 12:02:33.172288] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:34.307 [2024-07-21 12:02:33.172434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.307 [2024-07-21 12:02:33.172983] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.307 [2024-07-21 12:02:33.173181] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:34.564 [2024-07-21 12:02:33.173425] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:34.564 [2024-07-21 12:02:33.173571] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev pt2 is claimed 00:20:34.564 pt2 00:20:34.564 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:34.564 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:34.564 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:34.821 [2024-07-21 12:02:33.439835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:34.821 [2024-07-21 12:02:33.440317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:34.821 [2024-07-21 12:02:33.440528] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:34.821 [2024-07-21 12:02:33.440682] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:34.821 [2024-07-21 12:02:33.441316] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:34.821 [2024-07-21 12:02:33.441516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:34.821 [2024-07-21 12:02:33.441768] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:34.821 [2024-07-21 12:02:33.441913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:34.821 [2024-07-21 12:02:33.442285] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:20:34.821 [2024-07-21 12:02:33.442416] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:34.821 [2024-07-21 12:02:33.442551] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:34.821 [2024-07-21 12:02:33.443103] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:20:34.821 [2024-07-21 12:02:33.443249] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:20:34.821 [2024-07-21 12:02:33.443485] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.821 pt3 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:34.821 
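Once the remaining members (pt2, then pt3) are recreated, the array assembles itself: the trace shows the raid bdev being configured (io device registered) right after pt3 is claimed, and the next check expects it online. A short way to watch that flip by hand might be:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003

  # state plus discovered member count; expected "online" and 3 once all members are back
  $rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state, .num_base_bdevs_discovered'
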
12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:34.821 "name": "raid_bdev1", 00:20:34.821 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:34.821 "strip_size_kb": 0, 00:20:34.821 "state": "online", 00:20:34.821 "raid_level": "raid1", 00:20:34.821 "superblock": true, 00:20:34.821 "num_base_bdevs": 3, 00:20:34.821 "num_base_bdevs_discovered": 3, 00:20:34.821 "num_base_bdevs_operational": 3, 00:20:34.821 "base_bdevs_list": [ 00:20:34.821 { 00:20:34.821 "name": "pt1", 00:20:34.821 "uuid": "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92", 00:20:34.821 "is_configured": true, 00:20:34.821 "data_offset": 2048, 00:20:34.821 "data_size": 63488 00:20:34.821 }, 00:20:34.821 { 00:20:34.821 "name": "pt2", 00:20:34.821 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:34.821 "is_configured": true, 00:20:34.821 "data_offset": 2048, 00:20:34.821 "data_size": 63488 00:20:34.821 }, 00:20:34.821 { 00:20:34.821 "name": "pt3", 00:20:34.821 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:34.821 "is_configured": true, 00:20:34.821 "data_offset": 2048, 00:20:34.821 "data_size": 63488 00:20:34.821 } 00:20:34.821 ] 00:20:34.821 }' 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:34.821 12:02:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:20:35.754 [2024-07-21 12:02:34.588351] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:20:35.754 "name": "raid_bdev1", 00:20:35.754 "aliases": [ 00:20:35.754 "57a1000b-3109-4ee3-aa36-8b02d2e53424" 00:20:35.754 ], 00:20:35.754 "product_name": "Raid Volume", 00:20:35.754 "block_size": 512, 00:20:35.754 "num_blocks": 63488, 00:20:35.754 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:35.754 "assigned_rate_limits": { 00:20:35.754 "rw_ios_per_sec": 0, 00:20:35.754 "rw_mbytes_per_sec": 0, 00:20:35.754 "r_mbytes_per_sec": 0, 00:20:35.754 "w_mbytes_per_sec": 0 00:20:35.754 }, 00:20:35.754 "claimed": false, 00:20:35.754 "zoned": false, 00:20:35.754 "supported_io_types": { 00:20:35.754 "read": true, 00:20:35.754 "write": true, 00:20:35.754 "unmap": false, 00:20:35.754 "write_zeroes": true, 00:20:35.754 "flush": 
false, 00:20:35.754 "reset": true, 00:20:35.754 "compare": false, 00:20:35.754 "compare_and_write": false, 00:20:35.754 "abort": false, 00:20:35.754 "nvme_admin": false, 00:20:35.754 "nvme_io": false 00:20:35.754 }, 00:20:35.754 "memory_domains": [ 00:20:35.754 { 00:20:35.754 "dma_device_id": "system", 00:20:35.754 "dma_device_type": 1 00:20:35.754 }, 00:20:35.754 { 00:20:35.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.754 "dma_device_type": 2 00:20:35.754 }, 00:20:35.754 { 00:20:35.754 "dma_device_id": "system", 00:20:35.754 "dma_device_type": 1 00:20:35.754 }, 00:20:35.754 { 00:20:35.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.754 "dma_device_type": 2 00:20:35.754 }, 00:20:35.754 { 00:20:35.754 "dma_device_id": "system", 00:20:35.754 "dma_device_type": 1 00:20:35.754 }, 00:20:35.754 { 00:20:35.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:35.754 "dma_device_type": 2 00:20:35.754 } 00:20:35.754 ], 00:20:35.754 "driver_specific": { 00:20:35.754 "raid": { 00:20:35.754 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:35.754 "strip_size_kb": 0, 00:20:35.754 "state": "online", 00:20:35.754 "raid_level": "raid1", 00:20:35.754 "superblock": true, 00:20:35.754 "num_base_bdevs": 3, 00:20:35.754 "num_base_bdevs_discovered": 3, 00:20:35.754 "num_base_bdevs_operational": 3, 00:20:35.754 "base_bdevs_list": [ 00:20:35.754 { 00:20:35.754 "name": "pt1", 00:20:35.754 "uuid": "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92", 00:20:35.754 "is_configured": true, 00:20:35.754 "data_offset": 2048, 00:20:35.754 "data_size": 63488 00:20:35.754 }, 00:20:35.754 { 00:20:35.754 "name": "pt2", 00:20:35.754 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:35.754 "is_configured": true, 00:20:35.754 "data_offset": 2048, 00:20:35.754 "data_size": 63488 00:20:35.754 }, 00:20:35.754 { 00:20:35.754 "name": "pt3", 00:20:35.754 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:35.754 "is_configured": true, 00:20:35.754 "data_offset": 2048, 00:20:35.754 "data_size": 63488 00:20:35.754 } 00:20:35.754 ] 00:20:35.754 } 00:20:35.754 } 00:20:35.754 }' 00:20:35.754 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:36.012 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:20:36.012 pt2 00:20:36.012 pt3' 00:20:36.012 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.012 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:20:36.012 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.012 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.012 "name": "pt1", 00:20:36.012 "aliases": [ 00:20:36.012 "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92" 00:20:36.012 ], 00:20:36.012 "product_name": "passthru", 00:20:36.012 "block_size": 512, 00:20:36.012 "num_blocks": 65536, 00:20:36.012 "uuid": "b3847fb2-3f4e-51c2-84b5-9c8a8dae1f92", 00:20:36.012 "assigned_rate_limits": { 00:20:36.012 "rw_ios_per_sec": 0, 00:20:36.012 "rw_mbytes_per_sec": 0, 00:20:36.012 "r_mbytes_per_sec": 0, 00:20:36.012 "w_mbytes_per_sec": 0 00:20:36.012 }, 00:20:36.012 "claimed": true, 00:20:36.012 "claim_type": "exclusive_write", 00:20:36.012 "zoned": false, 00:20:36.012 "supported_io_types": { 00:20:36.012 "read": true, 00:20:36.012 "write": true, 
00:20:36.012 "unmap": true, 00:20:36.012 "write_zeroes": true, 00:20:36.012 "flush": true, 00:20:36.012 "reset": true, 00:20:36.012 "compare": false, 00:20:36.012 "compare_and_write": false, 00:20:36.012 "abort": true, 00:20:36.012 "nvme_admin": false, 00:20:36.012 "nvme_io": false 00:20:36.012 }, 00:20:36.012 "memory_domains": [ 00:20:36.012 { 00:20:36.012 "dma_device_id": "system", 00:20:36.012 "dma_device_type": 1 00:20:36.012 }, 00:20:36.012 { 00:20:36.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.012 "dma_device_type": 2 00:20:36.012 } 00:20:36.012 ], 00:20:36.012 "driver_specific": { 00:20:36.012 "passthru": { 00:20:36.012 "name": "pt1", 00:20:36.012 "base_bdev_name": "malloc1" 00:20:36.012 } 00:20:36.012 } 00:20:36.012 }' 00:20:36.012 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.270 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.270 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.270 12:02:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.270 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:36.270 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:36.270 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.270 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:36.528 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:36.528 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.528 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:36.528 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:36.528 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:36.528 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:36.528 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:20:36.786 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:36.786 "name": "pt2", 00:20:36.786 "aliases": [ 00:20:36.786 "b3e97558-8088-5b22-b883-4b9c7bc8cd62" 00:20:36.786 ], 00:20:36.786 "product_name": "passthru", 00:20:36.786 "block_size": 512, 00:20:36.786 "num_blocks": 65536, 00:20:36.786 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:36.786 "assigned_rate_limits": { 00:20:36.786 "rw_ios_per_sec": 0, 00:20:36.786 "rw_mbytes_per_sec": 0, 00:20:36.786 "r_mbytes_per_sec": 0, 00:20:36.786 "w_mbytes_per_sec": 0 00:20:36.786 }, 00:20:36.786 "claimed": true, 00:20:36.786 "claim_type": "exclusive_write", 00:20:36.786 "zoned": false, 00:20:36.786 "supported_io_types": { 00:20:36.786 "read": true, 00:20:36.786 "write": true, 00:20:36.786 "unmap": true, 00:20:36.786 "write_zeroes": true, 00:20:36.786 "flush": true, 00:20:36.786 "reset": true, 00:20:36.786 "compare": false, 00:20:36.786 "compare_and_write": false, 00:20:36.786 "abort": true, 00:20:36.786 "nvme_admin": false, 00:20:36.786 "nvme_io": false 00:20:36.787 }, 00:20:36.787 "memory_domains": [ 00:20:36.787 { 00:20:36.787 "dma_device_id": "system", 00:20:36.787 "dma_device_type": 1 00:20:36.787 }, 00:20:36.787 { 
00:20:36.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:36.787 "dma_device_type": 2 00:20:36.787 } 00:20:36.787 ], 00:20:36.787 "driver_specific": { 00:20:36.787 "passthru": { 00:20:36.787 "name": "pt2", 00:20:36.787 "base_bdev_name": "malloc2" 00:20:36.787 } 00:20:36.787 } 00:20:36.787 }' 00:20:36.787 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.787 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:36.787 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:36.787 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.045 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.045 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:37.045 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.045 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.045 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:37.045 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.045 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.303 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:37.303 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:20:37.303 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:20:37.303 12:02:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:20:37.303 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:20:37.303 "name": "pt3", 00:20:37.303 "aliases": [ 00:20:37.303 "055a2605-dcc8-5947-810f-f468f11f85b0" 00:20:37.303 ], 00:20:37.303 "product_name": "passthru", 00:20:37.303 "block_size": 512, 00:20:37.303 "num_blocks": 65536, 00:20:37.303 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:37.303 "assigned_rate_limits": { 00:20:37.303 "rw_ios_per_sec": 0, 00:20:37.303 "rw_mbytes_per_sec": 0, 00:20:37.303 "r_mbytes_per_sec": 0, 00:20:37.303 "w_mbytes_per_sec": 0 00:20:37.303 }, 00:20:37.303 "claimed": true, 00:20:37.303 "claim_type": "exclusive_write", 00:20:37.303 "zoned": false, 00:20:37.303 "supported_io_types": { 00:20:37.303 "read": true, 00:20:37.303 "write": true, 00:20:37.303 "unmap": true, 00:20:37.303 "write_zeroes": true, 00:20:37.303 "flush": true, 00:20:37.303 "reset": true, 00:20:37.303 "compare": false, 00:20:37.303 "compare_and_write": false, 00:20:37.303 "abort": true, 00:20:37.303 "nvme_admin": false, 00:20:37.303 "nvme_io": false 00:20:37.303 }, 00:20:37.303 "memory_domains": [ 00:20:37.303 { 00:20:37.303 "dma_device_id": "system", 00:20:37.303 "dma_device_type": 1 00:20:37.303 }, 00:20:37.303 { 00:20:37.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:37.303 "dma_device_type": 2 00:20:37.303 } 00:20:37.303 ], 00:20:37.303 "driver_specific": { 00:20:37.303 "passthru": { 00:20:37.303 "name": "pt3", 00:20:37.303 "base_bdev_name": "malloc3" 00:20:37.303 } 00:20:37.303 } 00:20:37.303 }' 00:20:37.303 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.561 12:02:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:20:37.561 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:20:37.561 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.561 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:20:37.561 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:20:37.561 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.561 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:20:37.561 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:20:37.561 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.821 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:20:37.821 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:20:37.821 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:37.821 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:20:38.079 [2024-07-21 12:02:36.692788] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:38.079 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 57a1000b-3109-4ee3-aa36-8b02d2e53424 '!=' 57a1000b-3109-4ee3-aa36-8b02d2e53424 ']' 00:20:38.079 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:20:38.079 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:38.079 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:38.079 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:20:38.338 [2024-07-21 12:02:36.960697] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:38.338 12:02:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:20:38.596 12:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:38.596 "name": "raid_bdev1", 00:20:38.596 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:38.596 "strip_size_kb": 0, 00:20:38.596 "state": "online", 00:20:38.596 "raid_level": "raid1", 00:20:38.596 "superblock": true, 00:20:38.596 "num_base_bdevs": 3, 00:20:38.596 "num_base_bdevs_discovered": 2, 00:20:38.596 "num_base_bdevs_operational": 2, 00:20:38.596 "base_bdevs_list": [ 00:20:38.596 { 00:20:38.596 "name": null, 00:20:38.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.596 "is_configured": false, 00:20:38.597 "data_offset": 2048, 00:20:38.597 "data_size": 63488 00:20:38.597 }, 00:20:38.597 { 00:20:38.597 "name": "pt2", 00:20:38.597 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:38.597 "is_configured": true, 00:20:38.597 "data_offset": 2048, 00:20:38.597 "data_size": 63488 00:20:38.597 }, 00:20:38.597 { 00:20:38.597 "name": "pt3", 00:20:38.597 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:38.597 "is_configured": true, 00:20:38.597 "data_offset": 2048, 00:20:38.597 "data_size": 63488 00:20:38.597 } 00:20:38.597 ] 00:20:38.597 }' 00:20:38.597 12:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:38.597 12:02:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:39.164 12:02:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:39.423 [2024-07-21 12:02:38.096904] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:39.423 [2024-07-21 12:02:38.097136] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:39.423 [2024-07-21 12:02:38.097328] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:39.423 [2024-07-21 12:02:38.097511] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:39.423 [2024-07-21 12:02:38.097623] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:20:39.423 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.423 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:20:39.682 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:20:39.682 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:20:39.682 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:20:39.682 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:39.682 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:20:39.959 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:39.960 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:39.960 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:39.960 12:02:38 bdev_raid.raid_superblock_test -- 
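Because raid1 is redundant (has_redundancy returns 0 for it in the trace), deleting pt1 out from under the online array above does not take it offline: the state stays "online" and the first slot in base_bdevs_list simply becomes unconfigured, leaving two members. Checking that by hand could look like this:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_passthru_delete pt1        # removes a base bdev from the running raid1

  # expected to list only pt2 and pt3 as still-configured members
  $rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .base_bdevs_list[] | select(.is_configured == true).name'
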
bdev/bdev_raid.sh@505 -- # (( i++ )) 00:20:39.960 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:20:39.960 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:20:39.960 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:39.960 12:02:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:40.219 [2024-07-21 12:02:39.057042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:40.219 [2024-07-21 12:02:39.057372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:40.219 [2024-07-21 12:02:39.057589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:20:40.219 [2024-07-21 12:02:39.057748] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:40.219 [2024-07-21 12:02:39.060573] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:40.219 [2024-07-21 12:02:39.060802] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:40.219 [2024-07-21 12:02:39.061045] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:40.219 [2024-07-21 12:02:39.061203] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:40.219 pt2 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.219 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.784 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:40.784 "name": "raid_bdev1", 00:20:40.784 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:40.784 "strip_size_kb": 0, 00:20:40.784 "state": "configuring", 00:20:40.784 "raid_level": "raid1", 00:20:40.784 "superblock": true, 00:20:40.784 "num_base_bdevs": 3, 00:20:40.784 "num_base_bdevs_discovered": 1, 00:20:40.784 "num_base_bdevs_operational": 2, 00:20:40.784 "base_bdevs_list": [ 00:20:40.784 { 00:20:40.784 "name": null, 00:20:40.784 "uuid": 
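The last leg tears everything down again and, as the following lines show, rebuilds only pt2 and pt3. The array still reassembles and comes back online, but with 2 of 2 operational members rather than 3, which suggests the superblock written with -s remembers that pt1 had been removed before the teardown. A condensed hand-run version of that check (the interpretation of the counts is mine; the commands and expected values are taken from the trace):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003

  # expected "online 2/2" in this run, even though num_base_bdevs is still 3
  $rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
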
"00000000-0000-0000-0000-000000000000", 00:20:40.784 "is_configured": false, 00:20:40.784 "data_offset": 2048, 00:20:40.784 "data_size": 63488 00:20:40.784 }, 00:20:40.784 { 00:20:40.784 "name": "pt2", 00:20:40.784 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:40.784 "is_configured": true, 00:20:40.784 "data_offset": 2048, 00:20:40.784 "data_size": 63488 00:20:40.784 }, 00:20:40.784 { 00:20:40.784 "name": null, 00:20:40.784 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:40.784 "is_configured": false, 00:20:40.784 "data_offset": 2048, 00:20:40.784 "data_size": 63488 00:20:40.784 } 00:20:40.784 ] 00:20:40.784 }' 00:20:40.784 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:40.784 12:02:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:41.349 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:20:41.349 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:20:41.349 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:20:41.349 12:02:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:41.606 [2024-07-21 12:02:40.233395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:41.606 [2024-07-21 12:02:40.233675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.606 [2024-07-21 12:02:40.233894] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:41.606 [2024-07-21 12:02:40.234041] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.606 [2024-07-21 12:02:40.234704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.606 [2024-07-21 12:02:40.234871] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:41.606 [2024-07-21 12:02:40.235136] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:41.606 [2024-07-21 12:02:40.235281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:41.606 [2024-07-21 12:02:40.235559] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:20:41.606 [2024-07-21 12:02:40.235688] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:41.606 [2024-07-21 12:02:40.235822] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:41.606 [2024-07-21 12:02:40.236328] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:20:41.606 [2024-07-21 12:02:40.236462] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:20:41.606 [2024-07-21 12:02:40.236683] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:41.606 pt3 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 
-- # local raid_level=raid1 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:41.606 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:41.864 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:41.864 "name": "raid_bdev1", 00:20:41.864 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:41.864 "strip_size_kb": 0, 00:20:41.864 "state": "online", 00:20:41.864 "raid_level": "raid1", 00:20:41.864 "superblock": true, 00:20:41.864 "num_base_bdevs": 3, 00:20:41.864 "num_base_bdevs_discovered": 2, 00:20:41.864 "num_base_bdevs_operational": 2, 00:20:41.864 "base_bdevs_list": [ 00:20:41.864 { 00:20:41.864 "name": null, 00:20:41.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:41.864 "is_configured": false, 00:20:41.864 "data_offset": 2048, 00:20:41.864 "data_size": 63488 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "name": "pt2", 00:20:41.864 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:41.864 "is_configured": true, 00:20:41.864 "data_offset": 2048, 00:20:41.864 "data_size": 63488 00:20:41.864 }, 00:20:41.864 { 00:20:41.864 "name": "pt3", 00:20:41.864 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:41.864 "is_configured": true, 00:20:41.864 "data_offset": 2048, 00:20:41.864 "data_size": 63488 00:20:41.864 } 00:20:41.864 ] 00:20:41.864 }' 00:20:41.864 12:02:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:41.864 12:02:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:42.429 12:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:42.687 [2024-07-21 12:02:41.455317] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:42.687 [2024-07-21 12:02:41.455507] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:42.687 [2024-07-21 12:02:41.455698] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:42.687 [2024-07-21 12:02:41.455822] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:42.687 [2024-07-21 12:02:41.455946] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:20:42.687 12:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:20:42.687 12:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.945 12:02:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@526 -- # raid_bdev= 00:20:42.945 12:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:20:42.945 12:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:20:42.945 12:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:20:42.945 12:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:20:43.203 12:02:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:43.462 [2024-07-21 12:02:42.183466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:43.462 [2024-07-21 12:02:42.183792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.462 [2024-07-21 12:02:42.183955] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:43.462 [2024-07-21 12:02:42.184101] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.462 [2024-07-21 12:02:42.186733] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:43.462 [2024-07-21 12:02:42.186929] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:43.462 [2024-07-21 12:02:42.187167] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:43.462 [2024-07-21 12:02:42.187321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:43.462 [2024-07-21 12:02:42.187627] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:43.462 [2024-07-21 12:02:42.187783] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.462 [2024-07-21 12:02:42.187852] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:20:43.462 [2024-07-21 12:02:42.188054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:43.462 pt1 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.462 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:43.720 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:43.720 "name": "raid_bdev1", 00:20:43.720 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:43.720 "strip_size_kb": 0, 00:20:43.720 "state": "configuring", 00:20:43.720 "raid_level": "raid1", 00:20:43.720 "superblock": true, 00:20:43.720 "num_base_bdevs": 3, 00:20:43.720 "num_base_bdevs_discovered": 1, 00:20:43.720 "num_base_bdevs_operational": 2, 00:20:43.720 "base_bdevs_list": [ 00:20:43.720 { 00:20:43.720 "name": null, 00:20:43.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.720 "is_configured": false, 00:20:43.720 "data_offset": 2048, 00:20:43.720 "data_size": 63488 00:20:43.720 }, 00:20:43.720 { 00:20:43.720 "name": "pt2", 00:20:43.720 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:43.720 "is_configured": true, 00:20:43.720 "data_offset": 2048, 00:20:43.720 "data_size": 63488 00:20:43.720 }, 00:20:43.720 { 00:20:43.720 "name": null, 00:20:43.720 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:43.720 "is_configured": false, 00:20:43.720 "data_offset": 2048, 00:20:43.720 "data_size": 63488 00:20:43.720 } 00:20:43.720 ] 00:20:43.720 }' 00:20:43.720 12:02:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:43.720 12:02:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:44.286 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:20:44.286 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:44.544 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:20:44.544 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:20:44.802 [2024-07-21 12:02:43.579126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:20:44.802 [2024-07-21 12:02:43.579623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:44.802 [2024-07-21 12:02:43.579862] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:20:44.802 [2024-07-21 12:02:43.580038] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:44.802 [2024-07-21 12:02:43.580794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:44.802 [2024-07-21 12:02:43.581023] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:20:44.802 [2024-07-21 12:02:43.581301] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:20:44.802 [2024-07-21 12:02:43.581472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:20:44.802 [2024-07-21 12:02:43.581765] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:20:44.802 [2024-07-21 12:02:43.581933] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:44.802 [2024-07-21 
12:02:43.582112] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:20:44.802 [2024-07-21 12:02:43.582714] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:20:44.802 [2024-07-21 12:02:43.582878] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:20:44.802 [2024-07-21 12:02:43.583236] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.802 pt3 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.802 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:45.061 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:45.061 "name": "raid_bdev1", 00:20:45.061 "uuid": "57a1000b-3109-4ee3-aa36-8b02d2e53424", 00:20:45.061 "strip_size_kb": 0, 00:20:45.061 "state": "online", 00:20:45.061 "raid_level": "raid1", 00:20:45.061 "superblock": true, 00:20:45.061 "num_base_bdevs": 3, 00:20:45.061 "num_base_bdevs_discovered": 2, 00:20:45.061 "num_base_bdevs_operational": 2, 00:20:45.061 "base_bdevs_list": [ 00:20:45.061 { 00:20:45.061 "name": null, 00:20:45.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:45.061 "is_configured": false, 00:20:45.061 "data_offset": 2048, 00:20:45.061 "data_size": 63488 00:20:45.061 }, 00:20:45.061 { 00:20:45.061 "name": "pt2", 00:20:45.061 "uuid": "b3e97558-8088-5b22-b883-4b9c7bc8cd62", 00:20:45.061 "is_configured": true, 00:20:45.061 "data_offset": 2048, 00:20:45.061 "data_size": 63488 00:20:45.061 }, 00:20:45.061 { 00:20:45.061 "name": "pt3", 00:20:45.061 "uuid": "055a2605-dcc8-5947-810f-f468f11f85b0", 00:20:45.061 "is_configured": true, 00:20:45.061 "data_offset": 2048, 00:20:45.061 "data_size": 63488 00:20:45.061 } 00:20:45.061 ] 00:20:45.061 }' 00:20:45.061 12:02:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:45.061 12:02:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:45.996 12:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:20:45.996 12:02:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:45.996 12:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:20:45.996 12:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:45.996 12:02:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:20:46.254 [2024-07-21 12:02:45.019711] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 57a1000b-3109-4ee3-aa36-8b02d2e53424 '!=' 57a1000b-3109-4ee3-aa36-8b02d2e53424 ']' 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 143486 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 143486 ']' 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 143486 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 143486 00:20:46.254 killing process with pid 143486 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 143486' 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 143486 00:20:46.254 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 143486 00:20:46.254 [2024-07-21 12:02:45.066913] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:46.255 [2024-07-21 12:02:45.067042] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.255 [2024-07-21 12:02:45.067117] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:46.255 [2024-07-21 12:02:45.067129] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:20:46.255 [2024-07-21 12:02:45.100203] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:46.513 12:02:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:20:46.513 00:20:46.513 real 0m23.438s 00:20:46.513 user 0m44.439s 00:20:46.513 sys 0m2.721s 00:20:46.513 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:46.513 12:02:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.513 ************************************ 00:20:46.513 END TEST raid_superblock_test 00:20:46.513 ************************************ 00:20:46.772 12:02:45 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:20:46.772 12:02:45 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:46.772 12:02:45 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:46.772 12:02:45 bdev_raid -- common/autotest_common.sh@10 -- # set 
+x 00:20:46.772 ************************************ 00:20:46.772 START TEST raid_read_error_test 00:20:46.772 ************************************ 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 3 read 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.45Xop0t445 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=144240 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 144240 /var/tmp/spdk-raid.sock 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 144240 ']' 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:46.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:46.772 12:02:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:46.772 [2024-07-21 12:02:45.501408] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:46.772 [2024-07-21 12:02:45.503256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144240 ] 00:20:47.046 [2024-07-21 12:02:45.670511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.046 [2024-07-21 12:02:45.768170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.046 [2024-07-21 12:02:45.826155] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:47.992 12:02:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:47.992 12:02:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:20:47.992 12:02:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:47.992 12:02:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:47.992 BaseBdev1_malloc 00:20:47.992 12:02:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:48.250 true 00:20:48.250 12:02:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:48.509 [2024-07-21 12:02:47.179941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:48.509 [2024-07-21 12:02:47.180402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.509 [2024-07-21 12:02:47.180651] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:48.509 [2024-07-21 12:02:47.180832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.509 [2024-07-21 12:02:47.183739] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.509 [2024-07-21 12:02:47.183922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:48.509 BaseBdev1 00:20:48.509 12:02:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:48.509 12:02:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:48.767 BaseBdev2_malloc 00:20:48.767 12:02:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:49.026 true 00:20:49.026 12:02:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:49.284 [2024-07-21 12:02:47.987550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:49.284 [2024-07-21 12:02:47.987940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.284 [2024-07-21 12:02:47.988180] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:20:49.284 [2024-07-21 12:02:47.988350] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.284 [2024-07-21 12:02:47.991128] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.284 [2024-07-21 12:02:47.991341] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:49.284 BaseBdev2 00:20:49.284 12:02:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:49.284 12:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:49.542 BaseBdev3_malloc 00:20:49.542 12:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:49.800 true 00:20:49.800 12:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:50.058 [2024-07-21 12:02:48.776292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:50.058 [2024-07-21 12:02:48.776626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.059 [2024-07-21 12:02:48.776837] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:50.059 [2024-07-21 12:02:48.777014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.059 [2024-07-21 12:02:48.779817] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.059 [2024-07-21 12:02:48.780001] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:50.059 BaseBdev3 00:20:50.059 12:02:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:50.316 [2024-07-21 12:02:49.040584] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:50.316 [2024-07-21 12:02:49.043075] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:50.316 [2024-07-21 12:02:49.043322] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:50.316 [2024-07-21 12:02:49.043737] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:50.316 [2024-07-21 12:02:49.043878] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:50.316 [2024-07-21 12:02:49.044120] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:50.316 [2024-07-21 12:02:49.044783] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:50.316 [2024-07-21 12:02:49.044931] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:50.316 [2024-07-21 12:02:49.045293] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.316 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.586 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:50.586 "name": "raid_bdev1", 00:20:50.586 "uuid": "ddd76a74-c953-4dee-9d01-baa4cfc64674", 00:20:50.586 "strip_size_kb": 0, 00:20:50.586 "state": "online", 00:20:50.586 "raid_level": "raid1", 00:20:50.586 "superblock": true, 00:20:50.586 "num_base_bdevs": 3, 00:20:50.586 "num_base_bdevs_discovered": 3, 00:20:50.586 "num_base_bdevs_operational": 3, 00:20:50.586 "base_bdevs_list": [ 00:20:50.586 { 00:20:50.586 "name": "BaseBdev1", 00:20:50.586 "uuid": "ee73606f-381e-502b-af37-8ed37a9b7d8c", 00:20:50.586 "is_configured": true, 00:20:50.586 "data_offset": 2048, 00:20:50.586 "data_size": 63488 00:20:50.586 }, 00:20:50.586 { 00:20:50.586 "name": "BaseBdev2", 00:20:50.586 "uuid": "195bf5ab-321c-5fdd-bac3-a1aac20700b3", 00:20:50.586 "is_configured": true, 00:20:50.586 "data_offset": 2048, 00:20:50.586 "data_size": 63488 00:20:50.586 }, 00:20:50.586 { 00:20:50.586 "name": "BaseBdev3", 00:20:50.586 "uuid": "e2a76c29-15db-57bb-97ae-4a87b0e462d7", 00:20:50.586 "is_configured": true, 00:20:50.586 "data_offset": 2048, 00:20:50.586 "data_size": 63488 00:20:50.586 } 00:20:50.586 ] 00:20:50.586 }' 00:20:50.586 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:50.586 12:02:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.150 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:51.150 12:02:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 
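For reference, the fixture this raid_read_error_test run builds over the RPC socket condenses to the short sequence below. It is a sketch reconstructed only from commands visible in this trace (the socket path, bdev names, malloc sizes, and jq filter are copied from it), not the test script itself; it assumes a bdevperf process is already listening on /var/tmp/spdk-raid.sock, and it flattens the script's real control flow, in which the bdevperf I/O run is started before the error is injected.

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3; do
  # 32 MB malloc bdev with 512-byte blocks, wrapped in an error-injection bdev
  # (shown in the trace as EE_BaseBdev${i}_malloc) with a passthru bdev on top
  $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
  $RPC bdev_error_create "BaseBdev${i}_malloc"
  $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# assemble the three passthru bdevs into a raid1 bdev with an on-disk superblock (-s)
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'   # expect "state": "online"
# inject a read failure on the first base bdev, then drive mixed I/O through bdevperf
$RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

Because raid1 keeps a full copy of the data on every base bdev, the injected read failures are expected to be served from the remaining copies without failing the bdevperf job, which is what the fail_per_s=0.00 check near the end of this test asserts.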
00:20:51.150 [2024-07-21 12:02:49.969929] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:52.081 12:02:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.339 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.597 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:52.597 "name": "raid_bdev1", 00:20:52.597 "uuid": "ddd76a74-c953-4dee-9d01-baa4cfc64674", 00:20:52.597 "strip_size_kb": 0, 00:20:52.597 "state": "online", 00:20:52.597 "raid_level": "raid1", 00:20:52.597 "superblock": true, 00:20:52.597 "num_base_bdevs": 3, 00:20:52.597 "num_base_bdevs_discovered": 3, 00:20:52.597 "num_base_bdevs_operational": 3, 00:20:52.597 "base_bdevs_list": [ 00:20:52.597 { 00:20:52.597 "name": "BaseBdev1", 00:20:52.597 "uuid": "ee73606f-381e-502b-af37-8ed37a9b7d8c", 00:20:52.597 "is_configured": true, 00:20:52.597 "data_offset": 2048, 00:20:52.597 "data_size": 63488 00:20:52.597 }, 00:20:52.597 { 00:20:52.597 "name": "BaseBdev2", 00:20:52.597 "uuid": "195bf5ab-321c-5fdd-bac3-a1aac20700b3", 00:20:52.597 "is_configured": true, 00:20:52.597 "data_offset": 2048, 00:20:52.597 "data_size": 63488 00:20:52.597 }, 00:20:52.597 { 00:20:52.597 "name": "BaseBdev3", 00:20:52.597 "uuid": "e2a76c29-15db-57bb-97ae-4a87b0e462d7", 00:20:52.597 "is_configured": true, 00:20:52.597 "data_offset": 2048, 00:20:52.597 "data_size": 63488 00:20:52.597 } 00:20:52.597 ] 00:20:52.597 }' 00:20:52.597 12:02:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:52.597 12:02:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:20:53.531 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:53.531 [2024-07-21 12:02:52.371605] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:53.531 [2024-07-21 12:02:52.371869] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:53.531 [2024-07-21 12:02:52.375037] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.531 [2024-07-21 12:02:52.375234] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:53.531 [2024-07-21 12:02:52.375540] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.531 [2024-07-21 12:02:52.375681] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:20:53.531 0 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 144240 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 144240 ']' 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 144240 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144240 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144240' 00:20:53.790 killing process with pid 144240 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 144240 00:20:53.790 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 144240 00:20:53.790 [2024-07-21 12:02:52.424300] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:53.790 [2024-07-21 12:02:52.451122] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:54.048 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.45Xop0t445 00:20:54.048 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:20:54.048 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:20:54.048 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:20:54.048 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:20:54.048 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:20:54.048 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:20:54.048 12:02:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:20:54.048 00:20:54.048 real 0m7.286s 00:20:54.048 user 0m11.958s 00:20:54.048 sys 0m0.812s 00:20:54.048 12:02:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:54.048 12:02:52 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.048 ************************************ 00:20:54.048 END TEST raid_read_error_test 00:20:54.048 ************************************ 00:20:54.048 12:02:52 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:20:54.048 12:02:52 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:54.048 12:02:52 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:54.048 12:02:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:54.048 ************************************ 00:20:54.048 START TEST raid_write_error_test 00:20:54.048 ************************************ 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 3 write 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.sq9yvZSz7S 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@808 -- # raid_pid=144435 00:20:54.048 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 144435 /var/tmp/spdk-raid.sock 00:20:54.049 12:02:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:20:54.049 12:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 144435 ']' 00:20:54.049 12:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:54.049 12:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:54.049 12:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:54.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:54.049 12:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:54.049 12:02:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:54.049 [2024-07-21 12:02:52.832519] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:20:54.049 [2024-07-21 12:02:52.833010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144435 ] 00:20:54.307 [2024-07-21 12:02:52.986710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.307 [2024-07-21 12:02:53.074119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.307 [2024-07-21 12:02:53.129453] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:55.241 12:02:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:55.241 12:02:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:20:55.241 12:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:55.241 12:02:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:55.499 BaseBdev1_malloc 00:20:55.499 12:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:20:55.499 true 00:20:55.499 12:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:20:55.757 [2024-07-21 12:02:54.547140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:20:55.757 [2024-07-21 12:02:54.547587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.757 [2024-07-21 12:02:54.547772] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:55.757 [2024-07-21 12:02:54.547942] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.757 [2024-07-21 12:02:54.550824] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:55.757 [2024-07-21 12:02:54.551022] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:55.757 BaseBdev1 00:20:55.757 12:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:55.757 12:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:56.015 BaseBdev2_malloc 00:20:56.015 12:02:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:20:56.272 true 00:20:56.272 12:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:20:56.531 [2024-07-21 12:02:55.342362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:20:56.531 [2024-07-21 12:02:55.342815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:56.531 [2024-07-21 12:02:55.343021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:20:56.531 [2024-07-21 12:02:55.343194] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:56.531 [2024-07-21 12:02:55.345837] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:56.531 [2024-07-21 12:02:55.346034] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:56.531 BaseBdev2 00:20:56.531 12:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:20:56.531 12:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:56.789 BaseBdev3_malloc 00:20:56.789 12:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:20:57.047 true 00:20:57.047 12:02:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:20:57.306 [2024-07-21 12:02:56.090469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:20:57.306 [2024-07-21 12:02:56.090831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:57.306 [2024-07-21 12:02:56.091011] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:57.306 [2024-07-21 12:02:56.091203] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:57.306 [2024-07-21 12:02:56.093809] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:57.306 [2024-07-21 12:02:56.094011] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:57.306 BaseBdev3 00:20:57.306 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:20:57.564 [2024-07-21 12:02:56.314658] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:57.564 [2024-07-21 12:02:56.317158] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:57.564 [2024-07-21 12:02:56.317409] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:57.564 [2024-07-21 12:02:56.317812] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:57.564 [2024-07-21 12:02:56.317948] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:57.564 [2024-07-21 12:02:56.318207] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:57.564 [2024-07-21 12:02:56.318820] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:57.564 [2024-07-21 12:02:56.318955] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:57.564 [2024-07-21 12:02:56.319272] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.564 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.822 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:20:57.822 "name": "raid_bdev1", 00:20:57.822 "uuid": "dc650cb5-e5c5-4701-b510-734635bc2a4e", 00:20:57.822 "strip_size_kb": 0, 00:20:57.822 "state": "online", 00:20:57.822 "raid_level": "raid1", 00:20:57.822 "superblock": true, 00:20:57.822 "num_base_bdevs": 3, 00:20:57.822 "num_base_bdevs_discovered": 3, 00:20:57.822 "num_base_bdevs_operational": 3, 00:20:57.822 "base_bdevs_list": [ 00:20:57.822 { 00:20:57.822 "name": "BaseBdev1", 00:20:57.822 "uuid": "54627e44-e8f3-5f50-9ba3-546df7914ef8", 00:20:57.822 "is_configured": true, 00:20:57.822 "data_offset": 2048, 00:20:57.822 "data_size": 63488 00:20:57.822 }, 00:20:57.822 { 00:20:57.822 "name": "BaseBdev2", 00:20:57.822 "uuid": "35b27993-a600-50c0-862e-0df21fe3c870", 00:20:57.822 "is_configured": true, 00:20:57.822 "data_offset": 2048, 00:20:57.822 "data_size": 63488 00:20:57.822 }, 00:20:57.822 { 00:20:57.822 "name": "BaseBdev3", 00:20:57.822 "uuid": 
"dfb3a9d5-3e82-5910-8c79-c068440481f6", 00:20:57.822 "is_configured": true, 00:20:57.822 "data_offset": 2048, 00:20:57.822 "data_size": 63488 00:20:57.822 } 00:20:57.822 ] 00:20:57.822 }' 00:20:57.822 12:02:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:20:57.822 12:02:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:20:58.388 12:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:20:58.388 12:02:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:58.646 [2024-07-21 12:02:57.319896] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:20:59.606 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:20:59.864 [2024-07-21 12:02:58.487597] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:20:59.864 [2024-07-21 12:02:58.487945] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:59.864 [2024-07-21 12:02:58.488282] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ad0 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.864 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.122 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:00.122 "name": "raid_bdev1", 00:21:00.122 "uuid": "dc650cb5-e5c5-4701-b510-734635bc2a4e", 00:21:00.122 "strip_size_kb": 0, 00:21:00.122 "state": "online", 00:21:00.122 
"raid_level": "raid1", 00:21:00.122 "superblock": true, 00:21:00.122 "num_base_bdevs": 3, 00:21:00.122 "num_base_bdevs_discovered": 2, 00:21:00.122 "num_base_bdevs_operational": 2, 00:21:00.122 "base_bdevs_list": [ 00:21:00.122 { 00:21:00.122 "name": null, 00:21:00.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:00.122 "is_configured": false, 00:21:00.122 "data_offset": 2048, 00:21:00.122 "data_size": 63488 00:21:00.122 }, 00:21:00.122 { 00:21:00.122 "name": "BaseBdev2", 00:21:00.122 "uuid": "35b27993-a600-50c0-862e-0df21fe3c870", 00:21:00.122 "is_configured": true, 00:21:00.122 "data_offset": 2048, 00:21:00.122 "data_size": 63488 00:21:00.122 }, 00:21:00.122 { 00:21:00.122 "name": "BaseBdev3", 00:21:00.122 "uuid": "dfb3a9d5-3e82-5910-8c79-c068440481f6", 00:21:00.122 "is_configured": true, 00:21:00.122 "data_offset": 2048, 00:21:00.122 "data_size": 63488 00:21:00.122 } 00:21:00.122 ] 00:21:00.122 }' 00:21:00.122 12:02:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:00.122 12:02:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:00.689 12:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:00.947 [2024-07-21 12:02:59.666428] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:00.947 [2024-07-21 12:02:59.666798] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:00.947 [2024-07-21 12:02:59.669715] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:00.947 [2024-07-21 12:02:59.669932] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:00.947 [2024-07-21 12:02:59.670062] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:00.947 [2024-07-21 12:02:59.670199] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:00.947 0 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 144435 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 144435 ']' 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 144435 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144435 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144435' 00:21:00.947 killing process with pid 144435 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 144435 00:21:00.947 12:02:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 144435 00:21:00.947 [2024-07-21 12:02:59.727174] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:00.947 [2024-07-21 12:02:59.751726] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.sq9yvZSz7S 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:21:01.205 00:21:01.205 real 0m7.256s 00:21:01.205 user 0m11.719s 00:21:01.205 sys 0m0.997s 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:01.205 12:03:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.205 ************************************ 00:21:01.205 END TEST raid_write_error_test 00:21:01.205 ************************************ 00:21:01.462 12:03:00 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:21:01.462 12:03:00 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:21:01.462 12:03:00 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:21:01.462 12:03:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:01.462 12:03:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:01.462 12:03:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:01.462 ************************************ 00:21:01.462 START TEST raid_state_function_test 00:21:01.462 ************************************ 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 false 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:01.462 12:03:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:01.462 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=144628 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:01.463 Process raid pid: 144628 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 144628' 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 144628 /var/tmp/spdk-raid.sock 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 144628 ']' 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:01.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:01.463 12:03:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:01.463 [2024-07-21 12:03:00.161279] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:21:01.463 [2024-07-21 12:03:00.161759] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.463 [2024-07-21 12:03:00.324283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.720 [2024-07-21 12:03:00.420987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.720 [2024-07-21 12:03:00.475952] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:02.284 12:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:02.284 12:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:21:02.284 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:02.541 [2024-07-21 12:03:01.388618] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:02.541 [2024-07-21 12:03:01.389076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:02.541 [2024-07-21 12:03:01.389213] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:02.541 [2024-07-21 12:03:01.389282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:02.541 [2024-07-21 12:03:01.389492] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:02.541 [2024-07-21 12:03:01.389584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:02.541 [2024-07-21 12:03:01.389763] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:02.541 [2024-07-21 12:03:01.389840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:02.798 "name": "Existed_Raid", 00:21:02.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.798 "strip_size_kb": 64, 00:21:02.798 "state": "configuring", 00:21:02.798 "raid_level": "raid0", 00:21:02.798 "superblock": false, 00:21:02.798 "num_base_bdevs": 4, 00:21:02.798 "num_base_bdevs_discovered": 0, 00:21:02.798 "num_base_bdevs_operational": 4, 00:21:02.798 "base_bdevs_list": [ 00:21:02.798 { 00:21:02.798 "name": "BaseBdev1", 00:21:02.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.798 "is_configured": false, 00:21:02.798 "data_offset": 0, 00:21:02.798 "data_size": 0 00:21:02.798 }, 00:21:02.798 { 00:21:02.798 "name": "BaseBdev2", 00:21:02.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.798 "is_configured": false, 00:21:02.798 "data_offset": 0, 00:21:02.798 "data_size": 0 00:21:02.798 }, 00:21:02.798 { 00:21:02.798 "name": "BaseBdev3", 00:21:02.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.798 "is_configured": false, 00:21:02.798 "data_offset": 0, 00:21:02.798 "data_size": 0 00:21:02.798 }, 00:21:02.798 { 00:21:02.798 "name": "BaseBdev4", 00:21:02.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.798 "is_configured": false, 00:21:02.798 "data_offset": 0, 00:21:02.798 "data_size": 0 00:21:02.798 } 00:21:02.798 ] 00:21:02.798 }' 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:02.798 12:03:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:03.731 12:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:03.731 [2024-07-21 12:03:02.568774] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:03.731 [2024-07-21 12:03:02.569023] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:21:03.731 12:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:03.989 [2024-07-21 12:03:02.836849] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:03.989 [2024-07-21 12:03:02.837219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:03.989 [2024-07-21 12:03:02.837356] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:03.989 [2024-07-21 12:03:02.837461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:03.989 [2024-07-21 12:03:02.837583] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:03.989 [2024-07-21 12:03:02.837725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:03.989 [2024-07-21 12:03:02.837839] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:03.989 [2024-07-21 12:03:02.837908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:04.247 12:03:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:04.504 [2024-07-21 12:03:03.115979] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:04.504 BaseBdev1 00:21:04.504 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:04.504 12:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:04.504 12:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:04.504 12:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:04.504 12:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:04.504 12:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:04.504 12:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:04.761 12:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:04.761 [ 00:21:04.761 { 00:21:04.761 "name": "BaseBdev1", 00:21:04.761 "aliases": [ 00:21:04.761 "e179ad21-6e80-4af1-8f7f-2640b01541a7" 00:21:04.761 ], 00:21:04.761 "product_name": "Malloc disk", 00:21:04.761 "block_size": 512, 00:21:04.761 "num_blocks": 65536, 00:21:04.761 "uuid": "e179ad21-6e80-4af1-8f7f-2640b01541a7", 00:21:04.761 "assigned_rate_limits": { 00:21:04.761 "rw_ios_per_sec": 0, 00:21:04.761 "rw_mbytes_per_sec": 0, 00:21:04.761 "r_mbytes_per_sec": 0, 00:21:04.761 "w_mbytes_per_sec": 0 00:21:04.761 }, 00:21:04.761 "claimed": true, 00:21:04.761 "claim_type": "exclusive_write", 00:21:04.761 "zoned": false, 00:21:04.761 "supported_io_types": { 00:21:04.761 "read": true, 00:21:04.761 "write": true, 00:21:04.761 "unmap": true, 00:21:04.761 "write_zeroes": true, 00:21:04.761 "flush": true, 00:21:04.761 "reset": true, 00:21:04.761 "compare": false, 00:21:04.761 "compare_and_write": false, 00:21:04.761 "abort": true, 00:21:04.761 "nvme_admin": false, 00:21:04.761 "nvme_io": false 00:21:04.761 }, 00:21:04.761 "memory_domains": [ 00:21:04.761 { 00:21:04.761 "dma_device_id": "system", 00:21:04.761 "dma_device_type": 1 00:21:04.761 }, 00:21:04.761 { 00:21:04.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:04.761 "dma_device_type": 2 00:21:04.761 } 00:21:04.761 ], 00:21:04.761 "driver_specific": {} 00:21:04.761 } 00:21:04.761 ] 00:21:04.761 12:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:04.761 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:04.761 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:04.761 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:04.762 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:04.762 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:04.762 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:04.762 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:21:04.762 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:04.762 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:04.762 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:04.762 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:04.762 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:05.019 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:05.019 "name": "Existed_Raid", 00:21:05.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.019 "strip_size_kb": 64, 00:21:05.020 "state": "configuring", 00:21:05.020 "raid_level": "raid0", 00:21:05.020 "superblock": false, 00:21:05.020 "num_base_bdevs": 4, 00:21:05.020 "num_base_bdevs_discovered": 1, 00:21:05.020 "num_base_bdevs_operational": 4, 00:21:05.020 "base_bdevs_list": [ 00:21:05.020 { 00:21:05.020 "name": "BaseBdev1", 00:21:05.020 "uuid": "e179ad21-6e80-4af1-8f7f-2640b01541a7", 00:21:05.020 "is_configured": true, 00:21:05.020 "data_offset": 0, 00:21:05.020 "data_size": 65536 00:21:05.020 }, 00:21:05.020 { 00:21:05.020 "name": "BaseBdev2", 00:21:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.020 "is_configured": false, 00:21:05.020 "data_offset": 0, 00:21:05.020 "data_size": 0 00:21:05.020 }, 00:21:05.020 { 00:21:05.020 "name": "BaseBdev3", 00:21:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.020 "is_configured": false, 00:21:05.020 "data_offset": 0, 00:21:05.020 "data_size": 0 00:21:05.020 }, 00:21:05.020 { 00:21:05.020 "name": "BaseBdev4", 00:21:05.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:05.020 "is_configured": false, 00:21:05.020 "data_offset": 0, 00:21:05.020 "data_size": 0 00:21:05.020 } 00:21:05.020 ] 00:21:05.020 }' 00:21:05.020 12:03:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:05.020 12:03:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:05.953 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:05.953 [2024-07-21 12:03:04.728410] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:05.953 [2024-07-21 12:03:04.728828] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:05.953 12:03:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:06.212 [2024-07-21 12:03:05.044533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:06.212 [2024-07-21 12:03:05.046926] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:06.212 [2024-07-21 12:03:05.047147] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:06.212 [2024-07-21 12:03:05.047313] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:06.212 [2024-07-21 12:03:05.047457] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:06.212 [2024-07-21 12:03:05.047581] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:06.212 [2024-07-21 12:03:05.047741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.212 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:06.778 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:06.778 "name": "Existed_Raid", 00:21:06.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.778 "strip_size_kb": 64, 00:21:06.778 "state": "configuring", 00:21:06.778 "raid_level": "raid0", 00:21:06.778 "superblock": false, 00:21:06.778 "num_base_bdevs": 4, 00:21:06.778 "num_base_bdevs_discovered": 1, 00:21:06.778 "num_base_bdevs_operational": 4, 00:21:06.778 "base_bdevs_list": [ 00:21:06.778 { 00:21:06.778 "name": "BaseBdev1", 00:21:06.778 "uuid": "e179ad21-6e80-4af1-8f7f-2640b01541a7", 00:21:06.778 "is_configured": true, 00:21:06.778 "data_offset": 0, 00:21:06.778 "data_size": 65536 00:21:06.778 }, 00:21:06.778 { 00:21:06.778 "name": "BaseBdev2", 00:21:06.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.778 "is_configured": false, 00:21:06.778 "data_offset": 0, 00:21:06.778 "data_size": 0 00:21:06.778 }, 00:21:06.778 { 00:21:06.778 "name": "BaseBdev3", 00:21:06.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.778 "is_configured": false, 00:21:06.778 "data_offset": 0, 00:21:06.778 "data_size": 0 00:21:06.778 }, 00:21:06.778 { 00:21:06.778 "name": "BaseBdev4", 00:21:06.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.778 "is_configured": false, 00:21:06.778 "data_offset": 0, 00:21:06.778 "data_size": 0 00:21:06.778 } 00:21:06.778 ] 00:21:06.778 }' 00:21:06.778 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:21:06.778 12:03:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:07.341 12:03:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:07.342 [2024-07-21 12:03:06.186171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:07.342 BaseBdev2 00:21:07.342 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:07.342 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:07.342 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:07.342 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:07.342 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:07.342 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:07.342 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:07.915 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:07.915 [ 00:21:07.915 { 00:21:07.915 "name": "BaseBdev2", 00:21:07.915 "aliases": [ 00:21:07.915 "65ffc59a-5d96-4f52-8f70-499c546ad6d5" 00:21:07.915 ], 00:21:07.915 "product_name": "Malloc disk", 00:21:07.915 "block_size": 512, 00:21:07.915 "num_blocks": 65536, 00:21:07.915 "uuid": "65ffc59a-5d96-4f52-8f70-499c546ad6d5", 00:21:07.915 "assigned_rate_limits": { 00:21:07.915 "rw_ios_per_sec": 0, 00:21:07.915 "rw_mbytes_per_sec": 0, 00:21:07.915 "r_mbytes_per_sec": 0, 00:21:07.915 "w_mbytes_per_sec": 0 00:21:07.915 }, 00:21:07.915 "claimed": true, 00:21:07.915 "claim_type": "exclusive_write", 00:21:07.915 "zoned": false, 00:21:07.915 "supported_io_types": { 00:21:07.915 "read": true, 00:21:07.915 "write": true, 00:21:07.915 "unmap": true, 00:21:07.915 "write_zeroes": true, 00:21:07.915 "flush": true, 00:21:07.915 "reset": true, 00:21:07.915 "compare": false, 00:21:07.915 "compare_and_write": false, 00:21:07.915 "abort": true, 00:21:07.915 "nvme_admin": false, 00:21:07.915 "nvme_io": false 00:21:07.915 }, 00:21:07.915 "memory_domains": [ 00:21:07.915 { 00:21:07.915 "dma_device_id": "system", 00:21:07.915 "dma_device_type": 1 00:21:07.915 }, 00:21:07.915 { 00:21:07.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:07.916 "dma_device_type": 2 00:21:07.916 } 00:21:07.916 ], 00:21:07.916 "driver_specific": {} 00:21:07.916 } 00:21:07.916 ] 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=configuring 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.916 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:08.193 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:08.193 "name": "Existed_Raid", 00:21:08.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.193 "strip_size_kb": 64, 00:21:08.193 "state": "configuring", 00:21:08.193 "raid_level": "raid0", 00:21:08.193 "superblock": false, 00:21:08.193 "num_base_bdevs": 4, 00:21:08.193 "num_base_bdevs_discovered": 2, 00:21:08.193 "num_base_bdevs_operational": 4, 00:21:08.193 "base_bdevs_list": [ 00:21:08.193 { 00:21:08.193 "name": "BaseBdev1", 00:21:08.193 "uuid": "e179ad21-6e80-4af1-8f7f-2640b01541a7", 00:21:08.193 "is_configured": true, 00:21:08.193 "data_offset": 0, 00:21:08.193 "data_size": 65536 00:21:08.193 }, 00:21:08.193 { 00:21:08.193 "name": "BaseBdev2", 00:21:08.193 "uuid": "65ffc59a-5d96-4f52-8f70-499c546ad6d5", 00:21:08.193 "is_configured": true, 00:21:08.193 "data_offset": 0, 00:21:08.193 "data_size": 65536 00:21:08.193 }, 00:21:08.193 { 00:21:08.193 "name": "BaseBdev3", 00:21:08.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.193 "is_configured": false, 00:21:08.193 "data_offset": 0, 00:21:08.193 "data_size": 0 00:21:08.193 }, 00:21:08.193 { 00:21:08.193 "name": "BaseBdev4", 00:21:08.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.193 "is_configured": false, 00:21:08.193 "data_offset": 0, 00:21:08.193 "data_size": 0 00:21:08.193 } 00:21:08.193 ] 00:21:08.193 }' 00:21:08.193 12:03:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:08.193 12:03:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:08.757 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:09.015 [2024-07-21 12:03:07.799390] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:09.015 BaseBdev3 00:21:09.015 12:03:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:09.015 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:09.015 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:09.015 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:09.015 
12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:09.015 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:09.015 12:03:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:09.274 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:09.531 [ 00:21:09.531 { 00:21:09.531 "name": "BaseBdev3", 00:21:09.531 "aliases": [ 00:21:09.531 "fbab0a3e-2dd6-409b-b39a-454c36190d82" 00:21:09.531 ], 00:21:09.531 "product_name": "Malloc disk", 00:21:09.531 "block_size": 512, 00:21:09.531 "num_blocks": 65536, 00:21:09.532 "uuid": "fbab0a3e-2dd6-409b-b39a-454c36190d82", 00:21:09.532 "assigned_rate_limits": { 00:21:09.532 "rw_ios_per_sec": 0, 00:21:09.532 "rw_mbytes_per_sec": 0, 00:21:09.532 "r_mbytes_per_sec": 0, 00:21:09.532 "w_mbytes_per_sec": 0 00:21:09.532 }, 00:21:09.532 "claimed": true, 00:21:09.532 "claim_type": "exclusive_write", 00:21:09.532 "zoned": false, 00:21:09.532 "supported_io_types": { 00:21:09.532 "read": true, 00:21:09.532 "write": true, 00:21:09.532 "unmap": true, 00:21:09.532 "write_zeroes": true, 00:21:09.532 "flush": true, 00:21:09.532 "reset": true, 00:21:09.532 "compare": false, 00:21:09.532 "compare_and_write": false, 00:21:09.532 "abort": true, 00:21:09.532 "nvme_admin": false, 00:21:09.532 "nvme_io": false 00:21:09.532 }, 00:21:09.532 "memory_domains": [ 00:21:09.532 { 00:21:09.532 "dma_device_id": "system", 00:21:09.532 "dma_device_type": 1 00:21:09.532 }, 00:21:09.532 { 00:21:09.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:09.532 "dma_device_type": 2 00:21:09.532 } 00:21:09.532 ], 00:21:09.532 "driver_specific": {} 00:21:09.532 } 00:21:09.532 ] 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:09.532 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:09.790 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:09.790 "name": "Existed_Raid", 00:21:09.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.790 "strip_size_kb": 64, 00:21:09.790 "state": "configuring", 00:21:09.790 "raid_level": "raid0", 00:21:09.790 "superblock": false, 00:21:09.790 "num_base_bdevs": 4, 00:21:09.790 "num_base_bdevs_discovered": 3, 00:21:09.790 "num_base_bdevs_operational": 4, 00:21:09.790 "base_bdevs_list": [ 00:21:09.790 { 00:21:09.790 "name": "BaseBdev1", 00:21:09.790 "uuid": "e179ad21-6e80-4af1-8f7f-2640b01541a7", 00:21:09.790 "is_configured": true, 00:21:09.790 "data_offset": 0, 00:21:09.790 "data_size": 65536 00:21:09.790 }, 00:21:09.790 { 00:21:09.790 "name": "BaseBdev2", 00:21:09.790 "uuid": "65ffc59a-5d96-4f52-8f70-499c546ad6d5", 00:21:09.790 "is_configured": true, 00:21:09.790 "data_offset": 0, 00:21:09.790 "data_size": 65536 00:21:09.790 }, 00:21:09.790 { 00:21:09.790 "name": "BaseBdev3", 00:21:09.790 "uuid": "fbab0a3e-2dd6-409b-b39a-454c36190d82", 00:21:09.790 "is_configured": true, 00:21:09.790 "data_offset": 0, 00:21:09.790 "data_size": 65536 00:21:09.790 }, 00:21:09.790 { 00:21:09.790 "name": "BaseBdev4", 00:21:09.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.790 "is_configured": false, 00:21:09.790 "data_offset": 0, 00:21:09.790 "data_size": 0 00:21:09.790 } 00:21:09.790 ] 00:21:09.790 }' 00:21:09.790 12:03:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:09.790 12:03:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:10.357 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:10.616 [2024-07-21 12:03:09.441173] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:10.616 [2024-07-21 12:03:09.441448] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:21:10.616 [2024-07-21 12:03:09.441584] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:10.616 [2024-07-21 12:03:09.441819] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:10.616 [2024-07-21 12:03:09.442408] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:21:10.616 [2024-07-21 12:03:09.442550] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:21:10.616 [2024-07-21 12:03:09.442988] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.616 BaseBdev4 00:21:10.616 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:10.616 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:21:10.616 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:10.616 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:10.616 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:10.616 
12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:10.616 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:10.875 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:11.132 [ 00:21:11.132 { 00:21:11.132 "name": "BaseBdev4", 00:21:11.132 "aliases": [ 00:21:11.132 "75f5cd89-5064-4850-a378-d0b526cc336d" 00:21:11.132 ], 00:21:11.132 "product_name": "Malloc disk", 00:21:11.132 "block_size": 512, 00:21:11.132 "num_blocks": 65536, 00:21:11.132 "uuid": "75f5cd89-5064-4850-a378-d0b526cc336d", 00:21:11.132 "assigned_rate_limits": { 00:21:11.132 "rw_ios_per_sec": 0, 00:21:11.132 "rw_mbytes_per_sec": 0, 00:21:11.132 "r_mbytes_per_sec": 0, 00:21:11.132 "w_mbytes_per_sec": 0 00:21:11.132 }, 00:21:11.132 "claimed": true, 00:21:11.132 "claim_type": "exclusive_write", 00:21:11.132 "zoned": false, 00:21:11.132 "supported_io_types": { 00:21:11.132 "read": true, 00:21:11.132 "write": true, 00:21:11.132 "unmap": true, 00:21:11.132 "write_zeroes": true, 00:21:11.132 "flush": true, 00:21:11.132 "reset": true, 00:21:11.132 "compare": false, 00:21:11.132 "compare_and_write": false, 00:21:11.132 "abort": true, 00:21:11.132 "nvme_admin": false, 00:21:11.132 "nvme_io": false 00:21:11.132 }, 00:21:11.132 "memory_domains": [ 00:21:11.132 { 00:21:11.132 "dma_device_id": "system", 00:21:11.132 "dma_device_type": 1 00:21:11.132 }, 00:21:11.132 { 00:21:11.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:11.132 "dma_device_type": 2 00:21:11.132 } 00:21:11.132 ], 00:21:11.132 "driver_specific": {} 00:21:11.132 } 00:21:11.132 ] 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.132 12:03:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:11.390 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:11.390 "name": "Existed_Raid", 00:21:11.390 "uuid": "2c22695c-88d3-43c4-a6cf-ebed56b09380", 00:21:11.390 "strip_size_kb": 64, 00:21:11.390 "state": "online", 00:21:11.390 "raid_level": "raid0", 00:21:11.390 "superblock": false, 00:21:11.390 "num_base_bdevs": 4, 00:21:11.390 "num_base_bdevs_discovered": 4, 00:21:11.390 "num_base_bdevs_operational": 4, 00:21:11.390 "base_bdevs_list": [ 00:21:11.390 { 00:21:11.390 "name": "BaseBdev1", 00:21:11.390 "uuid": "e179ad21-6e80-4af1-8f7f-2640b01541a7", 00:21:11.390 "is_configured": true, 00:21:11.390 "data_offset": 0, 00:21:11.390 "data_size": 65536 00:21:11.390 }, 00:21:11.390 { 00:21:11.390 "name": "BaseBdev2", 00:21:11.390 "uuid": "65ffc59a-5d96-4f52-8f70-499c546ad6d5", 00:21:11.390 "is_configured": true, 00:21:11.390 "data_offset": 0, 00:21:11.390 "data_size": 65536 00:21:11.390 }, 00:21:11.390 { 00:21:11.390 "name": "BaseBdev3", 00:21:11.390 "uuid": "fbab0a3e-2dd6-409b-b39a-454c36190d82", 00:21:11.390 "is_configured": true, 00:21:11.390 "data_offset": 0, 00:21:11.390 "data_size": 65536 00:21:11.390 }, 00:21:11.390 { 00:21:11.390 "name": "BaseBdev4", 00:21:11.390 "uuid": "75f5cd89-5064-4850-a378-d0b526cc336d", 00:21:11.390 "is_configured": true, 00:21:11.390 "data_offset": 0, 00:21:11.390 "data_size": 65536 00:21:11.390 } 00:21:11.390 ] 00:21:11.390 }' 00:21:11.390 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:11.390 12:03:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:11.955 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:11.955 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:11.955 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:11.955 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:11.955 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:11.955 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:11.955 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:11.955 12:03:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:12.213 [2024-07-21 12:03:10.997885] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:12.213 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:12.213 "name": "Existed_Raid", 00:21:12.213 "aliases": [ 00:21:12.213 "2c22695c-88d3-43c4-a6cf-ebed56b09380" 00:21:12.213 ], 00:21:12.213 "product_name": "Raid Volume", 00:21:12.213 "block_size": 512, 00:21:12.213 "num_blocks": 262144, 00:21:12.213 "uuid": "2c22695c-88d3-43c4-a6cf-ebed56b09380", 00:21:12.213 "assigned_rate_limits": { 00:21:12.213 "rw_ios_per_sec": 0, 00:21:12.213 "rw_mbytes_per_sec": 0, 00:21:12.213 "r_mbytes_per_sec": 0, 00:21:12.213 "w_mbytes_per_sec": 0 00:21:12.213 }, 00:21:12.213 "claimed": false, 00:21:12.213 "zoned": false, 00:21:12.213 "supported_io_types": { 00:21:12.213 "read": true, 00:21:12.213 "write": true, 00:21:12.213 
"unmap": true, 00:21:12.213 "write_zeroes": true, 00:21:12.213 "flush": true, 00:21:12.213 "reset": true, 00:21:12.213 "compare": false, 00:21:12.213 "compare_and_write": false, 00:21:12.213 "abort": false, 00:21:12.213 "nvme_admin": false, 00:21:12.213 "nvme_io": false 00:21:12.213 }, 00:21:12.213 "memory_domains": [ 00:21:12.213 { 00:21:12.213 "dma_device_id": "system", 00:21:12.213 "dma_device_type": 1 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.213 "dma_device_type": 2 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "dma_device_id": "system", 00:21:12.213 "dma_device_type": 1 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.213 "dma_device_type": 2 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "dma_device_id": "system", 00:21:12.213 "dma_device_type": 1 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.213 "dma_device_type": 2 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "dma_device_id": "system", 00:21:12.213 "dma_device_type": 1 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.213 "dma_device_type": 2 00:21:12.213 } 00:21:12.213 ], 00:21:12.213 "driver_specific": { 00:21:12.213 "raid": { 00:21:12.213 "uuid": "2c22695c-88d3-43c4-a6cf-ebed56b09380", 00:21:12.213 "strip_size_kb": 64, 00:21:12.213 "state": "online", 00:21:12.213 "raid_level": "raid0", 00:21:12.213 "superblock": false, 00:21:12.213 "num_base_bdevs": 4, 00:21:12.213 "num_base_bdevs_discovered": 4, 00:21:12.213 "num_base_bdevs_operational": 4, 00:21:12.213 "base_bdevs_list": [ 00:21:12.213 { 00:21:12.213 "name": "BaseBdev1", 00:21:12.213 "uuid": "e179ad21-6e80-4af1-8f7f-2640b01541a7", 00:21:12.213 "is_configured": true, 00:21:12.213 "data_offset": 0, 00:21:12.213 "data_size": 65536 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "name": "BaseBdev2", 00:21:12.213 "uuid": "65ffc59a-5d96-4f52-8f70-499c546ad6d5", 00:21:12.213 "is_configured": true, 00:21:12.213 "data_offset": 0, 00:21:12.213 "data_size": 65536 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "name": "BaseBdev3", 00:21:12.213 "uuid": "fbab0a3e-2dd6-409b-b39a-454c36190d82", 00:21:12.213 "is_configured": true, 00:21:12.213 "data_offset": 0, 00:21:12.213 "data_size": 65536 00:21:12.213 }, 00:21:12.213 { 00:21:12.213 "name": "BaseBdev4", 00:21:12.213 "uuid": "75f5cd89-5064-4850-a378-d0b526cc336d", 00:21:12.213 "is_configured": true, 00:21:12.213 "data_offset": 0, 00:21:12.213 "data_size": 65536 00:21:12.213 } 00:21:12.213 ] 00:21:12.213 } 00:21:12.213 } 00:21:12.213 }' 00:21:12.213 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:12.213 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:12.213 BaseBdev2 00:21:12.213 BaseBdev3 00:21:12.213 BaseBdev4' 00:21:12.213 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.213 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:12.213 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:12.471 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:12.471 "name": "BaseBdev1", 00:21:12.471 "aliases": [ 00:21:12.471 
"e179ad21-6e80-4af1-8f7f-2640b01541a7" 00:21:12.471 ], 00:21:12.471 "product_name": "Malloc disk", 00:21:12.471 "block_size": 512, 00:21:12.471 "num_blocks": 65536, 00:21:12.471 "uuid": "e179ad21-6e80-4af1-8f7f-2640b01541a7", 00:21:12.471 "assigned_rate_limits": { 00:21:12.471 "rw_ios_per_sec": 0, 00:21:12.471 "rw_mbytes_per_sec": 0, 00:21:12.471 "r_mbytes_per_sec": 0, 00:21:12.471 "w_mbytes_per_sec": 0 00:21:12.471 }, 00:21:12.471 "claimed": true, 00:21:12.471 "claim_type": "exclusive_write", 00:21:12.471 "zoned": false, 00:21:12.471 "supported_io_types": { 00:21:12.471 "read": true, 00:21:12.471 "write": true, 00:21:12.471 "unmap": true, 00:21:12.471 "write_zeroes": true, 00:21:12.471 "flush": true, 00:21:12.471 "reset": true, 00:21:12.471 "compare": false, 00:21:12.471 "compare_and_write": false, 00:21:12.471 "abort": true, 00:21:12.471 "nvme_admin": false, 00:21:12.471 "nvme_io": false 00:21:12.471 }, 00:21:12.471 "memory_domains": [ 00:21:12.471 { 00:21:12.471 "dma_device_id": "system", 00:21:12.471 "dma_device_type": 1 00:21:12.471 }, 00:21:12.471 { 00:21:12.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:12.471 "dma_device_type": 2 00:21:12.471 } 00:21:12.471 ], 00:21:12.471 "driver_specific": {} 00:21:12.471 }' 00:21:12.471 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.728 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:12.728 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:12.728 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.728 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:12.728 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:12.728 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.728 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:12.986 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:12.986 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.986 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:12.986 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:12.986 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:12.986 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:12.986 12:03:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:13.244 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:13.244 "name": "BaseBdev2", 00:21:13.244 "aliases": [ 00:21:13.244 "65ffc59a-5d96-4f52-8f70-499c546ad6d5" 00:21:13.244 ], 00:21:13.244 "product_name": "Malloc disk", 00:21:13.244 "block_size": 512, 00:21:13.244 "num_blocks": 65536, 00:21:13.244 "uuid": "65ffc59a-5d96-4f52-8f70-499c546ad6d5", 00:21:13.244 "assigned_rate_limits": { 00:21:13.244 "rw_ios_per_sec": 0, 00:21:13.244 "rw_mbytes_per_sec": 0, 00:21:13.244 "r_mbytes_per_sec": 0, 00:21:13.244 "w_mbytes_per_sec": 0 00:21:13.244 }, 00:21:13.244 "claimed": true, 00:21:13.244 "claim_type": "exclusive_write", 
00:21:13.244 "zoned": false, 00:21:13.244 "supported_io_types": { 00:21:13.244 "read": true, 00:21:13.244 "write": true, 00:21:13.244 "unmap": true, 00:21:13.244 "write_zeroes": true, 00:21:13.244 "flush": true, 00:21:13.244 "reset": true, 00:21:13.244 "compare": false, 00:21:13.244 "compare_and_write": false, 00:21:13.244 "abort": true, 00:21:13.244 "nvme_admin": false, 00:21:13.244 "nvme_io": false 00:21:13.244 }, 00:21:13.244 "memory_domains": [ 00:21:13.244 { 00:21:13.244 "dma_device_id": "system", 00:21:13.244 "dma_device_type": 1 00:21:13.244 }, 00:21:13.244 { 00:21:13.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:13.244 "dma_device_type": 2 00:21:13.244 } 00:21:13.244 ], 00:21:13.244 "driver_specific": {} 00:21:13.244 }' 00:21:13.244 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.244 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:13.502 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:13.502 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.502 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:13.502 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:13.502 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.502 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:13.502 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:13.502 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.502 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:13.760 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:13.760 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:13.760 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:13.760 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:14.018 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:14.018 "name": "BaseBdev3", 00:21:14.018 "aliases": [ 00:21:14.018 "fbab0a3e-2dd6-409b-b39a-454c36190d82" 00:21:14.018 ], 00:21:14.018 "product_name": "Malloc disk", 00:21:14.018 "block_size": 512, 00:21:14.018 "num_blocks": 65536, 00:21:14.018 "uuid": "fbab0a3e-2dd6-409b-b39a-454c36190d82", 00:21:14.018 "assigned_rate_limits": { 00:21:14.018 "rw_ios_per_sec": 0, 00:21:14.018 "rw_mbytes_per_sec": 0, 00:21:14.018 "r_mbytes_per_sec": 0, 00:21:14.018 "w_mbytes_per_sec": 0 00:21:14.018 }, 00:21:14.018 "claimed": true, 00:21:14.018 "claim_type": "exclusive_write", 00:21:14.018 "zoned": false, 00:21:14.018 "supported_io_types": { 00:21:14.018 "read": true, 00:21:14.018 "write": true, 00:21:14.018 "unmap": true, 00:21:14.018 "write_zeroes": true, 00:21:14.018 "flush": true, 00:21:14.018 "reset": true, 00:21:14.018 "compare": false, 00:21:14.018 "compare_and_write": false, 00:21:14.018 "abort": true, 00:21:14.018 "nvme_admin": false, 00:21:14.018 "nvme_io": false 00:21:14.018 }, 00:21:14.018 "memory_domains": [ 00:21:14.018 { 00:21:14.018 "dma_device_id": 
"system", 00:21:14.018 "dma_device_type": 1 00:21:14.018 }, 00:21:14.018 { 00:21:14.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.018 "dma_device_type": 2 00:21:14.018 } 00:21:14.018 ], 00:21:14.018 "driver_specific": {} 00:21:14.018 }' 00:21:14.018 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.018 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.018 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:14.018 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.018 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.018 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:14.287 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.287 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.287 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:14.287 12:03:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.287 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.287 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:14.287 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:14.287 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:14.287 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:14.549 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:14.549 "name": "BaseBdev4", 00:21:14.549 "aliases": [ 00:21:14.549 "75f5cd89-5064-4850-a378-d0b526cc336d" 00:21:14.549 ], 00:21:14.549 "product_name": "Malloc disk", 00:21:14.549 "block_size": 512, 00:21:14.549 "num_blocks": 65536, 00:21:14.549 "uuid": "75f5cd89-5064-4850-a378-d0b526cc336d", 00:21:14.549 "assigned_rate_limits": { 00:21:14.549 "rw_ios_per_sec": 0, 00:21:14.549 "rw_mbytes_per_sec": 0, 00:21:14.549 "r_mbytes_per_sec": 0, 00:21:14.549 "w_mbytes_per_sec": 0 00:21:14.549 }, 00:21:14.549 "claimed": true, 00:21:14.549 "claim_type": "exclusive_write", 00:21:14.549 "zoned": false, 00:21:14.549 "supported_io_types": { 00:21:14.549 "read": true, 00:21:14.549 "write": true, 00:21:14.549 "unmap": true, 00:21:14.549 "write_zeroes": true, 00:21:14.549 "flush": true, 00:21:14.549 "reset": true, 00:21:14.549 "compare": false, 00:21:14.549 "compare_and_write": false, 00:21:14.549 "abort": true, 00:21:14.549 "nvme_admin": false, 00:21:14.549 "nvme_io": false 00:21:14.549 }, 00:21:14.549 "memory_domains": [ 00:21:14.549 { 00:21:14.549 "dma_device_id": "system", 00:21:14.549 "dma_device_type": 1 00:21:14.549 }, 00:21:14.549 { 00:21:14.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:14.549 "dma_device_type": 2 00:21:14.549 } 00:21:14.549 ], 00:21:14.549 "driver_specific": {} 00:21:14.549 }' 00:21:14.549 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.549 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:14.549 12:03:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:14.549 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.818 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:14.818 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:14.818 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.818 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:14.818 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:14.818 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:14.818 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:15.077 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:15.077 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:15.334 [2024-07-21 12:03:13.962429] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:15.334 [2024-07-21 12:03:13.962858] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:15.334 [2024-07-21 12:03:13.963080] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.335 12:03:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:15.592 
12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:15.592 "name": "Existed_Raid", 00:21:15.592 "uuid": "2c22695c-88d3-43c4-a6cf-ebed56b09380", 00:21:15.592 "strip_size_kb": 64, 00:21:15.592 "state": "offline", 00:21:15.592 "raid_level": "raid0", 00:21:15.592 "superblock": false, 00:21:15.592 "num_base_bdevs": 4, 00:21:15.592 "num_base_bdevs_discovered": 3, 00:21:15.592 "num_base_bdevs_operational": 3, 00:21:15.592 "base_bdevs_list": [ 00:21:15.592 { 00:21:15.592 "name": null, 00:21:15.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.592 "is_configured": false, 00:21:15.592 "data_offset": 0, 00:21:15.592 "data_size": 65536 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "name": "BaseBdev2", 00:21:15.592 "uuid": "65ffc59a-5d96-4f52-8f70-499c546ad6d5", 00:21:15.592 "is_configured": true, 00:21:15.592 "data_offset": 0, 00:21:15.592 "data_size": 65536 00:21:15.592 }, 00:21:15.592 { 00:21:15.592 "name": "BaseBdev3", 00:21:15.592 "uuid": "fbab0a3e-2dd6-409b-b39a-454c36190d82", 00:21:15.593 "is_configured": true, 00:21:15.593 "data_offset": 0, 00:21:15.593 "data_size": 65536 00:21:15.593 }, 00:21:15.593 { 00:21:15.593 "name": "BaseBdev4", 00:21:15.593 "uuid": "75f5cd89-5064-4850-a378-d0b526cc336d", 00:21:15.593 "is_configured": true, 00:21:15.593 "data_offset": 0, 00:21:15.593 "data_size": 65536 00:21:15.593 } 00:21:15.593 ] 00:21:15.593 }' 00:21:15.593 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:15.593 12:03:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:16.158 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:16.158 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:16.158 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.158 12:03:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:16.416 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:16.416 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.416 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:16.674 [2024-07-21 12:03:15.443217] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:16.674 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:16.674 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:16.674 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.674 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:16.932 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:16.932 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:16.932 12:03:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:21:17.189 [2024-07-21 12:03:16.008781] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:17.189 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:17.189 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:17.189 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.189 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:17.446 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:17.446 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:17.446 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:17.705 [2024-07-21 12:03:16.561310] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:17.705 [2024-07-21 12:03:16.561569] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:21:17.962 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:17.962 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:17.962 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.962 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:18.220 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:18.220 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:18.220 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:18.220 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:18.220 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:18.220 12:03:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:18.479 BaseBdev2 00:21:18.479 12:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:21:18.479 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:18.479 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:18.479 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:18.479 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:18.479 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:18.479 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:18.737 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:18.995 [ 00:21:18.995 { 00:21:18.995 "name": "BaseBdev2", 00:21:18.995 "aliases": [ 00:21:18.995 "b19ae253-94a1-479d-9d8f-dee9fe190151" 00:21:18.995 ], 00:21:18.995 "product_name": "Malloc disk", 00:21:18.995 "block_size": 512, 00:21:18.995 "num_blocks": 65536, 00:21:18.995 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:18.995 "assigned_rate_limits": { 00:21:18.995 "rw_ios_per_sec": 0, 00:21:18.995 "rw_mbytes_per_sec": 0, 00:21:18.995 "r_mbytes_per_sec": 0, 00:21:18.995 "w_mbytes_per_sec": 0 00:21:18.995 }, 00:21:18.995 "claimed": false, 00:21:18.995 "zoned": false, 00:21:18.995 "supported_io_types": { 00:21:18.995 "read": true, 00:21:18.995 "write": true, 00:21:18.995 "unmap": true, 00:21:18.995 "write_zeroes": true, 00:21:18.995 "flush": true, 00:21:18.995 "reset": true, 00:21:18.995 "compare": false, 00:21:18.995 "compare_and_write": false, 00:21:18.995 "abort": true, 00:21:18.995 "nvme_admin": false, 00:21:18.995 "nvme_io": false 00:21:18.995 }, 00:21:18.995 "memory_domains": [ 00:21:18.995 { 00:21:18.995 "dma_device_id": "system", 00:21:18.995 "dma_device_type": 1 00:21:18.995 }, 00:21:18.995 { 00:21:18.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:18.995 "dma_device_type": 2 00:21:18.995 } 00:21:18.995 ], 00:21:18.995 "driver_specific": {} 00:21:18.995 } 00:21:18.995 ] 00:21:18.995 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:18.995 12:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:18.995 12:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:18.995 12:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:19.299 BaseBdev3 00:21:19.299 12:03:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:19.299 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:19.299 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:19.299 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:19.299 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:19.299 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:19.299 12:03:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:19.565 12:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:19.565 [ 00:21:19.565 { 00:21:19.565 "name": "BaseBdev3", 00:21:19.565 "aliases": [ 00:21:19.565 "5d4eff93-21a8-45eb-a134-c506b1c719c2" 00:21:19.565 ], 00:21:19.565 "product_name": "Malloc disk", 00:21:19.565 "block_size": 512, 00:21:19.565 "num_blocks": 65536, 00:21:19.565 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:19.565 "assigned_rate_limits": { 00:21:19.565 "rw_ios_per_sec": 0, 00:21:19.565 "rw_mbytes_per_sec": 0, 00:21:19.565 "r_mbytes_per_sec": 0, 00:21:19.565 "w_mbytes_per_sec": 0 00:21:19.565 }, 00:21:19.565 
"claimed": false, 00:21:19.565 "zoned": false, 00:21:19.565 "supported_io_types": { 00:21:19.565 "read": true, 00:21:19.565 "write": true, 00:21:19.565 "unmap": true, 00:21:19.565 "write_zeroes": true, 00:21:19.565 "flush": true, 00:21:19.565 "reset": true, 00:21:19.565 "compare": false, 00:21:19.565 "compare_and_write": false, 00:21:19.565 "abort": true, 00:21:19.565 "nvme_admin": false, 00:21:19.565 "nvme_io": false 00:21:19.565 }, 00:21:19.565 "memory_domains": [ 00:21:19.565 { 00:21:19.565 "dma_device_id": "system", 00:21:19.565 "dma_device_type": 1 00:21:19.565 }, 00:21:19.565 { 00:21:19.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:19.565 "dma_device_type": 2 00:21:19.565 } 00:21:19.565 ], 00:21:19.565 "driver_specific": {} 00:21:19.565 } 00:21:19.565 ] 00:21:19.565 12:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:19.565 12:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:19.565 12:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:19.565 12:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:19.824 BaseBdev4 00:21:19.824 12:03:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:19.824 12:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:21:19.824 12:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:19.824 12:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:19.824 12:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:19.824 12:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:19.824 12:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:20.083 12:03:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:20.342 [ 00:21:20.342 { 00:21:20.342 "name": "BaseBdev4", 00:21:20.342 "aliases": [ 00:21:20.342 "d8150068-de1e-4495-943c-d5d5f507db12" 00:21:20.342 ], 00:21:20.342 "product_name": "Malloc disk", 00:21:20.342 "block_size": 512, 00:21:20.342 "num_blocks": 65536, 00:21:20.342 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:20.342 "assigned_rate_limits": { 00:21:20.342 "rw_ios_per_sec": 0, 00:21:20.342 "rw_mbytes_per_sec": 0, 00:21:20.342 "r_mbytes_per_sec": 0, 00:21:20.342 "w_mbytes_per_sec": 0 00:21:20.342 }, 00:21:20.342 "claimed": false, 00:21:20.342 "zoned": false, 00:21:20.342 "supported_io_types": { 00:21:20.342 "read": true, 00:21:20.342 "write": true, 00:21:20.342 "unmap": true, 00:21:20.342 "write_zeroes": true, 00:21:20.342 "flush": true, 00:21:20.342 "reset": true, 00:21:20.342 "compare": false, 00:21:20.342 "compare_and_write": false, 00:21:20.342 "abort": true, 00:21:20.342 "nvme_admin": false, 00:21:20.342 "nvme_io": false 00:21:20.342 }, 00:21:20.342 "memory_domains": [ 00:21:20.342 { 00:21:20.342 "dma_device_id": "system", 00:21:20.342 "dma_device_type": 1 00:21:20.342 }, 00:21:20.342 { 00:21:20.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:21:20.342 "dma_device_type": 2 00:21:20.342 } 00:21:20.342 ], 00:21:20.342 "driver_specific": {} 00:21:20.342 } 00:21:20.342 ] 00:21:20.342 12:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:20.342 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:20.342 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:20.342 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:20.600 [2024-07-21 12:03:19.348942] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:20.600 [2024-07-21 12:03:19.349381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:20.600 [2024-07-21 12:03:19.349536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:20.600 [2024-07-21 12:03:19.351767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:20.600 [2024-07-21 12:03:19.351980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.600 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:20.859 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:20.859 "name": "Existed_Raid", 00:21:20.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.859 "strip_size_kb": 64, 00:21:20.859 "state": "configuring", 00:21:20.859 "raid_level": "raid0", 00:21:20.859 "superblock": false, 00:21:20.859 "num_base_bdevs": 4, 00:21:20.859 "num_base_bdevs_discovered": 3, 00:21:20.859 "num_base_bdevs_operational": 4, 00:21:20.859 "base_bdevs_list": [ 00:21:20.859 { 00:21:20.859 "name": "BaseBdev1", 00:21:20.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:20.859 "is_configured": false, 00:21:20.859 "data_offset": 0, 00:21:20.859 "data_size": 0 00:21:20.859 }, 00:21:20.859 { 
00:21:20.859 "name": "BaseBdev2", 00:21:20.859 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:20.859 "is_configured": true, 00:21:20.859 "data_offset": 0, 00:21:20.859 "data_size": 65536 00:21:20.859 }, 00:21:20.859 { 00:21:20.859 "name": "BaseBdev3", 00:21:20.859 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:20.859 "is_configured": true, 00:21:20.859 "data_offset": 0, 00:21:20.859 "data_size": 65536 00:21:20.859 }, 00:21:20.859 { 00:21:20.859 "name": "BaseBdev4", 00:21:20.859 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:20.859 "is_configured": true, 00:21:20.859 "data_offset": 0, 00:21:20.859 "data_size": 65536 00:21:20.859 } 00:21:20.859 ] 00:21:20.859 }' 00:21:20.859 12:03:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:20.859 12:03:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:21.424 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:21.681 [2024-07-21 12:03:20.469157] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:21.681 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.938 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:21.938 "name": "Existed_Raid", 00:21:21.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.938 "strip_size_kb": 64, 00:21:21.938 "state": "configuring", 00:21:21.938 "raid_level": "raid0", 00:21:21.938 "superblock": false, 00:21:21.938 "num_base_bdevs": 4, 00:21:21.938 "num_base_bdevs_discovered": 2, 00:21:21.938 "num_base_bdevs_operational": 4, 00:21:21.938 "base_bdevs_list": [ 00:21:21.938 { 00:21:21.938 "name": "BaseBdev1", 00:21:21.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.938 "is_configured": false, 00:21:21.938 "data_offset": 0, 00:21:21.939 "data_size": 0 00:21:21.939 }, 00:21:21.939 { 00:21:21.939 "name": null, 00:21:21.939 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:21.939 "is_configured": false, 00:21:21.939 
"data_offset": 0, 00:21:21.939 "data_size": 65536 00:21:21.939 }, 00:21:21.939 { 00:21:21.939 "name": "BaseBdev3", 00:21:21.939 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:21.939 "is_configured": true, 00:21:21.939 "data_offset": 0, 00:21:21.939 "data_size": 65536 00:21:21.939 }, 00:21:21.939 { 00:21:21.939 "name": "BaseBdev4", 00:21:21.939 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:21.939 "is_configured": true, 00:21:21.939 "data_offset": 0, 00:21:21.939 "data_size": 65536 00:21:21.939 } 00:21:21.939 ] 00:21:21.939 }' 00:21:21.939 12:03:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:21.939 12:03:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:22.871 12:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.871 12:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:22.871 12:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:22.871 12:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:23.129 [2024-07-21 12:03:21.952197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:23.129 BaseBdev1 00:21:23.129 12:03:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:23.129 12:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:23.129 12:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:23.129 12:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:21:23.129 12:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:23.129 12:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:23.129 12:03:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:23.386 12:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:23.643 [ 00:21:23.644 { 00:21:23.644 "name": "BaseBdev1", 00:21:23.644 "aliases": [ 00:21:23.644 "59212243-456b-4f51-aaa7-51e1c1c54c5e" 00:21:23.644 ], 00:21:23.644 "product_name": "Malloc disk", 00:21:23.644 "block_size": 512, 00:21:23.644 "num_blocks": 65536, 00:21:23.644 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:23.644 "assigned_rate_limits": { 00:21:23.644 "rw_ios_per_sec": 0, 00:21:23.644 "rw_mbytes_per_sec": 0, 00:21:23.644 "r_mbytes_per_sec": 0, 00:21:23.644 "w_mbytes_per_sec": 0 00:21:23.644 }, 00:21:23.644 "claimed": true, 00:21:23.644 "claim_type": "exclusive_write", 00:21:23.644 "zoned": false, 00:21:23.644 "supported_io_types": { 00:21:23.644 "read": true, 00:21:23.644 "write": true, 00:21:23.644 "unmap": true, 00:21:23.644 "write_zeroes": true, 00:21:23.644 "flush": true, 00:21:23.644 "reset": true, 00:21:23.644 "compare": false, 00:21:23.644 "compare_and_write": false, 00:21:23.644 "abort": true, 00:21:23.644 "nvme_admin": false, 
00:21:23.644 "nvme_io": false 00:21:23.644 }, 00:21:23.644 "memory_domains": [ 00:21:23.644 { 00:21:23.644 "dma_device_id": "system", 00:21:23.644 "dma_device_type": 1 00:21:23.644 }, 00:21:23.644 { 00:21:23.644 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:23.644 "dma_device_type": 2 00:21:23.644 } 00:21:23.644 ], 00:21:23.644 "driver_specific": {} 00:21:23.644 } 00:21:23.644 ] 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:23.644 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:23.901 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:23.901 "name": "Existed_Raid", 00:21:23.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:23.901 "strip_size_kb": 64, 00:21:23.901 "state": "configuring", 00:21:23.901 "raid_level": "raid0", 00:21:23.901 "superblock": false, 00:21:23.901 "num_base_bdevs": 4, 00:21:23.901 "num_base_bdevs_discovered": 3, 00:21:23.901 "num_base_bdevs_operational": 4, 00:21:23.901 "base_bdevs_list": [ 00:21:23.901 { 00:21:23.901 "name": "BaseBdev1", 00:21:23.901 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:23.901 "is_configured": true, 00:21:23.901 "data_offset": 0, 00:21:23.901 "data_size": 65536 00:21:23.901 }, 00:21:23.901 { 00:21:23.901 "name": null, 00:21:23.901 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:23.901 "is_configured": false, 00:21:23.901 "data_offset": 0, 00:21:23.901 "data_size": 65536 00:21:23.901 }, 00:21:23.901 { 00:21:23.901 "name": "BaseBdev3", 00:21:23.901 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:23.901 "is_configured": true, 00:21:23.901 "data_offset": 0, 00:21:23.901 "data_size": 65536 00:21:23.901 }, 00:21:23.901 { 00:21:23.901 "name": "BaseBdev4", 00:21:23.901 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:23.901 "is_configured": true, 00:21:23.901 "data_offset": 0, 00:21:23.901 "data_size": 65536 00:21:23.901 } 00:21:23.901 ] 00:21:23.901 }' 00:21:23.901 12:03:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:23.901 12:03:22 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.465 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.465 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:25.028 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:25.028 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:25.028 [2024-07-21 12:03:23.863305] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.029 12:03:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:25.286 12:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:25.286 "name": "Existed_Raid", 00:21:25.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:25.286 "strip_size_kb": 64, 00:21:25.286 "state": "configuring", 00:21:25.286 "raid_level": "raid0", 00:21:25.286 "superblock": false, 00:21:25.286 "num_base_bdevs": 4, 00:21:25.286 "num_base_bdevs_discovered": 2, 00:21:25.286 "num_base_bdevs_operational": 4, 00:21:25.286 "base_bdevs_list": [ 00:21:25.286 { 00:21:25.286 "name": "BaseBdev1", 00:21:25.286 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:25.286 "is_configured": true, 00:21:25.286 "data_offset": 0, 00:21:25.286 "data_size": 65536 00:21:25.286 }, 00:21:25.286 { 00:21:25.286 "name": null, 00:21:25.286 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:25.286 "is_configured": false, 00:21:25.286 "data_offset": 0, 00:21:25.286 "data_size": 65536 00:21:25.286 }, 00:21:25.286 { 00:21:25.286 "name": null, 00:21:25.286 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:25.286 "is_configured": false, 00:21:25.286 "data_offset": 0, 00:21:25.286 "data_size": 65536 00:21:25.286 }, 00:21:25.286 { 00:21:25.286 "name": "BaseBdev4", 00:21:25.286 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:25.286 "is_configured": true, 
00:21:25.286 "data_offset": 0, 00:21:25.286 "data_size": 65536 00:21:25.286 } 00:21:25.286 ] 00:21:25.286 }' 00:21:25.286 12:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:25.286 12:03:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:26.217 12:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.217 12:03:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:26.217 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:21:26.217 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:21:26.474 [2024-07-21 12:03:25.251666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:26.475 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.732 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:26.732 "name": "Existed_Raid", 00:21:26.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.732 "strip_size_kb": 64, 00:21:26.732 "state": "configuring", 00:21:26.732 "raid_level": "raid0", 00:21:26.732 "superblock": false, 00:21:26.732 "num_base_bdevs": 4, 00:21:26.732 "num_base_bdevs_discovered": 3, 00:21:26.732 "num_base_bdevs_operational": 4, 00:21:26.732 "base_bdevs_list": [ 00:21:26.732 { 00:21:26.732 "name": "BaseBdev1", 00:21:26.732 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:26.732 "is_configured": true, 00:21:26.732 "data_offset": 0, 00:21:26.732 "data_size": 65536 00:21:26.732 }, 00:21:26.732 { 00:21:26.732 "name": null, 00:21:26.732 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:26.732 "is_configured": false, 00:21:26.732 "data_offset": 0, 00:21:26.732 "data_size": 65536 00:21:26.732 }, 00:21:26.732 { 00:21:26.732 "name": "BaseBdev3", 00:21:26.732 
"uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:26.732 "is_configured": true, 00:21:26.732 "data_offset": 0, 00:21:26.732 "data_size": 65536 00:21:26.732 }, 00:21:26.732 { 00:21:26.732 "name": "BaseBdev4", 00:21:26.732 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:26.732 "is_configured": true, 00:21:26.732 "data_offset": 0, 00:21:26.732 "data_size": 65536 00:21:26.732 } 00:21:26.732 ] 00:21:26.732 }' 00:21:26.732 12:03:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:26.732 12:03:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:27.665 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.665 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:21:27.665 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:21:27.665 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:27.922 [2024-07-21 12:03:26.688039] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:27.922 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.181 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:28.181 "name": "Existed_Raid", 00:21:28.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:28.181 "strip_size_kb": 64, 00:21:28.181 "state": "configuring", 00:21:28.181 "raid_level": "raid0", 00:21:28.181 "superblock": false, 00:21:28.181 "num_base_bdevs": 4, 00:21:28.181 "num_base_bdevs_discovered": 2, 00:21:28.181 "num_base_bdevs_operational": 4, 00:21:28.181 "base_bdevs_list": [ 00:21:28.181 { 00:21:28.181 "name": null, 00:21:28.181 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:28.181 "is_configured": false, 00:21:28.181 "data_offset": 0, 00:21:28.181 "data_size": 65536 00:21:28.181 }, 00:21:28.181 { 
00:21:28.181 "name": null, 00:21:28.181 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:28.181 "is_configured": false, 00:21:28.181 "data_offset": 0, 00:21:28.181 "data_size": 65536 00:21:28.181 }, 00:21:28.181 { 00:21:28.181 "name": "BaseBdev3", 00:21:28.181 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:28.181 "is_configured": true, 00:21:28.181 "data_offset": 0, 00:21:28.181 "data_size": 65536 00:21:28.181 }, 00:21:28.181 { 00:21:28.181 "name": "BaseBdev4", 00:21:28.181 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:28.181 "is_configured": true, 00:21:28.181 "data_offset": 0, 00:21:28.181 "data_size": 65536 00:21:28.181 } 00:21:28.181 ] 00:21:28.181 }' 00:21:28.181 12:03:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:28.181 12:03:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:29.115 12:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.115 12:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:29.115 12:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:21:29.115 12:03:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:21:29.373 [2024-07-21 12:03:28.087604] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.373 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:29.636 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:29.636 "name": "Existed_Raid", 00:21:29.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.636 "strip_size_kb": 64, 00:21:29.636 "state": "configuring", 00:21:29.636 "raid_level": "raid0", 00:21:29.636 "superblock": false, 00:21:29.636 "num_base_bdevs": 4, 00:21:29.636 "num_base_bdevs_discovered": 3, 00:21:29.636 
"num_base_bdevs_operational": 4, 00:21:29.636 "base_bdevs_list": [ 00:21:29.636 { 00:21:29.636 "name": null, 00:21:29.636 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:29.636 "is_configured": false, 00:21:29.636 "data_offset": 0, 00:21:29.636 "data_size": 65536 00:21:29.636 }, 00:21:29.636 { 00:21:29.636 "name": "BaseBdev2", 00:21:29.636 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:29.636 "is_configured": true, 00:21:29.636 "data_offset": 0, 00:21:29.636 "data_size": 65536 00:21:29.636 }, 00:21:29.636 { 00:21:29.636 "name": "BaseBdev3", 00:21:29.636 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:29.636 "is_configured": true, 00:21:29.636 "data_offset": 0, 00:21:29.636 "data_size": 65536 00:21:29.636 }, 00:21:29.636 { 00:21:29.636 "name": "BaseBdev4", 00:21:29.636 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:29.636 "is_configured": true, 00:21:29.636 "data_offset": 0, 00:21:29.636 "data_size": 65536 00:21:29.636 } 00:21:29.636 ] 00:21:29.636 }' 00:21:29.636 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:29.636 12:03:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:30.213 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.213 12:03:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:30.471 12:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:21:30.471 12:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:21:30.471 12:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:30.730 12:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 59212243-456b-4f51-aaa7-51e1c1c54c5e 00:21:30.988 [2024-07-21 12:03:29.789815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:21:30.988 [2024-07-21 12:03:29.790160] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:30.988 [2024-07-21 12:03:29.790213] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:21:30.988 [2024-07-21 12:03:29.790431] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:21:30.988 [2024-07-21 12:03:29.790860] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:30.988 [2024-07-21 12:03:29.791043] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009080 00:21:30.988 [2024-07-21 12:03:29.791414] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:30.988 NewBaseBdev 00:21:30.988 12:03:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:21:30.988 12:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:21:30.988 12:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:30.989 12:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 
00:21:30.989 12:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:30.989 12:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:30.989 12:03:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:31.246 12:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:21:31.504 [ 00:21:31.504 { 00:21:31.504 "name": "NewBaseBdev", 00:21:31.504 "aliases": [ 00:21:31.504 "59212243-456b-4f51-aaa7-51e1c1c54c5e" 00:21:31.504 ], 00:21:31.504 "product_name": "Malloc disk", 00:21:31.504 "block_size": 512, 00:21:31.504 "num_blocks": 65536, 00:21:31.504 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:31.504 "assigned_rate_limits": { 00:21:31.504 "rw_ios_per_sec": 0, 00:21:31.504 "rw_mbytes_per_sec": 0, 00:21:31.504 "r_mbytes_per_sec": 0, 00:21:31.504 "w_mbytes_per_sec": 0 00:21:31.504 }, 00:21:31.504 "claimed": true, 00:21:31.504 "claim_type": "exclusive_write", 00:21:31.504 "zoned": false, 00:21:31.504 "supported_io_types": { 00:21:31.504 "read": true, 00:21:31.504 "write": true, 00:21:31.504 "unmap": true, 00:21:31.504 "write_zeroes": true, 00:21:31.504 "flush": true, 00:21:31.504 "reset": true, 00:21:31.504 "compare": false, 00:21:31.504 "compare_and_write": false, 00:21:31.504 "abort": true, 00:21:31.504 "nvme_admin": false, 00:21:31.504 "nvme_io": false 00:21:31.504 }, 00:21:31.504 "memory_domains": [ 00:21:31.504 { 00:21:31.504 "dma_device_id": "system", 00:21:31.504 "dma_device_type": 1 00:21:31.504 }, 00:21:31.504 { 00:21:31.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:31.504 "dma_device_type": 2 00:21:31.504 } 00:21:31.504 ], 00:21:31.504 "driver_specific": {} 00:21:31.504 } 00:21:31.504 ] 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.504 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:31.763 
12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:31.763 "name": "Existed_Raid", 00:21:31.763 "uuid": "19fdffe8-b4f3-4be0-815f-a85ed1ee42f2", 00:21:31.763 "strip_size_kb": 64, 00:21:31.763 "state": "online", 00:21:31.763 "raid_level": "raid0", 00:21:31.763 "superblock": false, 00:21:31.763 "num_base_bdevs": 4, 00:21:31.763 "num_base_bdevs_discovered": 4, 00:21:31.763 "num_base_bdevs_operational": 4, 00:21:31.763 "base_bdevs_list": [ 00:21:31.763 { 00:21:31.763 "name": "NewBaseBdev", 00:21:31.763 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:31.763 "is_configured": true, 00:21:31.763 "data_offset": 0, 00:21:31.763 "data_size": 65536 00:21:31.763 }, 00:21:31.763 { 00:21:31.763 "name": "BaseBdev2", 00:21:31.763 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:31.763 "is_configured": true, 00:21:31.763 "data_offset": 0, 00:21:31.763 "data_size": 65536 00:21:31.763 }, 00:21:31.763 { 00:21:31.763 "name": "BaseBdev3", 00:21:31.763 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:31.763 "is_configured": true, 00:21:31.763 "data_offset": 0, 00:21:31.763 "data_size": 65536 00:21:31.763 }, 00:21:31.763 { 00:21:31.763 "name": "BaseBdev4", 00:21:31.763 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:31.763 "is_configured": true, 00:21:31.763 "data_offset": 0, 00:21:31.763 "data_size": 65536 00:21:31.763 } 00:21:31.763 ] 00:21:31.763 }' 00:21:31.763 12:03:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:31.763 12:03:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:32.699 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:21:32.699 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:32.699 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:32.699 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:32.699 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:32.699 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:21:32.699 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:32.699 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:32.699 [2024-07-21 12:03:31.514571] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:32.699 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:32.699 "name": "Existed_Raid", 00:21:32.699 "aliases": [ 00:21:32.699 "19fdffe8-b4f3-4be0-815f-a85ed1ee42f2" 00:21:32.699 ], 00:21:32.699 "product_name": "Raid Volume", 00:21:32.699 "block_size": 512, 00:21:32.699 "num_blocks": 262144, 00:21:32.699 "uuid": "19fdffe8-b4f3-4be0-815f-a85ed1ee42f2", 00:21:32.699 "assigned_rate_limits": { 00:21:32.699 "rw_ios_per_sec": 0, 00:21:32.699 "rw_mbytes_per_sec": 0, 00:21:32.699 "r_mbytes_per_sec": 0, 00:21:32.699 "w_mbytes_per_sec": 0 00:21:32.699 }, 00:21:32.699 "claimed": false, 00:21:32.699 "zoned": false, 00:21:32.699 "supported_io_types": { 00:21:32.699 "read": true, 00:21:32.699 "write": true, 00:21:32.699 "unmap": true, 00:21:32.699 "write_zeroes": true, 00:21:32.699 "flush": true, 
00:21:32.699 "reset": true, 00:21:32.699 "compare": false, 00:21:32.699 "compare_and_write": false, 00:21:32.699 "abort": false, 00:21:32.699 "nvme_admin": false, 00:21:32.699 "nvme_io": false 00:21:32.699 }, 00:21:32.699 "memory_domains": [ 00:21:32.699 { 00:21:32.699 "dma_device_id": "system", 00:21:32.699 "dma_device_type": 1 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.699 "dma_device_type": 2 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "dma_device_id": "system", 00:21:32.699 "dma_device_type": 1 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.699 "dma_device_type": 2 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "dma_device_id": "system", 00:21:32.699 "dma_device_type": 1 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.699 "dma_device_type": 2 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "dma_device_id": "system", 00:21:32.699 "dma_device_type": 1 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.699 "dma_device_type": 2 00:21:32.699 } 00:21:32.699 ], 00:21:32.699 "driver_specific": { 00:21:32.699 "raid": { 00:21:32.699 "uuid": "19fdffe8-b4f3-4be0-815f-a85ed1ee42f2", 00:21:32.699 "strip_size_kb": 64, 00:21:32.699 "state": "online", 00:21:32.699 "raid_level": "raid0", 00:21:32.699 "superblock": false, 00:21:32.699 "num_base_bdevs": 4, 00:21:32.699 "num_base_bdevs_discovered": 4, 00:21:32.699 "num_base_bdevs_operational": 4, 00:21:32.699 "base_bdevs_list": [ 00:21:32.699 { 00:21:32.699 "name": "NewBaseBdev", 00:21:32.699 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:32.699 "is_configured": true, 00:21:32.699 "data_offset": 0, 00:21:32.699 "data_size": 65536 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "name": "BaseBdev2", 00:21:32.699 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:32.699 "is_configured": true, 00:21:32.699 "data_offset": 0, 00:21:32.699 "data_size": 65536 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "name": "BaseBdev3", 00:21:32.699 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:32.699 "is_configured": true, 00:21:32.699 "data_offset": 0, 00:21:32.699 "data_size": 65536 00:21:32.699 }, 00:21:32.699 { 00:21:32.699 "name": "BaseBdev4", 00:21:32.699 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:32.699 "is_configured": true, 00:21:32.699 "data_offset": 0, 00:21:32.699 "data_size": 65536 00:21:32.699 } 00:21:32.699 ] 00:21:32.699 } 00:21:32.699 } 00:21:32.699 }' 00:21:32.700 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:32.957 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:21:32.957 BaseBdev2 00:21:32.957 BaseBdev3 00:21:32.957 BaseBdev4' 00:21:32.957 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:32.958 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:21:32.958 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:32.958 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:32.958 "name": "NewBaseBdev", 00:21:32.958 "aliases": [ 00:21:32.958 "59212243-456b-4f51-aaa7-51e1c1c54c5e" 00:21:32.958 ], 00:21:32.958 "product_name": 
"Malloc disk", 00:21:32.958 "block_size": 512, 00:21:32.958 "num_blocks": 65536, 00:21:32.958 "uuid": "59212243-456b-4f51-aaa7-51e1c1c54c5e", 00:21:32.958 "assigned_rate_limits": { 00:21:32.958 "rw_ios_per_sec": 0, 00:21:32.958 "rw_mbytes_per_sec": 0, 00:21:32.958 "r_mbytes_per_sec": 0, 00:21:32.958 "w_mbytes_per_sec": 0 00:21:32.958 }, 00:21:32.958 "claimed": true, 00:21:32.958 "claim_type": "exclusive_write", 00:21:32.958 "zoned": false, 00:21:32.958 "supported_io_types": { 00:21:32.958 "read": true, 00:21:32.958 "write": true, 00:21:32.958 "unmap": true, 00:21:32.958 "write_zeroes": true, 00:21:32.958 "flush": true, 00:21:32.958 "reset": true, 00:21:32.958 "compare": false, 00:21:32.958 "compare_and_write": false, 00:21:32.958 "abort": true, 00:21:32.958 "nvme_admin": false, 00:21:32.958 "nvme_io": false 00:21:32.958 }, 00:21:32.958 "memory_domains": [ 00:21:32.958 { 00:21:32.958 "dma_device_id": "system", 00:21:32.958 "dma_device_type": 1 00:21:32.958 }, 00:21:32.958 { 00:21:32.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:32.958 "dma_device_type": 2 00:21:32.958 } 00:21:32.958 ], 00:21:32.958 "driver_specific": {} 00:21:32.958 }' 00:21:32.958 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.216 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.216 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:33.216 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.216 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.216 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:33.216 12:03:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.216 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.474 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:33.474 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.474 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.474 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:33.474 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:33.474 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:33.474 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:33.732 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:33.732 "name": "BaseBdev2", 00:21:33.732 "aliases": [ 00:21:33.732 "b19ae253-94a1-479d-9d8f-dee9fe190151" 00:21:33.732 ], 00:21:33.732 "product_name": "Malloc disk", 00:21:33.732 "block_size": 512, 00:21:33.732 "num_blocks": 65536, 00:21:33.732 "uuid": "b19ae253-94a1-479d-9d8f-dee9fe190151", 00:21:33.732 "assigned_rate_limits": { 00:21:33.732 "rw_ios_per_sec": 0, 00:21:33.732 "rw_mbytes_per_sec": 0, 00:21:33.732 "r_mbytes_per_sec": 0, 00:21:33.732 "w_mbytes_per_sec": 0 00:21:33.732 }, 00:21:33.732 "claimed": true, 00:21:33.732 "claim_type": "exclusive_write", 00:21:33.732 "zoned": false, 00:21:33.732 "supported_io_types": { 00:21:33.732 "read": 
true, 00:21:33.732 "write": true, 00:21:33.732 "unmap": true, 00:21:33.732 "write_zeroes": true, 00:21:33.732 "flush": true, 00:21:33.732 "reset": true, 00:21:33.732 "compare": false, 00:21:33.732 "compare_and_write": false, 00:21:33.732 "abort": true, 00:21:33.732 "nvme_admin": false, 00:21:33.732 "nvme_io": false 00:21:33.732 }, 00:21:33.732 "memory_domains": [ 00:21:33.732 { 00:21:33.732 "dma_device_id": "system", 00:21:33.732 "dma_device_type": 1 00:21:33.732 }, 00:21:33.732 { 00:21:33.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:33.732 "dma_device_type": 2 00:21:33.732 } 00:21:33.732 ], 00:21:33.732 "driver_specific": {} 00:21:33.732 }' 00:21:33.732 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.732 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:33.732 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:33.732 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.990 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:33.990 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:33.990 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.990 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:33.990 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:33.990 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:33.990 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.254 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:34.255 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:34.255 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:34.255 12:03:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:34.517 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:34.517 "name": "BaseBdev3", 00:21:34.517 "aliases": [ 00:21:34.517 "5d4eff93-21a8-45eb-a134-c506b1c719c2" 00:21:34.517 ], 00:21:34.517 "product_name": "Malloc disk", 00:21:34.517 "block_size": 512, 00:21:34.517 "num_blocks": 65536, 00:21:34.517 "uuid": "5d4eff93-21a8-45eb-a134-c506b1c719c2", 00:21:34.517 "assigned_rate_limits": { 00:21:34.517 "rw_ios_per_sec": 0, 00:21:34.517 "rw_mbytes_per_sec": 0, 00:21:34.517 "r_mbytes_per_sec": 0, 00:21:34.517 "w_mbytes_per_sec": 0 00:21:34.517 }, 00:21:34.517 "claimed": true, 00:21:34.518 "claim_type": "exclusive_write", 00:21:34.518 "zoned": false, 00:21:34.518 "supported_io_types": { 00:21:34.518 "read": true, 00:21:34.518 "write": true, 00:21:34.518 "unmap": true, 00:21:34.518 "write_zeroes": true, 00:21:34.518 "flush": true, 00:21:34.518 "reset": true, 00:21:34.518 "compare": false, 00:21:34.518 "compare_and_write": false, 00:21:34.518 "abort": true, 00:21:34.518 "nvme_admin": false, 00:21:34.518 "nvme_io": false 00:21:34.518 }, 00:21:34.518 "memory_domains": [ 00:21:34.518 { 00:21:34.518 "dma_device_id": "system", 00:21:34.518 "dma_device_type": 1 00:21:34.518 }, 00:21:34.518 { 00:21:34.518 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:34.518 "dma_device_type": 2 00:21:34.518 } 00:21:34.518 ], 00:21:34.518 "driver_specific": {} 00:21:34.518 }' 00:21:34.518 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.518 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:34.518 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:34.518 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.518 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:34.518 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:34.518 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.775 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:34.775 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:34.775 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.775 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:34.775 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:34.775 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:34.775 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:34.775 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:35.032 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:35.032 "name": "BaseBdev4", 00:21:35.032 "aliases": [ 00:21:35.032 "d8150068-de1e-4495-943c-d5d5f507db12" 00:21:35.032 ], 00:21:35.032 "product_name": "Malloc disk", 00:21:35.032 "block_size": 512, 00:21:35.032 "num_blocks": 65536, 00:21:35.032 "uuid": "d8150068-de1e-4495-943c-d5d5f507db12", 00:21:35.032 "assigned_rate_limits": { 00:21:35.032 "rw_ios_per_sec": 0, 00:21:35.032 "rw_mbytes_per_sec": 0, 00:21:35.032 "r_mbytes_per_sec": 0, 00:21:35.032 "w_mbytes_per_sec": 0 00:21:35.032 }, 00:21:35.032 "claimed": true, 00:21:35.032 "claim_type": "exclusive_write", 00:21:35.033 "zoned": false, 00:21:35.033 "supported_io_types": { 00:21:35.033 "read": true, 00:21:35.033 "write": true, 00:21:35.033 "unmap": true, 00:21:35.033 "write_zeroes": true, 00:21:35.033 "flush": true, 00:21:35.033 "reset": true, 00:21:35.033 "compare": false, 00:21:35.033 "compare_and_write": false, 00:21:35.033 "abort": true, 00:21:35.033 "nvme_admin": false, 00:21:35.033 "nvme_io": false 00:21:35.033 }, 00:21:35.033 "memory_domains": [ 00:21:35.033 { 00:21:35.033 "dma_device_id": "system", 00:21:35.033 "dma_device_type": 1 00:21:35.033 }, 00:21:35.033 { 00:21:35.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:35.033 "dma_device_type": 2 00:21:35.033 } 00:21:35.033 ], 00:21:35.033 "driver_specific": {} 00:21:35.033 }' 00:21:35.033 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:35.033 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:35.290 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:35.290 12:03:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:35.290 12:03:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:35.290 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:35.290 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:35.290 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:35.290 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:35.290 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:35.554 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:35.554 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:35.554 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:35.818 [2024-07-21 12:03:34.419208] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:35.818 [2024-07-21 12:03:34.419540] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.818 [2024-07-21 12:03:34.419748] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.818 [2024-07-21 12:03:34.419953] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.818 [2024-07-21 12:03:34.420070] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name Existed_Raid, state offline 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 144628 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 144628 ']' 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 144628 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 144628 00:21:35.818 killing process with pid 144628 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 144628' 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 144628 00:21:35.818 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 144628 00:21:35.818 [2024-07-21 12:03:34.461804] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:35.818 [2024-07-21 12:03:34.502091] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:21:36.076 00:21:36.076 real 0m34.655s 00:21:36.076 user 1m5.641s 00:21:36.076 ************************************ 00:21:36.076 END TEST raid_state_function_test 
00:21:36.076 ************************************ 00:21:36.076 sys 0m4.386s 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:21:36.076 12:03:34 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:21:36.076 12:03:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:21:36.076 12:03:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:36.076 12:03:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:36.076 ************************************ 00:21:36.076 START TEST raid_state_function_test_sb 00:21:36.076 ************************************ 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid0 4 true 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:21:36.076 12:03:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=145743 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 145743' 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:36.076 Process raid pid: 145743 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 145743 /var/tmp/spdk-raid.sock 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 145743 ']' 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:36.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:36.076 12:03:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:36.076 [2024-07-21 12:03:34.863421] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
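(The harness wiring traced above: a bare bdev_svc app is launched with its RPC server on a private UNIX socket and bdev_raid debug logging enabled, and every rpc.py call in this test then targets that socket. A condensed, hand-run equivalent is sketched below; the rpc_get_methods readiness probe is an assumption for illustration, not what waitforlisten itself does:)
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null   # socket is usable once this answers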
00:21:36.076 [2024-07-21 12:03:34.863764] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.334 [2024-07-21 12:03:35.026684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.334 [2024-07-21 12:03:35.124203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.334 [2024-07-21 12:03:35.184866] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:37.268 12:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:37.268 12:03:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:21:37.268 12:03:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:37.268 [2024-07-21 12:03:36.013811] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:37.268 [2024-07-21 12:03:36.014187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:37.268 [2024-07-21 12:03:36.014333] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:37.268 [2024-07-21 12:03:36.014398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:37.268 [2024-07-21 12:03:36.014512] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:37.268 [2024-07-21 12:03:36.014615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:37.268 [2024-07-21 12:03:36.014802] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:37.268 [2024-07-21 12:03:36.014888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:37.268 12:03:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:37.526 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:37.526 "name": "Existed_Raid", 00:21:37.527 "uuid": "f6795bde-b77b-49f0-864d-a80d73b0b20a", 00:21:37.527 "strip_size_kb": 64, 00:21:37.527 "state": "configuring", 00:21:37.527 "raid_level": "raid0", 00:21:37.527 "superblock": true, 00:21:37.527 "num_base_bdevs": 4, 00:21:37.527 "num_base_bdevs_discovered": 0, 00:21:37.527 "num_base_bdevs_operational": 4, 00:21:37.527 "base_bdevs_list": [ 00:21:37.527 { 00:21:37.527 "name": "BaseBdev1", 00:21:37.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.527 "is_configured": false, 00:21:37.527 "data_offset": 0, 00:21:37.527 "data_size": 0 00:21:37.527 }, 00:21:37.527 { 00:21:37.527 "name": "BaseBdev2", 00:21:37.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.527 "is_configured": false, 00:21:37.527 "data_offset": 0, 00:21:37.527 "data_size": 0 00:21:37.527 }, 00:21:37.527 { 00:21:37.527 "name": "BaseBdev3", 00:21:37.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.527 "is_configured": false, 00:21:37.527 "data_offset": 0, 00:21:37.527 "data_size": 0 00:21:37.527 }, 00:21:37.527 { 00:21:37.527 "name": "BaseBdev4", 00:21:37.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:37.527 "is_configured": false, 00:21:37.527 "data_offset": 0, 00:21:37.527 "data_size": 0 00:21:37.527 } 00:21:37.527 ] 00:21:37.527 }' 00:21:37.527 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:37.527 12:03:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:38.093 12:03:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:38.350 [2024-07-21 12:03:37.105878] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:38.350 [2024-07-21 12:03:37.106193] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:21:38.350 12:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:38.609 [2024-07-21 12:03:37.377959] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:38.609 [2024-07-21 12:03:37.378211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:38.609 [2024-07-21 12:03:37.378352] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:38.609 [2024-07-21 12:03:37.378516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:38.609 [2024-07-21 12:03:37.378649] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:38.609 [2024-07-21 12:03:37.378773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:38.609 [2024-07-21 12:03:37.378887] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:38.609 [2024-07-21 12:03:37.379014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:38.609 12:03:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:38.882 [2024-07-21 12:03:37.605376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.882 BaseBdev1 00:21:38.882 12:03:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:21:38.882 12:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:38.882 12:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:38.882 12:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:38.882 12:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:38.882 12:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:38.882 12:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:39.194 12:03:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:39.453 [ 00:21:39.453 { 00:21:39.453 "name": "BaseBdev1", 00:21:39.453 "aliases": [ 00:21:39.453 "c1fb363b-dcdb-442f-9d60-953887d9b2c9" 00:21:39.453 ], 00:21:39.453 "product_name": "Malloc disk", 00:21:39.453 "block_size": 512, 00:21:39.453 "num_blocks": 65536, 00:21:39.453 "uuid": "c1fb363b-dcdb-442f-9d60-953887d9b2c9", 00:21:39.453 "assigned_rate_limits": { 00:21:39.453 "rw_ios_per_sec": 0, 00:21:39.453 "rw_mbytes_per_sec": 0, 00:21:39.453 "r_mbytes_per_sec": 0, 00:21:39.453 "w_mbytes_per_sec": 0 00:21:39.453 }, 00:21:39.453 "claimed": true, 00:21:39.453 "claim_type": "exclusive_write", 00:21:39.453 "zoned": false, 00:21:39.453 "supported_io_types": { 00:21:39.453 "read": true, 00:21:39.453 "write": true, 00:21:39.453 "unmap": true, 00:21:39.453 "write_zeroes": true, 00:21:39.453 "flush": true, 00:21:39.453 "reset": true, 00:21:39.453 "compare": false, 00:21:39.453 "compare_and_write": false, 00:21:39.453 "abort": true, 00:21:39.453 "nvme_admin": false, 00:21:39.453 "nvme_io": false 00:21:39.453 }, 00:21:39.453 "memory_domains": [ 00:21:39.453 { 00:21:39.453 "dma_device_id": "system", 00:21:39.453 "dma_device_type": 1 00:21:39.453 }, 00:21:39.453 { 00:21:39.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:39.453 "dma_device_type": 2 00:21:39.453 } 00:21:39.453 ], 00:21:39.453 "driver_specific": {} 00:21:39.453 } 00:21:39.453 ] 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.453 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:39.711 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:39.711 "name": "Existed_Raid", 00:21:39.711 "uuid": "7a1cea8e-af9b-4bed-9a9a-743be833e87a", 00:21:39.711 "strip_size_kb": 64, 00:21:39.711 "state": "configuring", 00:21:39.711 "raid_level": "raid0", 00:21:39.711 "superblock": true, 00:21:39.711 "num_base_bdevs": 4, 00:21:39.711 "num_base_bdevs_discovered": 1, 00:21:39.711 "num_base_bdevs_operational": 4, 00:21:39.711 "base_bdevs_list": [ 00:21:39.711 { 00:21:39.711 "name": "BaseBdev1", 00:21:39.711 "uuid": "c1fb363b-dcdb-442f-9d60-953887d9b2c9", 00:21:39.711 "is_configured": true, 00:21:39.711 "data_offset": 2048, 00:21:39.711 "data_size": 63488 00:21:39.711 }, 00:21:39.711 { 00:21:39.711 "name": "BaseBdev2", 00:21:39.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.711 "is_configured": false, 00:21:39.711 "data_offset": 0, 00:21:39.711 "data_size": 0 00:21:39.711 }, 00:21:39.711 { 00:21:39.711 "name": "BaseBdev3", 00:21:39.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.711 "is_configured": false, 00:21:39.711 "data_offset": 0, 00:21:39.711 "data_size": 0 00:21:39.711 }, 00:21:39.711 { 00:21:39.711 "name": "BaseBdev4", 00:21:39.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.711 "is_configured": false, 00:21:39.711 "data_offset": 0, 00:21:39.711 "data_size": 0 00:21:39.711 } 00:21:39.711 ] 00:21:39.711 }' 00:21:39.711 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:39.711 12:03:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:40.278 12:03:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:40.537 [2024-07-21 12:03:39.201792] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:40.537 [2024-07-21 12:03:39.202171] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:21:40.537 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:40.795 [2024-07-21 12:03:39.425904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:40.795 [2024-07-21 12:03:39.428559] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:40.795 [2024-07-21 12:03:39.428821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:40.795 
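(The create call being exercised here, with its flags spelled out; the mapping follows the script variables above: -z strip size in KB, -s store a metadata superblock, -r RAID level, -b base bdev list, -n raid bdev name:)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -s -r raid0 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
(With only BaseBdev1 present the raid is accepted but stays in the "configuring" state; the remaining base bdevs are created and claimed one at a time below, and the raid goes "online" once the fourth is claimed.)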
[2024-07-21 12:03:39.428961] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:40.795 [2024-07-21 12:03:39.429030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:40.795 [2024-07-21 12:03:39.429135] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:21:40.795 [2024-07-21 12:03:39.429196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.795 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:41.053 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:41.053 "name": "Existed_Raid", 00:21:41.053 "uuid": "001363c9-88e4-4168-aee9-56f8315f3f34", 00:21:41.053 "strip_size_kb": 64, 00:21:41.053 "state": "configuring", 00:21:41.053 "raid_level": "raid0", 00:21:41.053 "superblock": true, 00:21:41.053 "num_base_bdevs": 4, 00:21:41.053 "num_base_bdevs_discovered": 1, 00:21:41.053 "num_base_bdevs_operational": 4, 00:21:41.053 "base_bdevs_list": [ 00:21:41.053 { 00:21:41.053 "name": "BaseBdev1", 00:21:41.053 "uuid": "c1fb363b-dcdb-442f-9d60-953887d9b2c9", 00:21:41.053 "is_configured": true, 00:21:41.053 "data_offset": 2048, 00:21:41.053 "data_size": 63488 00:21:41.053 }, 00:21:41.053 { 00:21:41.053 "name": "BaseBdev2", 00:21:41.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.053 "is_configured": false, 00:21:41.053 "data_offset": 0, 00:21:41.053 "data_size": 0 00:21:41.053 }, 00:21:41.053 { 00:21:41.053 "name": "BaseBdev3", 00:21:41.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.053 "is_configured": false, 00:21:41.053 "data_offset": 0, 00:21:41.053 "data_size": 0 00:21:41.053 }, 00:21:41.053 { 00:21:41.053 "name": "BaseBdev4", 00:21:41.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:41.053 "is_configured": 
false, 00:21:41.053 "data_offset": 0, 00:21:41.053 "data_size": 0 00:21:41.053 } 00:21:41.053 ] 00:21:41.053 }' 00:21:41.053 12:03:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:41.053 12:03:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:41.673 12:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:41.932 [2024-07-21 12:03:40.570646] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:41.932 BaseBdev2 00:21:41.932 12:03:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:21:41.932 12:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:41.932 12:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:41.932 12:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:41.932 12:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:41.932 12:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:41.932 12:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:42.190 12:03:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:42.190 [ 00:21:42.190 { 00:21:42.190 "name": "BaseBdev2", 00:21:42.190 "aliases": [ 00:21:42.190 "eaaecb41-01d4-44ab-b33d-5471bb787a46" 00:21:42.190 ], 00:21:42.190 "product_name": "Malloc disk", 00:21:42.190 "block_size": 512, 00:21:42.190 "num_blocks": 65536, 00:21:42.190 "uuid": "eaaecb41-01d4-44ab-b33d-5471bb787a46", 00:21:42.190 "assigned_rate_limits": { 00:21:42.190 "rw_ios_per_sec": 0, 00:21:42.190 "rw_mbytes_per_sec": 0, 00:21:42.190 "r_mbytes_per_sec": 0, 00:21:42.190 "w_mbytes_per_sec": 0 00:21:42.190 }, 00:21:42.190 "claimed": true, 00:21:42.190 "claim_type": "exclusive_write", 00:21:42.190 "zoned": false, 00:21:42.190 "supported_io_types": { 00:21:42.190 "read": true, 00:21:42.190 "write": true, 00:21:42.190 "unmap": true, 00:21:42.190 "write_zeroes": true, 00:21:42.190 "flush": true, 00:21:42.190 "reset": true, 00:21:42.190 "compare": false, 00:21:42.190 "compare_and_write": false, 00:21:42.190 "abort": true, 00:21:42.190 "nvme_admin": false, 00:21:42.190 "nvme_io": false 00:21:42.190 }, 00:21:42.190 "memory_domains": [ 00:21:42.190 { 00:21:42.190 "dma_device_id": "system", 00:21:42.190 "dma_device_type": 1 00:21:42.190 }, 00:21:42.190 { 00:21:42.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:42.190 "dma_device_type": 2 00:21:42.190 } 00:21:42.190 ], 00:21:42.190 "driver_specific": {} 00:21:42.190 } 00:21:42.190 ] 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state 
Existed_Raid configuring raid0 64 4 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:42.190 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:42.448 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.448 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:42.706 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:42.706 "name": "Existed_Raid", 00:21:42.706 "uuid": "001363c9-88e4-4168-aee9-56f8315f3f34", 00:21:42.706 "strip_size_kb": 64, 00:21:42.706 "state": "configuring", 00:21:42.706 "raid_level": "raid0", 00:21:42.706 "superblock": true, 00:21:42.706 "num_base_bdevs": 4, 00:21:42.706 "num_base_bdevs_discovered": 2, 00:21:42.706 "num_base_bdevs_operational": 4, 00:21:42.706 "base_bdevs_list": [ 00:21:42.706 { 00:21:42.706 "name": "BaseBdev1", 00:21:42.706 "uuid": "c1fb363b-dcdb-442f-9d60-953887d9b2c9", 00:21:42.706 "is_configured": true, 00:21:42.706 "data_offset": 2048, 00:21:42.706 "data_size": 63488 00:21:42.706 }, 00:21:42.706 { 00:21:42.706 "name": "BaseBdev2", 00:21:42.706 "uuid": "eaaecb41-01d4-44ab-b33d-5471bb787a46", 00:21:42.706 "is_configured": true, 00:21:42.706 "data_offset": 2048, 00:21:42.706 "data_size": 63488 00:21:42.706 }, 00:21:42.706 { 00:21:42.706 "name": "BaseBdev3", 00:21:42.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.706 "is_configured": false, 00:21:42.706 "data_offset": 0, 00:21:42.706 "data_size": 0 00:21:42.706 }, 00:21:42.706 { 00:21:42.706 "name": "BaseBdev4", 00:21:42.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.706 "is_configured": false, 00:21:42.706 "data_offset": 0, 00:21:42.706 "data_size": 0 00:21:42.706 } 00:21:42.706 ] 00:21:42.706 }' 00:21:42.706 12:03:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:42.706 12:03:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:43.272 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:43.530 [2024-07-21 12:03:42.224050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:43.530 BaseBdev3 00:21:43.530 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:21:43.530 12:03:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:43.530 12:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:43.530 12:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:43.530 12:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:43.530 12:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:43.530 12:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:43.788 12:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:44.045 [ 00:21:44.045 { 00:21:44.045 "name": "BaseBdev3", 00:21:44.045 "aliases": [ 00:21:44.045 "bbcd706e-5de5-4293-8c87-25f7017b9fe3" 00:21:44.045 ], 00:21:44.045 "product_name": "Malloc disk", 00:21:44.045 "block_size": 512, 00:21:44.045 "num_blocks": 65536, 00:21:44.045 "uuid": "bbcd706e-5de5-4293-8c87-25f7017b9fe3", 00:21:44.045 "assigned_rate_limits": { 00:21:44.045 "rw_ios_per_sec": 0, 00:21:44.045 "rw_mbytes_per_sec": 0, 00:21:44.045 "r_mbytes_per_sec": 0, 00:21:44.045 "w_mbytes_per_sec": 0 00:21:44.045 }, 00:21:44.045 "claimed": true, 00:21:44.045 "claim_type": "exclusive_write", 00:21:44.045 "zoned": false, 00:21:44.045 "supported_io_types": { 00:21:44.045 "read": true, 00:21:44.045 "write": true, 00:21:44.045 "unmap": true, 00:21:44.045 "write_zeroes": true, 00:21:44.045 "flush": true, 00:21:44.045 "reset": true, 00:21:44.045 "compare": false, 00:21:44.045 "compare_and_write": false, 00:21:44.045 "abort": true, 00:21:44.045 "nvme_admin": false, 00:21:44.045 "nvme_io": false 00:21:44.045 }, 00:21:44.045 "memory_domains": [ 00:21:44.045 { 00:21:44.045 "dma_device_id": "system", 00:21:44.045 "dma_device_type": 1 00:21:44.045 }, 00:21:44.045 { 00:21:44.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.045 "dma_device_type": 2 00:21:44.045 } 00:21:44.045 ], 00:21:44.045 "driver_specific": {} 00:21:44.045 } 00:21:44.045 ] 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:44.045 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.046 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.303 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:44.303 "name": "Existed_Raid", 00:21:44.303 "uuid": "001363c9-88e4-4168-aee9-56f8315f3f34", 00:21:44.303 "strip_size_kb": 64, 00:21:44.303 "state": "configuring", 00:21:44.303 "raid_level": "raid0", 00:21:44.303 "superblock": true, 00:21:44.304 "num_base_bdevs": 4, 00:21:44.304 "num_base_bdevs_discovered": 3, 00:21:44.304 "num_base_bdevs_operational": 4, 00:21:44.304 "base_bdevs_list": [ 00:21:44.304 { 00:21:44.304 "name": "BaseBdev1", 00:21:44.304 "uuid": "c1fb363b-dcdb-442f-9d60-953887d9b2c9", 00:21:44.304 "is_configured": true, 00:21:44.304 "data_offset": 2048, 00:21:44.304 "data_size": 63488 00:21:44.304 }, 00:21:44.304 { 00:21:44.304 "name": "BaseBdev2", 00:21:44.304 "uuid": "eaaecb41-01d4-44ab-b33d-5471bb787a46", 00:21:44.304 "is_configured": true, 00:21:44.304 "data_offset": 2048, 00:21:44.304 "data_size": 63488 00:21:44.304 }, 00:21:44.304 { 00:21:44.304 "name": "BaseBdev3", 00:21:44.304 "uuid": "bbcd706e-5de5-4293-8c87-25f7017b9fe3", 00:21:44.304 "is_configured": true, 00:21:44.304 "data_offset": 2048, 00:21:44.304 "data_size": 63488 00:21:44.304 }, 00:21:44.304 { 00:21:44.304 "name": "BaseBdev4", 00:21:44.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.304 "is_configured": false, 00:21:44.304 "data_offset": 0, 00:21:44.304 "data_size": 0 00:21:44.304 } 00:21:44.304 ] 00:21:44.304 }' 00:21:44.304 12:03:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:44.304 12:03:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:44.869 12:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:45.127 [2024-07-21 12:03:43.893179] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:45.127 [2024-07-21 12:03:43.893767] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:21:45.127 [2024-07-21 12:03:43.893900] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:21:45.127 [2024-07-21 12:03:43.894090] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:21:45.127 [2024-07-21 12:03:43.894529] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:21:45.127 [2024-07-21 12:03:43.894717] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:21:45.127 BaseBdev4 00:21:45.127 [2024-07-21 12:03:43.895023] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.127 12:03:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:21:45.127 12:03:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:21:45.127 12:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:45.127 12:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:45.127 12:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:45.127 12:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:45.127 12:03:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:45.385 12:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:45.643 [ 00:21:45.643 { 00:21:45.643 "name": "BaseBdev4", 00:21:45.643 "aliases": [ 00:21:45.643 "95bdc534-57b7-4783-916f-c79120522649" 00:21:45.643 ], 00:21:45.643 "product_name": "Malloc disk", 00:21:45.643 "block_size": 512, 00:21:45.643 "num_blocks": 65536, 00:21:45.643 "uuid": "95bdc534-57b7-4783-916f-c79120522649", 00:21:45.643 "assigned_rate_limits": { 00:21:45.643 "rw_ios_per_sec": 0, 00:21:45.643 "rw_mbytes_per_sec": 0, 00:21:45.643 "r_mbytes_per_sec": 0, 00:21:45.643 "w_mbytes_per_sec": 0 00:21:45.643 }, 00:21:45.643 "claimed": true, 00:21:45.643 "claim_type": "exclusive_write", 00:21:45.643 "zoned": false, 00:21:45.643 "supported_io_types": { 00:21:45.643 "read": true, 00:21:45.643 "write": true, 00:21:45.643 "unmap": true, 00:21:45.643 "write_zeroes": true, 00:21:45.643 "flush": true, 00:21:45.643 "reset": true, 00:21:45.643 "compare": false, 00:21:45.643 "compare_and_write": false, 00:21:45.643 "abort": true, 00:21:45.643 "nvme_admin": false, 00:21:45.643 "nvme_io": false 00:21:45.643 }, 00:21:45.643 "memory_domains": [ 00:21:45.643 { 00:21:45.643 "dma_device_id": "system", 00:21:45.643 "dma_device_type": 1 00:21:45.643 }, 00:21:45.643 { 00:21:45.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:45.643 "dma_device_type": 2 00:21:45.643 } 00:21:45.643 ], 00:21:45.643 "driver_specific": {} 00:21:45.643 } 00:21:45.643 ] 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
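(All four base bdevs are now claimed and the raid has been brought online; the verify_raid_bdev_state call traced here boils down to fetching the raid's JSON and comparing a handful of fields against the expected values. A condensed sketch of that comparison, with field names taken from the JSON dumps in this log and the helper's exact implementation not reproduced:)
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r .state <<< "$info") == online ]]                         # expected_state
  [[ $(jq -r .raid_level <<< "$info") == raid0 ]]
  [[ $(jq -r .strip_size_kb <<< "$info") == 64 ]]
  [[ $(jq -r .num_base_bdevs_operational <<< "$info") == 4 ]]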
00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.643 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.901 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:45.901 "name": "Existed_Raid", 00:21:45.901 "uuid": "001363c9-88e4-4168-aee9-56f8315f3f34", 00:21:45.901 "strip_size_kb": 64, 00:21:45.901 "state": "online", 00:21:45.901 "raid_level": "raid0", 00:21:45.901 "superblock": true, 00:21:45.901 "num_base_bdevs": 4, 00:21:45.901 "num_base_bdevs_discovered": 4, 00:21:45.901 "num_base_bdevs_operational": 4, 00:21:45.901 "base_bdevs_list": [ 00:21:45.901 { 00:21:45.901 "name": "BaseBdev1", 00:21:45.901 "uuid": "c1fb363b-dcdb-442f-9d60-953887d9b2c9", 00:21:45.901 "is_configured": true, 00:21:45.901 "data_offset": 2048, 00:21:45.901 "data_size": 63488 00:21:45.901 }, 00:21:45.901 { 00:21:45.901 "name": "BaseBdev2", 00:21:45.901 "uuid": "eaaecb41-01d4-44ab-b33d-5471bb787a46", 00:21:45.901 "is_configured": true, 00:21:45.901 "data_offset": 2048, 00:21:45.901 "data_size": 63488 00:21:45.901 }, 00:21:45.901 { 00:21:45.901 "name": "BaseBdev3", 00:21:45.901 "uuid": "bbcd706e-5de5-4293-8c87-25f7017b9fe3", 00:21:45.901 "is_configured": true, 00:21:45.901 "data_offset": 2048, 00:21:45.901 "data_size": 63488 00:21:45.901 }, 00:21:45.901 { 00:21:45.901 "name": "BaseBdev4", 00:21:45.901 "uuid": "95bdc534-57b7-4783-916f-c79120522649", 00:21:45.901 "is_configured": true, 00:21:45.901 "data_offset": 2048, 00:21:45.901 "data_size": 63488 00:21:45.901 } 00:21:45.901 ] 00:21:45.901 }' 00:21:45.901 12:03:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:45.901 12:03:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:46.467 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:21:46.467 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:21:46.467 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:21:46.467 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:21:46.467 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:21:46.467 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:21:46.467 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:21:46.467 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:21:46.725 [2024-07-21 12:03:45.521986] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:46.725 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:21:46.725 "name": "Existed_Raid", 00:21:46.725 "aliases": [ 00:21:46.725 "001363c9-88e4-4168-aee9-56f8315f3f34" 00:21:46.725 ], 00:21:46.725 
"product_name": "Raid Volume", 00:21:46.725 "block_size": 512, 00:21:46.725 "num_blocks": 253952, 00:21:46.725 "uuid": "001363c9-88e4-4168-aee9-56f8315f3f34", 00:21:46.725 "assigned_rate_limits": { 00:21:46.725 "rw_ios_per_sec": 0, 00:21:46.725 "rw_mbytes_per_sec": 0, 00:21:46.725 "r_mbytes_per_sec": 0, 00:21:46.725 "w_mbytes_per_sec": 0 00:21:46.725 }, 00:21:46.725 "claimed": false, 00:21:46.725 "zoned": false, 00:21:46.725 "supported_io_types": { 00:21:46.725 "read": true, 00:21:46.725 "write": true, 00:21:46.725 "unmap": true, 00:21:46.725 "write_zeroes": true, 00:21:46.725 "flush": true, 00:21:46.725 "reset": true, 00:21:46.725 "compare": false, 00:21:46.725 "compare_and_write": false, 00:21:46.725 "abort": false, 00:21:46.725 "nvme_admin": false, 00:21:46.725 "nvme_io": false 00:21:46.725 }, 00:21:46.725 "memory_domains": [ 00:21:46.725 { 00:21:46.725 "dma_device_id": "system", 00:21:46.725 "dma_device_type": 1 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.725 "dma_device_type": 2 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "dma_device_id": "system", 00:21:46.725 "dma_device_type": 1 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.725 "dma_device_type": 2 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "dma_device_id": "system", 00:21:46.725 "dma_device_type": 1 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.725 "dma_device_type": 2 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "dma_device_id": "system", 00:21:46.725 "dma_device_type": 1 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.725 "dma_device_type": 2 00:21:46.725 } 00:21:46.725 ], 00:21:46.725 "driver_specific": { 00:21:46.725 "raid": { 00:21:46.725 "uuid": "001363c9-88e4-4168-aee9-56f8315f3f34", 00:21:46.725 "strip_size_kb": 64, 00:21:46.725 "state": "online", 00:21:46.725 "raid_level": "raid0", 00:21:46.725 "superblock": true, 00:21:46.725 "num_base_bdevs": 4, 00:21:46.725 "num_base_bdevs_discovered": 4, 00:21:46.725 "num_base_bdevs_operational": 4, 00:21:46.725 "base_bdevs_list": [ 00:21:46.725 { 00:21:46.725 "name": "BaseBdev1", 00:21:46.725 "uuid": "c1fb363b-dcdb-442f-9d60-953887d9b2c9", 00:21:46.725 "is_configured": true, 00:21:46.725 "data_offset": 2048, 00:21:46.725 "data_size": 63488 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "name": "BaseBdev2", 00:21:46.725 "uuid": "eaaecb41-01d4-44ab-b33d-5471bb787a46", 00:21:46.725 "is_configured": true, 00:21:46.725 "data_offset": 2048, 00:21:46.725 "data_size": 63488 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "name": "BaseBdev3", 00:21:46.725 "uuid": "bbcd706e-5de5-4293-8c87-25f7017b9fe3", 00:21:46.725 "is_configured": true, 00:21:46.725 "data_offset": 2048, 00:21:46.725 "data_size": 63488 00:21:46.725 }, 00:21:46.725 { 00:21:46.725 "name": "BaseBdev4", 00:21:46.725 "uuid": "95bdc534-57b7-4783-916f-c79120522649", 00:21:46.725 "is_configured": true, 00:21:46.725 "data_offset": 2048, 00:21:46.725 "data_size": 63488 00:21:46.725 } 00:21:46.725 ] 00:21:46.725 } 00:21:46.725 } 00:21:46.725 }' 00:21:46.725 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:21:46.725 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:21:46.725 BaseBdev2 00:21:46.725 BaseBdev3 00:21:46.725 BaseBdev4' 00:21:46.725 12:03:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:46.725 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:21:46.725 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:47.290 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:47.290 "name": "BaseBdev1", 00:21:47.290 "aliases": [ 00:21:47.290 "c1fb363b-dcdb-442f-9d60-953887d9b2c9" 00:21:47.290 ], 00:21:47.290 "product_name": "Malloc disk", 00:21:47.290 "block_size": 512, 00:21:47.290 "num_blocks": 65536, 00:21:47.290 "uuid": "c1fb363b-dcdb-442f-9d60-953887d9b2c9", 00:21:47.290 "assigned_rate_limits": { 00:21:47.290 "rw_ios_per_sec": 0, 00:21:47.290 "rw_mbytes_per_sec": 0, 00:21:47.290 "r_mbytes_per_sec": 0, 00:21:47.290 "w_mbytes_per_sec": 0 00:21:47.290 }, 00:21:47.290 "claimed": true, 00:21:47.290 "claim_type": "exclusive_write", 00:21:47.290 "zoned": false, 00:21:47.290 "supported_io_types": { 00:21:47.290 "read": true, 00:21:47.290 "write": true, 00:21:47.290 "unmap": true, 00:21:47.290 "write_zeroes": true, 00:21:47.290 "flush": true, 00:21:47.290 "reset": true, 00:21:47.290 "compare": false, 00:21:47.290 "compare_and_write": false, 00:21:47.290 "abort": true, 00:21:47.290 "nvme_admin": false, 00:21:47.290 "nvme_io": false 00:21:47.290 }, 00:21:47.290 "memory_domains": [ 00:21:47.290 { 00:21:47.290 "dma_device_id": "system", 00:21:47.290 "dma_device_type": 1 00:21:47.290 }, 00:21:47.290 { 00:21:47.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.290 "dma_device_type": 2 00:21:47.290 } 00:21:47.290 ], 00:21:47.290 "driver_specific": {} 00:21:47.290 }' 00:21:47.290 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.290 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.290 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:47.290 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.290 12:03:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:47.290 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:47.290 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.290 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:47.290 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:47.290 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.548 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:47.548 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:47.548 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:47.548 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:21:47.548 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:47.806 12:03:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:47.806 "name": "BaseBdev2", 00:21:47.806 "aliases": [ 00:21:47.806 "eaaecb41-01d4-44ab-b33d-5471bb787a46" 00:21:47.806 ], 00:21:47.806 "product_name": "Malloc disk", 00:21:47.806 "block_size": 512, 00:21:47.806 "num_blocks": 65536, 00:21:47.806 "uuid": "eaaecb41-01d4-44ab-b33d-5471bb787a46", 00:21:47.806 "assigned_rate_limits": { 00:21:47.806 "rw_ios_per_sec": 0, 00:21:47.806 "rw_mbytes_per_sec": 0, 00:21:47.806 "r_mbytes_per_sec": 0, 00:21:47.806 "w_mbytes_per_sec": 0 00:21:47.806 }, 00:21:47.806 "claimed": true, 00:21:47.806 "claim_type": "exclusive_write", 00:21:47.806 "zoned": false, 00:21:47.806 "supported_io_types": { 00:21:47.806 "read": true, 00:21:47.806 "write": true, 00:21:47.806 "unmap": true, 00:21:47.806 "write_zeroes": true, 00:21:47.806 "flush": true, 00:21:47.806 "reset": true, 00:21:47.806 "compare": false, 00:21:47.806 "compare_and_write": false, 00:21:47.806 "abort": true, 00:21:47.806 "nvme_admin": false, 00:21:47.806 "nvme_io": false 00:21:47.806 }, 00:21:47.806 "memory_domains": [ 00:21:47.806 { 00:21:47.806 "dma_device_id": "system", 00:21:47.806 "dma_device_type": 1 00:21:47.806 }, 00:21:47.806 { 00:21:47.806 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.806 "dma_device_type": 2 00:21:47.806 } 00:21:47.806 ], 00:21:47.806 "driver_specific": {} 00:21:47.806 }' 00:21:47.806 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.806 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:47.806 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:47.806 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.063 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.063 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:48.063 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.063 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.063 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:48.063 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.063 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.321 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:48.321 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:48.321 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:21:48.321 12:03:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:48.578 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:48.578 "name": "BaseBdev3", 00:21:48.578 "aliases": [ 00:21:48.578 "bbcd706e-5de5-4293-8c87-25f7017b9fe3" 00:21:48.578 ], 00:21:48.578 "product_name": "Malloc disk", 00:21:48.578 "block_size": 512, 00:21:48.578 "num_blocks": 65536, 00:21:48.578 "uuid": "bbcd706e-5de5-4293-8c87-25f7017b9fe3", 00:21:48.578 "assigned_rate_limits": { 00:21:48.578 "rw_ios_per_sec": 0, 00:21:48.578 "rw_mbytes_per_sec": 0, 
00:21:48.578 "r_mbytes_per_sec": 0, 00:21:48.578 "w_mbytes_per_sec": 0 00:21:48.578 }, 00:21:48.578 "claimed": true, 00:21:48.578 "claim_type": "exclusive_write", 00:21:48.578 "zoned": false, 00:21:48.578 "supported_io_types": { 00:21:48.578 "read": true, 00:21:48.578 "write": true, 00:21:48.578 "unmap": true, 00:21:48.578 "write_zeroes": true, 00:21:48.578 "flush": true, 00:21:48.579 "reset": true, 00:21:48.579 "compare": false, 00:21:48.579 "compare_and_write": false, 00:21:48.579 "abort": true, 00:21:48.579 "nvme_admin": false, 00:21:48.579 "nvme_io": false 00:21:48.579 }, 00:21:48.579 "memory_domains": [ 00:21:48.579 { 00:21:48.579 "dma_device_id": "system", 00:21:48.579 "dma_device_type": 1 00:21:48.579 }, 00:21:48.579 { 00:21:48.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:48.579 "dma_device_type": 2 00:21:48.579 } 00:21:48.579 ], 00:21:48.579 "driver_specific": {} 00:21:48.579 }' 00:21:48.579 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.579 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:48.579 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:48.579 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.579 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:48.579 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:48.579 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.579 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:48.836 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:48.836 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.836 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:48.836 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:48.836 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:21:48.836 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:21:48.836 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:21:49.094 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:21:49.094 "name": "BaseBdev4", 00:21:49.094 "aliases": [ 00:21:49.094 "95bdc534-57b7-4783-916f-c79120522649" 00:21:49.094 ], 00:21:49.094 "product_name": "Malloc disk", 00:21:49.094 "block_size": 512, 00:21:49.094 "num_blocks": 65536, 00:21:49.094 "uuid": "95bdc534-57b7-4783-916f-c79120522649", 00:21:49.094 "assigned_rate_limits": { 00:21:49.094 "rw_ios_per_sec": 0, 00:21:49.094 "rw_mbytes_per_sec": 0, 00:21:49.094 "r_mbytes_per_sec": 0, 00:21:49.094 "w_mbytes_per_sec": 0 00:21:49.094 }, 00:21:49.094 "claimed": true, 00:21:49.094 "claim_type": "exclusive_write", 00:21:49.094 "zoned": false, 00:21:49.094 "supported_io_types": { 00:21:49.094 "read": true, 00:21:49.094 "write": true, 00:21:49.094 "unmap": true, 00:21:49.094 "write_zeroes": true, 00:21:49.094 "flush": true, 00:21:49.094 "reset": true, 00:21:49.094 "compare": false, 00:21:49.094 
"compare_and_write": false, 00:21:49.094 "abort": true, 00:21:49.094 "nvme_admin": false, 00:21:49.094 "nvme_io": false 00:21:49.094 }, 00:21:49.094 "memory_domains": [ 00:21:49.094 { 00:21:49.094 "dma_device_id": "system", 00:21:49.094 "dma_device_type": 1 00:21:49.094 }, 00:21:49.094 { 00:21:49.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:49.094 "dma_device_type": 2 00:21:49.094 } 00:21:49.094 ], 00:21:49.094 "driver_specific": {} 00:21:49.094 }' 00:21:49.094 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.094 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:21:49.094 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:21:49.094 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.352 12:03:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:21:49.352 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:21:49.352 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.352 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:21:49.352 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:21:49.352 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.352 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:21:49.609 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:21:49.609 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:49.867 [2024-07-21 12:03:48.514542] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:49.867 [2024-07-21 12:03:48.514949] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.867 [2024-07-21 12:03:48.515152] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.867 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:50.125 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:50.125 "name": "Existed_Raid", 00:21:50.125 "uuid": "001363c9-88e4-4168-aee9-56f8315f3f34", 00:21:50.125 "strip_size_kb": 64, 00:21:50.125 "state": "offline", 00:21:50.125 "raid_level": "raid0", 00:21:50.125 "superblock": true, 00:21:50.125 "num_base_bdevs": 4, 00:21:50.125 "num_base_bdevs_discovered": 3, 00:21:50.125 "num_base_bdevs_operational": 3, 00:21:50.125 "base_bdevs_list": [ 00:21:50.125 { 00:21:50.125 "name": null, 00:21:50.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.125 "is_configured": false, 00:21:50.125 "data_offset": 2048, 00:21:50.125 "data_size": 63488 00:21:50.125 }, 00:21:50.125 { 00:21:50.125 "name": "BaseBdev2", 00:21:50.125 "uuid": "eaaecb41-01d4-44ab-b33d-5471bb787a46", 00:21:50.125 "is_configured": true, 00:21:50.125 "data_offset": 2048, 00:21:50.125 "data_size": 63488 00:21:50.125 }, 00:21:50.125 { 00:21:50.125 "name": "BaseBdev3", 00:21:50.125 "uuid": "bbcd706e-5de5-4293-8c87-25f7017b9fe3", 00:21:50.125 "is_configured": true, 00:21:50.125 "data_offset": 2048, 00:21:50.125 "data_size": 63488 00:21:50.125 }, 00:21:50.125 { 00:21:50.125 "name": "BaseBdev4", 00:21:50.125 "uuid": "95bdc534-57b7-4783-916f-c79120522649", 00:21:50.125 "is_configured": true, 00:21:50.125 "data_offset": 2048, 00:21:50.125 "data_size": 63488 00:21:50.125 } 00:21:50.125 ] 00:21:50.125 }' 00:21:50.125 12:03:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:50.125 12:03:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:50.692 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:21:50.692 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:50.692 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.692 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:50.950 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:50.950 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:50.950 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:51.208 [2024-07-21 12:03:49.951175] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:51.208 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # 
(( i++ )) 00:21:51.208 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:51.208 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.208 12:03:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:51.466 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:51.466 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:51.466 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:51.724 [2024-07-21 12:03:50.423179] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:51.724 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:51.724 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:51.724 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:51.724 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:21:51.982 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:21:51.982 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:51.982 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:21:52.239 [2024-07-21 12:03:50.938906] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:21:52.239 [2024-07-21 12:03:50.939257] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:21:52.239 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:21:52.239 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:21:52.239 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.239 12:03:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:21:52.496 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:21:52.496 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:21:52.496 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:21:52.496 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:21:52.496 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:52.496 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:52.754 BaseBdev2 00:21:52.754 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 
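At this point the array has been torn down: removing BaseBdev1 took the raid0 volume offline, the remaining base bdevs were deleted, and the raid bdev was cleaned up. The loop that has just started above rebuilds the configuration; as a sketch under the same assumptions as before (same rpc.py and socket, commands as shown in the trace), it amounts to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # re-create three of the four base bdevs (BaseBdev1 is deliberately left missing)
  for b in BaseBdev2 BaseBdev3 BaseBdev4; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
    "$rpc" -s "$sock" bdev_wait_for_examine
  done
  # assemble the array with a superblock (-s), 64 KiB strip size, raid0, naming all four members;
  # with BaseBdev1 absent the raid bdev is expected to stay in the "configuring" state
  "$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

The remainder of the trace then removes and re-adds individual base bdevs (bdev_raid_remove_base_bdev / bdev_raid_add_base_bdev) and checks the per-slot is_configured flags after each step.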
00:21:52.754 12:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:21:52.754 12:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:52.754 12:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:52.754 12:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:52.754 12:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:52.754 12:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:53.012 12:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:53.270 [ 00:21:53.270 { 00:21:53.270 "name": "BaseBdev2", 00:21:53.270 "aliases": [ 00:21:53.270 "749b90cb-eb75-4394-8f1b-5741da579a88" 00:21:53.270 ], 00:21:53.270 "product_name": "Malloc disk", 00:21:53.270 "block_size": 512, 00:21:53.270 "num_blocks": 65536, 00:21:53.270 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:21:53.270 "assigned_rate_limits": { 00:21:53.270 "rw_ios_per_sec": 0, 00:21:53.270 "rw_mbytes_per_sec": 0, 00:21:53.270 "r_mbytes_per_sec": 0, 00:21:53.270 "w_mbytes_per_sec": 0 00:21:53.270 }, 00:21:53.271 "claimed": false, 00:21:53.271 "zoned": false, 00:21:53.271 "supported_io_types": { 00:21:53.271 "read": true, 00:21:53.271 "write": true, 00:21:53.271 "unmap": true, 00:21:53.271 "write_zeroes": true, 00:21:53.271 "flush": true, 00:21:53.271 "reset": true, 00:21:53.271 "compare": false, 00:21:53.271 "compare_and_write": false, 00:21:53.271 "abort": true, 00:21:53.271 "nvme_admin": false, 00:21:53.271 "nvme_io": false 00:21:53.271 }, 00:21:53.271 "memory_domains": [ 00:21:53.271 { 00:21:53.271 "dma_device_id": "system", 00:21:53.271 "dma_device_type": 1 00:21:53.271 }, 00:21:53.271 { 00:21:53.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.271 "dma_device_type": 2 00:21:53.271 } 00:21:53.271 ], 00:21:53.271 "driver_specific": {} 00:21:53.271 } 00:21:53.271 ] 00:21:53.271 12:03:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:53.271 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:53.271 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:53.271 12:03:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:53.271 BaseBdev3 00:21:53.528 12:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:21:53.529 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:21:53.529 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:53.529 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:53.529 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:53.529 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:53.529 12:03:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:53.529 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:53.786 [ 00:21:53.786 { 00:21:53.786 "name": "BaseBdev3", 00:21:53.786 "aliases": [ 00:21:53.786 "edda7e1d-068a-4bd8-9775-b639d1c6885e" 00:21:53.786 ], 00:21:53.786 "product_name": "Malloc disk", 00:21:53.786 "block_size": 512, 00:21:53.786 "num_blocks": 65536, 00:21:53.786 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:21:53.786 "assigned_rate_limits": { 00:21:53.786 "rw_ios_per_sec": 0, 00:21:53.786 "rw_mbytes_per_sec": 0, 00:21:53.786 "r_mbytes_per_sec": 0, 00:21:53.786 "w_mbytes_per_sec": 0 00:21:53.786 }, 00:21:53.786 "claimed": false, 00:21:53.786 "zoned": false, 00:21:53.786 "supported_io_types": { 00:21:53.786 "read": true, 00:21:53.786 "write": true, 00:21:53.786 "unmap": true, 00:21:53.786 "write_zeroes": true, 00:21:53.786 "flush": true, 00:21:53.786 "reset": true, 00:21:53.786 "compare": false, 00:21:53.786 "compare_and_write": false, 00:21:53.786 "abort": true, 00:21:53.786 "nvme_admin": false, 00:21:53.786 "nvme_io": false 00:21:53.786 }, 00:21:53.786 "memory_domains": [ 00:21:53.786 { 00:21:53.786 "dma_device_id": "system", 00:21:53.786 "dma_device_type": 1 00:21:53.786 }, 00:21:53.786 { 00:21:53.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.786 "dma_device_type": 2 00:21:53.786 } 00:21:53.786 ], 00:21:53.786 "driver_specific": {} 00:21:53.786 } 00:21:53.786 ] 00:21:53.786 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:53.786 12:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:53.786 12:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:53.786 12:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:54.044 BaseBdev4 00:21:54.044 12:03:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:21:54.044 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:21:54.044 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:54.044 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:54.044 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:54.044 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:54.044 12:03:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:54.301 12:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:21:54.559 [ 00:21:54.559 { 00:21:54.559 "name": "BaseBdev4", 00:21:54.559 "aliases": [ 00:21:54.559 "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116" 00:21:54.559 ], 00:21:54.559 "product_name": "Malloc disk", 00:21:54.559 "block_size": 512, 
00:21:54.559 "num_blocks": 65536, 00:21:54.559 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:21:54.559 "assigned_rate_limits": { 00:21:54.559 "rw_ios_per_sec": 0, 00:21:54.559 "rw_mbytes_per_sec": 0, 00:21:54.559 "r_mbytes_per_sec": 0, 00:21:54.559 "w_mbytes_per_sec": 0 00:21:54.559 }, 00:21:54.559 "claimed": false, 00:21:54.559 "zoned": false, 00:21:54.559 "supported_io_types": { 00:21:54.559 "read": true, 00:21:54.559 "write": true, 00:21:54.559 "unmap": true, 00:21:54.559 "write_zeroes": true, 00:21:54.559 "flush": true, 00:21:54.559 "reset": true, 00:21:54.559 "compare": false, 00:21:54.559 "compare_and_write": false, 00:21:54.559 "abort": true, 00:21:54.559 "nvme_admin": false, 00:21:54.559 "nvme_io": false 00:21:54.559 }, 00:21:54.559 "memory_domains": [ 00:21:54.559 { 00:21:54.559 "dma_device_id": "system", 00:21:54.559 "dma_device_type": 1 00:21:54.559 }, 00:21:54.559 { 00:21:54.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:54.559 "dma_device_type": 2 00:21:54.559 } 00:21:54.559 ], 00:21:54.559 "driver_specific": {} 00:21:54.559 } 00:21:54.559 ] 00:21:54.559 12:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:54.559 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:21:54.559 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:21:54.559 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:21:54.817 [2024-07-21 12:03:53.535005] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:54.817 [2024-07-21 12:03:53.535100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:54.817 [2024-07-21 12:03:53.535149] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:54.817 [2024-07-21 12:03:53.537359] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:54.817 [2024-07-21 12:03:53.537435] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.817 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.074 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:55.074 "name": "Existed_Raid", 00:21:55.074 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:21:55.074 "strip_size_kb": 64, 00:21:55.074 "state": "configuring", 00:21:55.074 "raid_level": "raid0", 00:21:55.074 "superblock": true, 00:21:55.074 "num_base_bdevs": 4, 00:21:55.074 "num_base_bdevs_discovered": 3, 00:21:55.074 "num_base_bdevs_operational": 4, 00:21:55.074 "base_bdevs_list": [ 00:21:55.074 { 00:21:55.074 "name": "BaseBdev1", 00:21:55.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.074 "is_configured": false, 00:21:55.074 "data_offset": 0, 00:21:55.074 "data_size": 0 00:21:55.074 }, 00:21:55.074 { 00:21:55.074 "name": "BaseBdev2", 00:21:55.074 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:21:55.074 "is_configured": true, 00:21:55.074 "data_offset": 2048, 00:21:55.074 "data_size": 63488 00:21:55.074 }, 00:21:55.074 { 00:21:55.074 "name": "BaseBdev3", 00:21:55.074 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:21:55.074 "is_configured": true, 00:21:55.074 "data_offset": 2048, 00:21:55.074 "data_size": 63488 00:21:55.074 }, 00:21:55.074 { 00:21:55.074 "name": "BaseBdev4", 00:21:55.074 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:21:55.074 "is_configured": true, 00:21:55.074 "data_offset": 2048, 00:21:55.074 "data_size": 63488 00:21:55.074 } 00:21:55.074 ] 00:21:55.074 }' 00:21:55.074 12:03:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:55.074 12:03:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:55.641 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:55.898 [2024-07-21 12:03:54.587192] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.898 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.156 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:56.156 "name": "Existed_Raid", 00:21:56.156 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:21:56.156 "strip_size_kb": 64, 00:21:56.156 "state": "configuring", 00:21:56.156 "raid_level": "raid0", 00:21:56.156 "superblock": true, 00:21:56.156 "num_base_bdevs": 4, 00:21:56.156 "num_base_bdevs_discovered": 2, 00:21:56.156 "num_base_bdevs_operational": 4, 00:21:56.156 "base_bdevs_list": [ 00:21:56.156 { 00:21:56.156 "name": "BaseBdev1", 00:21:56.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.156 "is_configured": false, 00:21:56.156 "data_offset": 0, 00:21:56.156 "data_size": 0 00:21:56.156 }, 00:21:56.156 { 00:21:56.156 "name": null, 00:21:56.156 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:21:56.156 "is_configured": false, 00:21:56.156 "data_offset": 2048, 00:21:56.156 "data_size": 63488 00:21:56.156 }, 00:21:56.156 { 00:21:56.156 "name": "BaseBdev3", 00:21:56.156 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:21:56.156 "is_configured": true, 00:21:56.156 "data_offset": 2048, 00:21:56.156 "data_size": 63488 00:21:56.156 }, 00:21:56.156 { 00:21:56.156 "name": "BaseBdev4", 00:21:56.156 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:21:56.156 "is_configured": true, 00:21:56.156 "data_offset": 2048, 00:21:56.156 "data_size": 63488 00:21:56.156 } 00:21:56.156 ] 00:21:56.156 }' 00:21:56.156 12:03:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:56.156 12:03:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:56.720 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:21:56.720 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.978 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:21:56.979 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:57.236 [2024-07-21 12:03:55.948317] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:57.236 BaseBdev1 00:21:57.236 12:03:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:21:57.236 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:21:57.236 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:21:57.236 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:21:57.236 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:21:57.236 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:21:57.236 12:03:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:57.494 12:03:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:57.752 [ 00:21:57.752 { 00:21:57.752 "name": "BaseBdev1", 00:21:57.752 "aliases": [ 00:21:57.752 "8399dfd5-d668-4fc9-abc5-0c93e3d4a231" 00:21:57.752 ], 00:21:57.752 "product_name": "Malloc disk", 00:21:57.752 "block_size": 512, 00:21:57.752 "num_blocks": 65536, 00:21:57.752 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:21:57.752 "assigned_rate_limits": { 00:21:57.752 "rw_ios_per_sec": 0, 00:21:57.752 "rw_mbytes_per_sec": 0, 00:21:57.752 "r_mbytes_per_sec": 0, 00:21:57.752 "w_mbytes_per_sec": 0 00:21:57.752 }, 00:21:57.752 "claimed": true, 00:21:57.752 "claim_type": "exclusive_write", 00:21:57.752 "zoned": false, 00:21:57.752 "supported_io_types": { 00:21:57.752 "read": true, 00:21:57.752 "write": true, 00:21:57.752 "unmap": true, 00:21:57.752 "write_zeroes": true, 00:21:57.753 "flush": true, 00:21:57.753 "reset": true, 00:21:57.753 "compare": false, 00:21:57.753 "compare_and_write": false, 00:21:57.753 "abort": true, 00:21:57.753 "nvme_admin": false, 00:21:57.753 "nvme_io": false 00:21:57.753 }, 00:21:57.753 "memory_domains": [ 00:21:57.753 { 00:21:57.753 "dma_device_id": "system", 00:21:57.753 "dma_device_type": 1 00:21:57.753 }, 00:21:57.753 { 00:21:57.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:57.753 "dma_device_type": 2 00:21:57.753 } 00:21:57.753 ], 00:21:57.753 "driver_specific": {} 00:21:57.753 } 00:21:57.753 ] 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.753 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.011 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:21:58.011 "name": "Existed_Raid", 00:21:58.011 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:21:58.011 "strip_size_kb": 64, 00:21:58.011 "state": "configuring", 00:21:58.011 "raid_level": "raid0", 00:21:58.011 "superblock": true, 00:21:58.011 "num_base_bdevs": 4, 00:21:58.011 "num_base_bdevs_discovered": 3, 
00:21:58.011 "num_base_bdevs_operational": 4, 00:21:58.011 "base_bdevs_list": [ 00:21:58.011 { 00:21:58.011 "name": "BaseBdev1", 00:21:58.011 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:21:58.011 "is_configured": true, 00:21:58.011 "data_offset": 2048, 00:21:58.011 "data_size": 63488 00:21:58.011 }, 00:21:58.011 { 00:21:58.011 "name": null, 00:21:58.011 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:21:58.011 "is_configured": false, 00:21:58.011 "data_offset": 2048, 00:21:58.011 "data_size": 63488 00:21:58.011 }, 00:21:58.011 { 00:21:58.011 "name": "BaseBdev3", 00:21:58.011 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:21:58.011 "is_configured": true, 00:21:58.011 "data_offset": 2048, 00:21:58.011 "data_size": 63488 00:21:58.011 }, 00:21:58.011 { 00:21:58.011 "name": "BaseBdev4", 00:21:58.011 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:21:58.011 "is_configured": true, 00:21:58.011 "data_offset": 2048, 00:21:58.011 "data_size": 63488 00:21:58.011 } 00:21:58.011 ] 00:21:58.011 }' 00:21:58.011 12:03:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:58.011 12:03:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:21:58.577 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.577 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:21:58.834 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:21:58.834 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:21:59.092 [2024-07-21 12:03:57.940904] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.350 12:03:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:59.350 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:21:59.350 "name": "Existed_Raid", 00:21:59.350 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:21:59.350 "strip_size_kb": 64, 00:21:59.350 "state": "configuring", 00:21:59.350 "raid_level": "raid0", 00:21:59.350 "superblock": true, 00:21:59.350 "num_base_bdevs": 4, 00:21:59.350 "num_base_bdevs_discovered": 2, 00:21:59.350 "num_base_bdevs_operational": 4, 00:21:59.350 "base_bdevs_list": [ 00:21:59.350 { 00:21:59.350 "name": "BaseBdev1", 00:21:59.350 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:21:59.350 "is_configured": true, 00:21:59.350 "data_offset": 2048, 00:21:59.350 "data_size": 63488 00:21:59.350 }, 00:21:59.350 { 00:21:59.351 "name": null, 00:21:59.351 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:21:59.351 "is_configured": false, 00:21:59.351 "data_offset": 2048, 00:21:59.351 "data_size": 63488 00:21:59.351 }, 00:21:59.351 { 00:21:59.351 "name": null, 00:21:59.351 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:21:59.351 "is_configured": false, 00:21:59.351 "data_offset": 2048, 00:21:59.351 "data_size": 63488 00:21:59.351 }, 00:21:59.351 { 00:21:59.351 "name": "BaseBdev4", 00:21:59.351 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:21:59.351 "is_configured": true, 00:21:59.351 "data_offset": 2048, 00:21:59.351 "data_size": 63488 00:21:59.351 } 00:21:59.351 ] 00:21:59.351 }' 00:21:59.351 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:21:59.351 12:03:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:00.283 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.283 12:03:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:00.283 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:22:00.283 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:22:00.542 [2024-07-21 12:03:59.353280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:00.542 12:03:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.542 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:01.107 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:01.107 "name": "Existed_Raid", 00:22:01.107 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:22:01.107 "strip_size_kb": 64, 00:22:01.107 "state": "configuring", 00:22:01.107 "raid_level": "raid0", 00:22:01.107 "superblock": true, 00:22:01.107 "num_base_bdevs": 4, 00:22:01.107 "num_base_bdevs_discovered": 3, 00:22:01.107 "num_base_bdevs_operational": 4, 00:22:01.107 "base_bdevs_list": [ 00:22:01.107 { 00:22:01.107 "name": "BaseBdev1", 00:22:01.107 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:22:01.107 "is_configured": true, 00:22:01.107 "data_offset": 2048, 00:22:01.107 "data_size": 63488 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "name": null, 00:22:01.107 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:22:01.107 "is_configured": false, 00:22:01.107 "data_offset": 2048, 00:22:01.107 "data_size": 63488 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "name": "BaseBdev3", 00:22:01.107 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:22:01.107 "is_configured": true, 00:22:01.107 "data_offset": 2048, 00:22:01.107 "data_size": 63488 00:22:01.107 }, 00:22:01.107 { 00:22:01.107 "name": "BaseBdev4", 00:22:01.107 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:22:01.107 "is_configured": true, 00:22:01.107 "data_offset": 2048, 00:22:01.107 "data_size": 63488 00:22:01.107 } 00:22:01.107 ] 00:22:01.107 }' 00:22:01.107 12:03:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:01.107 12:03:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:01.702 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:22:01.702 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.959 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:22:01.959 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:02.235 [2024-07-21 12:04:00.889614] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:02.235 12:04:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.235 12:04:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:02.507 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:02.507 "name": "Existed_Raid", 00:22:02.507 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:22:02.507 "strip_size_kb": 64, 00:22:02.507 "state": "configuring", 00:22:02.507 "raid_level": "raid0", 00:22:02.507 "superblock": true, 00:22:02.507 "num_base_bdevs": 4, 00:22:02.507 "num_base_bdevs_discovered": 2, 00:22:02.507 "num_base_bdevs_operational": 4, 00:22:02.507 "base_bdevs_list": [ 00:22:02.507 { 00:22:02.507 "name": null, 00:22:02.507 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:22:02.507 "is_configured": false, 00:22:02.507 "data_offset": 2048, 00:22:02.507 "data_size": 63488 00:22:02.507 }, 00:22:02.507 { 00:22:02.507 "name": null, 00:22:02.507 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:22:02.507 "is_configured": false, 00:22:02.507 "data_offset": 2048, 00:22:02.507 "data_size": 63488 00:22:02.507 }, 00:22:02.507 { 00:22:02.507 "name": "BaseBdev3", 00:22:02.507 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:22:02.507 "is_configured": true, 00:22:02.507 "data_offset": 2048, 00:22:02.507 "data_size": 63488 00:22:02.507 }, 00:22:02.507 { 00:22:02.507 "name": "BaseBdev4", 00:22:02.507 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:22:02.507 "is_configured": true, 00:22:02.507 "data_offset": 2048, 00:22:02.507 "data_size": 63488 00:22:02.507 } 00:22:02.507 ] 00:22:02.507 }' 00:22:02.507 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:02.507 12:04:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:03.074 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.074 12:04:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:22:03.332 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:22:03.332 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:22:03.590 [2024-07-21 12:04:02.353901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 
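For reference, the verify_raid_bdev_state check being traced here reduces to a short rpc.py + jq sequence. The sketch below is illustrative only and is not part of the captured output; it assumes the test's RPC socket at /var/tmp/spdk-raid.sock is still listening, and the expected field values simply mirror the Existed_Raid JSON dumped above.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Dump all raid bdevs and keep only the Existed_Raid entry, as bdev_raid.sh@126 does.
info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")')
# Compare the fields the helper verifies: state, raid level, strip size and base bdev count.
jq -e '.state == "configuring" and .raid_level == "raid0"
       and .strip_size_kb == 64 and .num_base_bdevs == 4' <<< "$info" >/dev/null \
    || echo "Existed_Raid is not in the expected configuring state"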
00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:03.590 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:03.847 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:03.847 "name": "Existed_Raid", 00:22:03.847 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:22:03.847 "strip_size_kb": 64, 00:22:03.847 "state": "configuring", 00:22:03.847 "raid_level": "raid0", 00:22:03.847 "superblock": true, 00:22:03.847 "num_base_bdevs": 4, 00:22:03.847 "num_base_bdevs_discovered": 3, 00:22:03.847 "num_base_bdevs_operational": 4, 00:22:03.847 "base_bdevs_list": [ 00:22:03.847 { 00:22:03.847 "name": null, 00:22:03.847 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:22:03.847 "is_configured": false, 00:22:03.847 "data_offset": 2048, 00:22:03.847 "data_size": 63488 00:22:03.847 }, 00:22:03.847 { 00:22:03.847 "name": "BaseBdev2", 00:22:03.847 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:22:03.847 "is_configured": true, 00:22:03.847 "data_offset": 2048, 00:22:03.847 "data_size": 63488 00:22:03.847 }, 00:22:03.847 { 00:22:03.847 "name": "BaseBdev3", 00:22:03.847 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:22:03.847 "is_configured": true, 00:22:03.847 "data_offset": 2048, 00:22:03.847 "data_size": 63488 00:22:03.847 }, 00:22:03.847 { 00:22:03.847 "name": "BaseBdev4", 00:22:03.847 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:22:03.847 "is_configured": true, 00:22:03.847 "data_offset": 2048, 00:22:03.847 "data_size": 63488 00:22:03.847 } 00:22:03.847 ] 00:22:03.847 }' 00:22:03.847 12:04:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:03.847 12:04:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:04.412 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:22:04.412 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.669 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:22:04.669 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.669 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:22:04.926 12:04:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b NewBaseBdev -u 8399dfd5-d668-4fc9-abc5-0c93e3d4a231 00:22:05.182 [2024-07-21 12:04:04.019493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:22:05.182 [2024-07-21 12:04:04.019784] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:22:05.182 [2024-07-21 12:04:04.019802] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:05.182 [2024-07-21 12:04:04.019884] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:05.182 [2024-07-21 12:04:04.020280] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:22:05.182 [2024-07-21 12:04:04.020307] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009080 00:22:05.182 [2024-07-21 12:04:04.020432] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.182 NewBaseBdev 00:22:05.182 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:22:05.182 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:22:05.182 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:05.182 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:22:05.182 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:05.182 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:05.182 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:05.438 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:22:05.696 [ 00:22:05.696 { 00:22:05.696 "name": "NewBaseBdev", 00:22:05.696 "aliases": [ 00:22:05.696 "8399dfd5-d668-4fc9-abc5-0c93e3d4a231" 00:22:05.696 ], 00:22:05.696 "product_name": "Malloc disk", 00:22:05.696 "block_size": 512, 00:22:05.696 "num_blocks": 65536, 00:22:05.696 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:22:05.696 "assigned_rate_limits": { 00:22:05.696 "rw_ios_per_sec": 0, 00:22:05.696 "rw_mbytes_per_sec": 0, 00:22:05.696 "r_mbytes_per_sec": 0, 00:22:05.696 "w_mbytes_per_sec": 0 00:22:05.696 }, 00:22:05.696 "claimed": true, 00:22:05.696 "claim_type": "exclusive_write", 00:22:05.696 "zoned": false, 00:22:05.696 "supported_io_types": { 00:22:05.696 "read": true, 00:22:05.696 "write": true, 00:22:05.696 "unmap": true, 00:22:05.696 "write_zeroes": true, 00:22:05.696 "flush": true, 00:22:05.696 "reset": true, 00:22:05.696 "compare": false, 00:22:05.696 "compare_and_write": false, 00:22:05.696 "abort": true, 00:22:05.696 "nvme_admin": false, 00:22:05.696 "nvme_io": false 00:22:05.696 }, 00:22:05.696 "memory_domains": [ 00:22:05.696 { 00:22:05.696 "dma_device_id": "system", 00:22:05.696 "dma_device_type": 1 00:22:05.696 }, 00:22:05.696 { 00:22:05.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:05.696 "dma_device_type": 2 00:22:05.696 } 00:22:05.696 ], 00:22:05.696 "driver_specific": {} 00:22:05.696 } 00:22:05.696 ] 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 
-- # return 0 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.696 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:05.954 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:05.954 "name": "Existed_Raid", 00:22:05.954 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:22:05.954 "strip_size_kb": 64, 00:22:05.954 "state": "online", 00:22:05.954 "raid_level": "raid0", 00:22:05.954 "superblock": true, 00:22:05.954 "num_base_bdevs": 4, 00:22:05.954 "num_base_bdevs_discovered": 4, 00:22:05.954 "num_base_bdevs_operational": 4, 00:22:05.954 "base_bdevs_list": [ 00:22:05.954 { 00:22:05.954 "name": "NewBaseBdev", 00:22:05.954 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:22:05.954 "is_configured": true, 00:22:05.954 "data_offset": 2048, 00:22:05.954 "data_size": 63488 00:22:05.954 }, 00:22:05.954 { 00:22:05.954 "name": "BaseBdev2", 00:22:05.954 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:22:05.954 "is_configured": true, 00:22:05.954 "data_offset": 2048, 00:22:05.954 "data_size": 63488 00:22:05.954 }, 00:22:05.954 { 00:22:05.954 "name": "BaseBdev3", 00:22:05.954 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:22:05.954 "is_configured": true, 00:22:05.954 "data_offset": 2048, 00:22:05.954 "data_size": 63488 00:22:05.954 }, 00:22:05.954 { 00:22:05.954 "name": "BaseBdev4", 00:22:05.954 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:22:05.954 "is_configured": true, 00:22:05.954 "data_offset": 2048, 00:22:05.954 "data_size": 63488 00:22:05.954 } 00:22:05.954 ] 00:22:05.954 }' 00:22:05.954 12:04:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:05.954 12:04:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:06.888 12:04:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:06.888 [2024-07-21 12:04:05.676286] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:06.888 "name": "Existed_Raid", 00:22:06.888 "aliases": [ 00:22:06.888 "ac6390a0-3573-47fa-9080-b6b19ec9917f" 00:22:06.888 ], 00:22:06.888 "product_name": "Raid Volume", 00:22:06.888 "block_size": 512, 00:22:06.888 "num_blocks": 253952, 00:22:06.888 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:22:06.888 "assigned_rate_limits": { 00:22:06.888 "rw_ios_per_sec": 0, 00:22:06.888 "rw_mbytes_per_sec": 0, 00:22:06.888 "r_mbytes_per_sec": 0, 00:22:06.888 "w_mbytes_per_sec": 0 00:22:06.888 }, 00:22:06.888 "claimed": false, 00:22:06.888 "zoned": false, 00:22:06.888 "supported_io_types": { 00:22:06.888 "read": true, 00:22:06.888 "write": true, 00:22:06.888 "unmap": true, 00:22:06.888 "write_zeroes": true, 00:22:06.888 "flush": true, 00:22:06.888 "reset": true, 00:22:06.888 "compare": false, 00:22:06.888 "compare_and_write": false, 00:22:06.888 "abort": false, 00:22:06.888 "nvme_admin": false, 00:22:06.888 "nvme_io": false 00:22:06.888 }, 00:22:06.888 "memory_domains": [ 00:22:06.888 { 00:22:06.888 "dma_device_id": "system", 00:22:06.888 "dma_device_type": 1 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.888 "dma_device_type": 2 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "dma_device_id": "system", 00:22:06.888 "dma_device_type": 1 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.888 "dma_device_type": 2 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "dma_device_id": "system", 00:22:06.888 "dma_device_type": 1 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.888 "dma_device_type": 2 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "dma_device_id": "system", 00:22:06.888 "dma_device_type": 1 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:06.888 "dma_device_type": 2 00:22:06.888 } 00:22:06.888 ], 00:22:06.888 "driver_specific": { 00:22:06.888 "raid": { 00:22:06.888 "uuid": "ac6390a0-3573-47fa-9080-b6b19ec9917f", 00:22:06.888 "strip_size_kb": 64, 00:22:06.888 "state": "online", 00:22:06.888 "raid_level": "raid0", 00:22:06.888 "superblock": true, 00:22:06.888 "num_base_bdevs": 4, 00:22:06.888 "num_base_bdevs_discovered": 4, 00:22:06.888 "num_base_bdevs_operational": 4, 00:22:06.888 "base_bdevs_list": [ 00:22:06.888 { 00:22:06.888 "name": "NewBaseBdev", 00:22:06.888 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:22:06.888 "is_configured": true, 00:22:06.888 "data_offset": 2048, 00:22:06.888 "data_size": 63488 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "name": "BaseBdev2", 00:22:06.888 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:22:06.888 "is_configured": true, 00:22:06.888 "data_offset": 2048, 
00:22:06.888 "data_size": 63488 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "name": "BaseBdev3", 00:22:06.888 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:22:06.888 "is_configured": true, 00:22:06.888 "data_offset": 2048, 00:22:06.888 "data_size": 63488 00:22:06.888 }, 00:22:06.888 { 00:22:06.888 "name": "BaseBdev4", 00:22:06.888 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:22:06.888 "is_configured": true, 00:22:06.888 "data_offset": 2048, 00:22:06.888 "data_size": 63488 00:22:06.888 } 00:22:06.888 ] 00:22:06.888 } 00:22:06.888 } 00:22:06.888 }' 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:22:06.888 BaseBdev2 00:22:06.888 BaseBdev3 00:22:06.888 BaseBdev4' 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:22:06.888 12:04:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:07.147 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:07.147 "name": "NewBaseBdev", 00:22:07.147 "aliases": [ 00:22:07.147 "8399dfd5-d668-4fc9-abc5-0c93e3d4a231" 00:22:07.147 ], 00:22:07.147 "product_name": "Malloc disk", 00:22:07.147 "block_size": 512, 00:22:07.147 "num_blocks": 65536, 00:22:07.147 "uuid": "8399dfd5-d668-4fc9-abc5-0c93e3d4a231", 00:22:07.147 "assigned_rate_limits": { 00:22:07.147 "rw_ios_per_sec": 0, 00:22:07.147 "rw_mbytes_per_sec": 0, 00:22:07.147 "r_mbytes_per_sec": 0, 00:22:07.147 "w_mbytes_per_sec": 0 00:22:07.147 }, 00:22:07.147 "claimed": true, 00:22:07.147 "claim_type": "exclusive_write", 00:22:07.147 "zoned": false, 00:22:07.147 "supported_io_types": { 00:22:07.147 "read": true, 00:22:07.147 "write": true, 00:22:07.147 "unmap": true, 00:22:07.147 "write_zeroes": true, 00:22:07.147 "flush": true, 00:22:07.147 "reset": true, 00:22:07.147 "compare": false, 00:22:07.147 "compare_and_write": false, 00:22:07.147 "abort": true, 00:22:07.147 "nvme_admin": false, 00:22:07.147 "nvme_io": false 00:22:07.147 }, 00:22:07.147 "memory_domains": [ 00:22:07.147 { 00:22:07.147 "dma_device_id": "system", 00:22:07.147 "dma_device_type": 1 00:22:07.147 }, 00:22:07.147 { 00:22:07.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.147 "dma_device_type": 2 00:22:07.147 } 00:22:07.147 ], 00:22:07.147 "driver_specific": {} 00:22:07.147 }' 00:22:07.405 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:07.406 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:07.406 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:07.406 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:07.406 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:07.406 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:07.406 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:07.406 12:04:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:07.664 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:07.664 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:07.664 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:07.664 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:07.664 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:07.664 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:07.664 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:07.923 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:07.923 "name": "BaseBdev2", 00:22:07.923 "aliases": [ 00:22:07.923 "749b90cb-eb75-4394-8f1b-5741da579a88" 00:22:07.923 ], 00:22:07.923 "product_name": "Malloc disk", 00:22:07.923 "block_size": 512, 00:22:07.923 "num_blocks": 65536, 00:22:07.923 "uuid": "749b90cb-eb75-4394-8f1b-5741da579a88", 00:22:07.923 "assigned_rate_limits": { 00:22:07.923 "rw_ios_per_sec": 0, 00:22:07.923 "rw_mbytes_per_sec": 0, 00:22:07.923 "r_mbytes_per_sec": 0, 00:22:07.923 "w_mbytes_per_sec": 0 00:22:07.923 }, 00:22:07.923 "claimed": true, 00:22:07.923 "claim_type": "exclusive_write", 00:22:07.923 "zoned": false, 00:22:07.923 "supported_io_types": { 00:22:07.923 "read": true, 00:22:07.923 "write": true, 00:22:07.923 "unmap": true, 00:22:07.923 "write_zeroes": true, 00:22:07.923 "flush": true, 00:22:07.923 "reset": true, 00:22:07.923 "compare": false, 00:22:07.923 "compare_and_write": false, 00:22:07.923 "abort": true, 00:22:07.923 "nvme_admin": false, 00:22:07.923 "nvme_io": false 00:22:07.923 }, 00:22:07.923 "memory_domains": [ 00:22:07.923 { 00:22:07.923 "dma_device_id": "system", 00:22:07.923 "dma_device_type": 1 00:22:07.923 }, 00:22:07.923 { 00:22:07.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:07.923 "dma_device_type": 2 00:22:07.923 } 00:22:07.923 ], 00:22:07.923 "driver_specific": {} 00:22:07.923 }' 00:22:07.923 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:07.923 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:07.923 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:07.923 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:08.182 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:08.182 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:08.182 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:08.182 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:08.182 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:08.182 12:04:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:08.182 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:08.441 12:04:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:08.441 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:08.441 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:08.441 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:08.699 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:08.699 "name": "BaseBdev3", 00:22:08.699 "aliases": [ 00:22:08.699 "edda7e1d-068a-4bd8-9775-b639d1c6885e" 00:22:08.699 ], 00:22:08.699 "product_name": "Malloc disk", 00:22:08.699 "block_size": 512, 00:22:08.699 "num_blocks": 65536, 00:22:08.699 "uuid": "edda7e1d-068a-4bd8-9775-b639d1c6885e", 00:22:08.699 "assigned_rate_limits": { 00:22:08.699 "rw_ios_per_sec": 0, 00:22:08.699 "rw_mbytes_per_sec": 0, 00:22:08.699 "r_mbytes_per_sec": 0, 00:22:08.699 "w_mbytes_per_sec": 0 00:22:08.699 }, 00:22:08.699 "claimed": true, 00:22:08.699 "claim_type": "exclusive_write", 00:22:08.699 "zoned": false, 00:22:08.699 "supported_io_types": { 00:22:08.699 "read": true, 00:22:08.699 "write": true, 00:22:08.699 "unmap": true, 00:22:08.699 "write_zeroes": true, 00:22:08.699 "flush": true, 00:22:08.699 "reset": true, 00:22:08.699 "compare": false, 00:22:08.699 "compare_and_write": false, 00:22:08.699 "abort": true, 00:22:08.699 "nvme_admin": false, 00:22:08.699 "nvme_io": false 00:22:08.699 }, 00:22:08.699 "memory_domains": [ 00:22:08.699 { 00:22:08.699 "dma_device_id": "system", 00:22:08.699 "dma_device_type": 1 00:22:08.699 }, 00:22:08.699 { 00:22:08.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:08.699 "dma_device_type": 2 00:22:08.699 } 00:22:08.699 ], 00:22:08.699 "driver_specific": {} 00:22:08.699 }' 00:22:08.699 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:08.699 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:08.699 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:08.699 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:08.699 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:08.699 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:08.699 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:08.958 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:08.958 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:08.958 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:08.958 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:08.958 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:08.958 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:08.958 12:04:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:08.958 12:04:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:09.217 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:09.217 "name": "BaseBdev4", 00:22:09.217 "aliases": [ 00:22:09.217 "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116" 00:22:09.217 ], 00:22:09.217 "product_name": "Malloc disk", 00:22:09.217 "block_size": 512, 00:22:09.217 "num_blocks": 65536, 00:22:09.217 "uuid": "ee8d5c51-e9ff-43fa-acb9-c7a5a9294116", 00:22:09.217 "assigned_rate_limits": { 00:22:09.217 "rw_ios_per_sec": 0, 00:22:09.217 "rw_mbytes_per_sec": 0, 00:22:09.217 "r_mbytes_per_sec": 0, 00:22:09.217 "w_mbytes_per_sec": 0 00:22:09.217 }, 00:22:09.217 "claimed": true, 00:22:09.217 "claim_type": "exclusive_write", 00:22:09.217 "zoned": false, 00:22:09.217 "supported_io_types": { 00:22:09.217 "read": true, 00:22:09.217 "write": true, 00:22:09.217 "unmap": true, 00:22:09.217 "write_zeroes": true, 00:22:09.217 "flush": true, 00:22:09.217 "reset": true, 00:22:09.217 "compare": false, 00:22:09.217 "compare_and_write": false, 00:22:09.217 "abort": true, 00:22:09.217 "nvme_admin": false, 00:22:09.217 "nvme_io": false 00:22:09.217 }, 00:22:09.217 "memory_domains": [ 00:22:09.217 { 00:22:09.217 "dma_device_id": "system", 00:22:09.217 "dma_device_type": 1 00:22:09.217 }, 00:22:09.217 { 00:22:09.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:09.217 "dma_device_type": 2 00:22:09.217 } 00:22:09.217 ], 00:22:09.217 "driver_specific": {} 00:22:09.217 }' 00:22:09.217 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.217 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:09.476 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:09.476 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.476 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:09.476 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:09.476 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.476 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:09.476 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:09.476 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.735 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:09.735 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:09.735 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:09.994 [2024-07-21 12:04:08.682056] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:09.994 [2024-07-21 12:04:08.682131] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:09.994 [2024-07-21 12:04:08.682250] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:09.994 [2024-07-21 12:04:08.682353] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:09.994 [2024-07-21 12:04:08.682367] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name Existed_Raid, state offline 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 145743 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 145743 ']' 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 145743 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 145743 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 145743' 00:22:09.994 killing process with pid 145743 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 145743 00:22:09.994 [2024-07-21 12:04:08.719228] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:09.994 12:04:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 145743 00:22:09.994 [2024-07-21 12:04:08.762290] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:10.252 12:04:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:22:10.252 00:22:10.252 real 0m34.210s 00:22:10.252 user 1m5.097s 00:22:10.252 sys 0m4.068s 00:22:10.252 12:04:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:10.252 12:04:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:22:10.252 ************************************ 00:22:10.252 END TEST raid_state_function_test_sb 00:22:10.252 ************************************ 00:22:10.252 12:04:09 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:22:10.252 12:04:09 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:22:10.252 12:04:09 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:10.252 12:04:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:10.252 ************************************ 00:22:10.252 START TEST raid_superblock_test 00:22:10.252 ************************************ 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid0 4 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # 
base_bdevs_pt_uuid=() 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:22:10.252 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=146845 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 146845 /var/tmp/spdk-raid.sock 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 146845 ']' 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:10.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:10.253 12:04:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:10.511 [2024-07-21 12:04:09.130485] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
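The bdev_svc app launched above listens on /var/tmp/spdk-raid.sock with bdev_raid debug logging enabled; the superblock test then layers passthru bdevs over malloc bdevs and combines them into a raid0 volume carrying an on-disk superblock. As a minimal sketch (illustrative, not part of the captured output; commands and arguments are taken from the trace that follows), the equivalent manual RPC sequence is:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Create four 32 MiB malloc bdevs with 512-byte blocks and wrap each in a passthru bdev (pt1..pt4).
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b malloc$i
    $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done
# Assemble the raid0 volume with a 64 KiB strip size and a superblock (-s).
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
# Confirm it reports as online, mirroring verify_raid_bdev_state raid_bdev1 online raid0 64 4.
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'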
00:22:10.511 [2024-07-21 12:04:09.131453] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146845 ] 00:22:10.511 [2024-07-21 12:04:09.293498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.770 [2024-07-21 12:04:09.397766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.770 [2024-07-21 12:04:09.457685] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:11.337 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:11.596 malloc1 00:22:11.596 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:11.855 [2024-07-21 12:04:10.616863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:11.855 [2024-07-21 12:04:10.617012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.855 [2024-07-21 12:04:10.617067] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:22:11.855 [2024-07-21 12:04:10.617119] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.855 [2024-07-21 12:04:10.619992] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.855 [2024-07-21 12:04:10.620065] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:11.855 pt1 00:22:11.855 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:11.855 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:11.855 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:22:11.855 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:22:11.855 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:11.855 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:22:11.855 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:11.855 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:11.855 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:12.115 malloc2 00:22:12.115 12:04:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:12.374 [2024-07-21 12:04:11.132732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:12.374 [2024-07-21 12:04:11.132876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.374 [2024-07-21 12:04:11.132943] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:22:12.374 [2024-07-21 12:04:11.132986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.374 [2024-07-21 12:04:11.135625] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.374 [2024-07-21 12:04:11.135680] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:12.374 pt2 00:22:12.374 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:12.374 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:12.374 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:22:12.374 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:22:12.374 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:12.374 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:12.374 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:12.374 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:12.374 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:12.633 malloc3 00:22:12.633 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:12.892 [2024-07-21 12:04:11.651464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:12.892 [2024-07-21 12:04:11.651599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.892 [2024-07-21 12:04:11.651656] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:22:12.892 [2024-07-21 12:04:11.651713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.892 [2024-07-21 12:04:11.654258] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.892 [2024-07-21 12:04:11.654323] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:12.892 pt3 00:22:12.892 12:04:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:12.892 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:12.892 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:22:12.892 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:22:12.892 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:22:12.892 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:12.892 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:22:12.892 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:12.892 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:22:13.151 malloc4 00:22:13.151 12:04:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:13.410 [2024-07-21 12:04:12.126891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:13.410 [2024-07-21 12:04:12.127062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.410 [2024-07-21 12:04:12.127106] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:22:13.410 [2024-07-21 12:04:12.127160] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.410 [2024-07-21 12:04:12.129794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.410 [2024-07-21 12:04:12.129875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:13.410 pt4 00:22:13.410 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:22:13.410 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:22:13.410 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:22:13.669 [2024-07-21 12:04:12.359038] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:13.670 [2024-07-21 12:04:12.361370] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:13.670 [2024-07-21 12:04:12.361472] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:13.670 [2024-07-21 12:04:12.361539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:13.670 [2024-07-21 12:04:12.361826] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:22:13.670 [2024-07-21 12:04:12.361854] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:13.670 [2024-07-21 12:04:12.362050] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:13.670 [2024-07-21 12:04:12.362519] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:22:13.670 [2024-07-21 12:04:12.362546] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:22:13.670 [2024-07-21 12:04:12.362754] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.670 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.928 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:13.928 "name": "raid_bdev1", 00:22:13.928 "uuid": "d8eec0ca-b884-4dc1-a666-415a04bfa10b", 00:22:13.928 "strip_size_kb": 64, 00:22:13.928 "state": "online", 00:22:13.928 "raid_level": "raid0", 00:22:13.928 "superblock": true, 00:22:13.928 "num_base_bdevs": 4, 00:22:13.928 "num_base_bdevs_discovered": 4, 00:22:13.928 "num_base_bdevs_operational": 4, 00:22:13.928 "base_bdevs_list": [ 00:22:13.928 { 00:22:13.928 "name": "pt1", 00:22:13.928 "uuid": "7c052d85-f015-5bb7-b747-6a2b1c88143f", 00:22:13.928 "is_configured": true, 00:22:13.928 "data_offset": 2048, 00:22:13.928 "data_size": 63488 00:22:13.928 }, 00:22:13.928 { 00:22:13.928 "name": "pt2", 00:22:13.928 "uuid": "b2ea1213-7d5f-5466-8658-04347f09febc", 00:22:13.928 "is_configured": true, 00:22:13.928 "data_offset": 2048, 00:22:13.928 "data_size": 63488 00:22:13.928 }, 00:22:13.928 { 00:22:13.928 "name": "pt3", 00:22:13.928 "uuid": "c2f41704-e072-5dd7-9152-ef76e4e7904e", 00:22:13.928 "is_configured": true, 00:22:13.928 "data_offset": 2048, 00:22:13.928 "data_size": 63488 00:22:13.928 }, 00:22:13.928 { 00:22:13.928 "name": "pt4", 00:22:13.928 "uuid": "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9", 00:22:13.928 "is_configured": true, 00:22:13.928 "data_offset": 2048, 00:22:13.928 "data_size": 63488 00:22:13.928 } 00:22:13.928 ] 00:22:13.928 }' 00:22:13.928 12:04:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:13.928 12:04:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:14.492 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:22:14.492 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:14.492 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:14.492 
12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:14.492 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:14.492 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:14.492 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:14.492 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:14.750 [2024-07-21 12:04:13.475552] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:14.750 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:14.750 "name": "raid_bdev1", 00:22:14.750 "aliases": [ 00:22:14.750 "d8eec0ca-b884-4dc1-a666-415a04bfa10b" 00:22:14.750 ], 00:22:14.750 "product_name": "Raid Volume", 00:22:14.750 "block_size": 512, 00:22:14.750 "num_blocks": 253952, 00:22:14.750 "uuid": "d8eec0ca-b884-4dc1-a666-415a04bfa10b", 00:22:14.750 "assigned_rate_limits": { 00:22:14.750 "rw_ios_per_sec": 0, 00:22:14.750 "rw_mbytes_per_sec": 0, 00:22:14.750 "r_mbytes_per_sec": 0, 00:22:14.751 "w_mbytes_per_sec": 0 00:22:14.751 }, 00:22:14.751 "claimed": false, 00:22:14.751 "zoned": false, 00:22:14.751 "supported_io_types": { 00:22:14.751 "read": true, 00:22:14.751 "write": true, 00:22:14.751 "unmap": true, 00:22:14.751 "write_zeroes": true, 00:22:14.751 "flush": true, 00:22:14.751 "reset": true, 00:22:14.751 "compare": false, 00:22:14.751 "compare_and_write": false, 00:22:14.751 "abort": false, 00:22:14.751 "nvme_admin": false, 00:22:14.751 "nvme_io": false 00:22:14.751 }, 00:22:14.751 "memory_domains": [ 00:22:14.751 { 00:22:14.751 "dma_device_id": "system", 00:22:14.751 "dma_device_type": 1 00:22:14.751 }, 00:22:14.751 { 00:22:14.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.751 "dma_device_type": 2 00:22:14.751 }, 00:22:14.751 { 00:22:14.751 "dma_device_id": "system", 00:22:14.751 "dma_device_type": 1 00:22:14.751 }, 00:22:14.751 { 00:22:14.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.751 "dma_device_type": 2 00:22:14.751 }, 00:22:14.751 { 00:22:14.751 "dma_device_id": "system", 00:22:14.751 "dma_device_type": 1 00:22:14.751 }, 00:22:14.751 { 00:22:14.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.751 "dma_device_type": 2 00:22:14.751 }, 00:22:14.751 { 00:22:14.751 "dma_device_id": "system", 00:22:14.751 "dma_device_type": 1 00:22:14.751 }, 00:22:14.751 { 00:22:14.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:14.751 "dma_device_type": 2 00:22:14.751 } 00:22:14.751 ], 00:22:14.751 "driver_specific": { 00:22:14.751 "raid": { 00:22:14.751 "uuid": "d8eec0ca-b884-4dc1-a666-415a04bfa10b", 00:22:14.751 "strip_size_kb": 64, 00:22:14.751 "state": "online", 00:22:14.751 "raid_level": "raid0", 00:22:14.751 "superblock": true, 00:22:14.751 "num_base_bdevs": 4, 00:22:14.751 "num_base_bdevs_discovered": 4, 00:22:14.751 "num_base_bdevs_operational": 4, 00:22:14.751 "base_bdevs_list": [ 00:22:14.751 { 00:22:14.751 "name": "pt1", 00:22:14.751 "uuid": "7c052d85-f015-5bb7-b747-6a2b1c88143f", 00:22:14.751 "is_configured": true, 00:22:14.751 "data_offset": 2048, 00:22:14.751 "data_size": 63488 00:22:14.751 }, 00:22:14.751 { 00:22:14.751 "name": "pt2", 00:22:14.751 "uuid": "b2ea1213-7d5f-5466-8658-04347f09febc", 00:22:14.751 "is_configured": true, 00:22:14.751 "data_offset": 2048, 00:22:14.751 "data_size": 63488 00:22:14.751 }, 00:22:14.751 
{ 00:22:14.751 "name": "pt3", 00:22:14.751 "uuid": "c2f41704-e072-5dd7-9152-ef76e4e7904e", 00:22:14.751 "is_configured": true, 00:22:14.751 "data_offset": 2048, 00:22:14.751 "data_size": 63488 00:22:14.751 }, 00:22:14.751 { 00:22:14.751 "name": "pt4", 00:22:14.751 "uuid": "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9", 00:22:14.751 "is_configured": true, 00:22:14.751 "data_offset": 2048, 00:22:14.751 "data_size": 63488 00:22:14.751 } 00:22:14.751 ] 00:22:14.751 } 00:22:14.751 } 00:22:14.751 }' 00:22:14.751 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:14.751 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:14.751 pt2 00:22:14.751 pt3 00:22:14.751 pt4' 00:22:14.751 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:14.751 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:14.751 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.020 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.020 "name": "pt1", 00:22:15.020 "aliases": [ 00:22:15.020 "7c052d85-f015-5bb7-b747-6a2b1c88143f" 00:22:15.020 ], 00:22:15.020 "product_name": "passthru", 00:22:15.020 "block_size": 512, 00:22:15.020 "num_blocks": 65536, 00:22:15.020 "uuid": "7c052d85-f015-5bb7-b747-6a2b1c88143f", 00:22:15.020 "assigned_rate_limits": { 00:22:15.020 "rw_ios_per_sec": 0, 00:22:15.020 "rw_mbytes_per_sec": 0, 00:22:15.020 "r_mbytes_per_sec": 0, 00:22:15.020 "w_mbytes_per_sec": 0 00:22:15.020 }, 00:22:15.020 "claimed": true, 00:22:15.020 "claim_type": "exclusive_write", 00:22:15.020 "zoned": false, 00:22:15.020 "supported_io_types": { 00:22:15.020 "read": true, 00:22:15.020 "write": true, 00:22:15.020 "unmap": true, 00:22:15.020 "write_zeroes": true, 00:22:15.020 "flush": true, 00:22:15.020 "reset": true, 00:22:15.020 "compare": false, 00:22:15.020 "compare_and_write": false, 00:22:15.020 "abort": true, 00:22:15.020 "nvme_admin": false, 00:22:15.020 "nvme_io": false 00:22:15.020 }, 00:22:15.020 "memory_domains": [ 00:22:15.020 { 00:22:15.020 "dma_device_id": "system", 00:22:15.020 "dma_device_type": 1 00:22:15.020 }, 00:22:15.020 { 00:22:15.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.020 "dma_device_type": 2 00:22:15.020 } 00:22:15.020 ], 00:22:15.020 "driver_specific": { 00:22:15.020 "passthru": { 00:22:15.020 "name": "pt1", 00:22:15.020 "base_bdev_name": "malloc1" 00:22:15.020 } 00:22:15.020 } 00:22:15.020 }' 00:22:15.020 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.020 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.290 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.290 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.290 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.290 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.290 12:04:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.290 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.290 12:04:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:15.290 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.290 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:15.547 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:15.547 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:15.547 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:15.547 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:15.804 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:15.804 "name": "pt2", 00:22:15.804 "aliases": [ 00:22:15.804 "b2ea1213-7d5f-5466-8658-04347f09febc" 00:22:15.804 ], 00:22:15.804 "product_name": "passthru", 00:22:15.804 "block_size": 512, 00:22:15.804 "num_blocks": 65536, 00:22:15.804 "uuid": "b2ea1213-7d5f-5466-8658-04347f09febc", 00:22:15.804 "assigned_rate_limits": { 00:22:15.804 "rw_ios_per_sec": 0, 00:22:15.804 "rw_mbytes_per_sec": 0, 00:22:15.804 "r_mbytes_per_sec": 0, 00:22:15.804 "w_mbytes_per_sec": 0 00:22:15.804 }, 00:22:15.804 "claimed": true, 00:22:15.804 "claim_type": "exclusive_write", 00:22:15.804 "zoned": false, 00:22:15.804 "supported_io_types": { 00:22:15.804 "read": true, 00:22:15.804 "write": true, 00:22:15.804 "unmap": true, 00:22:15.804 "write_zeroes": true, 00:22:15.804 "flush": true, 00:22:15.804 "reset": true, 00:22:15.804 "compare": false, 00:22:15.804 "compare_and_write": false, 00:22:15.804 "abort": true, 00:22:15.804 "nvme_admin": false, 00:22:15.804 "nvme_io": false 00:22:15.804 }, 00:22:15.804 "memory_domains": [ 00:22:15.804 { 00:22:15.804 "dma_device_id": "system", 00:22:15.804 "dma_device_type": 1 00:22:15.804 }, 00:22:15.804 { 00:22:15.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:15.804 "dma_device_type": 2 00:22:15.804 } 00:22:15.804 ], 00:22:15.804 "driver_specific": { 00:22:15.804 "passthru": { 00:22:15.804 "name": "pt2", 00:22:15.804 "base_bdev_name": "malloc2" 00:22:15.804 } 00:22:15.804 } 00:22:15.804 }' 00:22:15.804 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.804 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:15.804 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:15.804 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.804 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:15.804 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:15.804 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:15.804 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.061 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:16.061 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.061 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.061 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:16.061 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:22:16.061 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:16.061 12:04:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:16.318 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:16.318 "name": "pt3", 00:22:16.318 "aliases": [ 00:22:16.318 "c2f41704-e072-5dd7-9152-ef76e4e7904e" 00:22:16.318 ], 00:22:16.318 "product_name": "passthru", 00:22:16.318 "block_size": 512, 00:22:16.318 "num_blocks": 65536, 00:22:16.318 "uuid": "c2f41704-e072-5dd7-9152-ef76e4e7904e", 00:22:16.318 "assigned_rate_limits": { 00:22:16.318 "rw_ios_per_sec": 0, 00:22:16.318 "rw_mbytes_per_sec": 0, 00:22:16.318 "r_mbytes_per_sec": 0, 00:22:16.318 "w_mbytes_per_sec": 0 00:22:16.318 }, 00:22:16.318 "claimed": true, 00:22:16.318 "claim_type": "exclusive_write", 00:22:16.318 "zoned": false, 00:22:16.318 "supported_io_types": { 00:22:16.318 "read": true, 00:22:16.318 "write": true, 00:22:16.318 "unmap": true, 00:22:16.318 "write_zeroes": true, 00:22:16.318 "flush": true, 00:22:16.318 "reset": true, 00:22:16.318 "compare": false, 00:22:16.318 "compare_and_write": false, 00:22:16.318 "abort": true, 00:22:16.318 "nvme_admin": false, 00:22:16.318 "nvme_io": false 00:22:16.318 }, 00:22:16.318 "memory_domains": [ 00:22:16.318 { 00:22:16.318 "dma_device_id": "system", 00:22:16.318 "dma_device_type": 1 00:22:16.318 }, 00:22:16.318 { 00:22:16.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:16.318 "dma_device_type": 2 00:22:16.318 } 00:22:16.318 ], 00:22:16.318 "driver_specific": { 00:22:16.318 "passthru": { 00:22:16.318 "name": "pt3", 00:22:16.318 "base_bdev_name": "malloc3" 00:22:16.318 } 00:22:16.318 } 00:22:16.318 }' 00:22:16.318 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.318 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:16.318 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:16.318 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.574 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:16.574 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:16.574 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.574 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:16.574 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:16.574 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.574 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:16.831 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:16.831 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:16.831 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:16.831 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:17.088 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:17.088 "name": "pt4", 00:22:17.088 "aliases": [ 
00:22:17.088 "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9" 00:22:17.088 ], 00:22:17.088 "product_name": "passthru", 00:22:17.088 "block_size": 512, 00:22:17.088 "num_blocks": 65536, 00:22:17.088 "uuid": "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9", 00:22:17.088 "assigned_rate_limits": { 00:22:17.088 "rw_ios_per_sec": 0, 00:22:17.088 "rw_mbytes_per_sec": 0, 00:22:17.088 "r_mbytes_per_sec": 0, 00:22:17.088 "w_mbytes_per_sec": 0 00:22:17.088 }, 00:22:17.088 "claimed": true, 00:22:17.088 "claim_type": "exclusive_write", 00:22:17.088 "zoned": false, 00:22:17.088 "supported_io_types": { 00:22:17.088 "read": true, 00:22:17.088 "write": true, 00:22:17.088 "unmap": true, 00:22:17.088 "write_zeroes": true, 00:22:17.088 "flush": true, 00:22:17.088 "reset": true, 00:22:17.088 "compare": false, 00:22:17.088 "compare_and_write": false, 00:22:17.088 "abort": true, 00:22:17.088 "nvme_admin": false, 00:22:17.088 "nvme_io": false 00:22:17.088 }, 00:22:17.088 "memory_domains": [ 00:22:17.088 { 00:22:17.088 "dma_device_id": "system", 00:22:17.088 "dma_device_type": 1 00:22:17.088 }, 00:22:17.088 { 00:22:17.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:17.088 "dma_device_type": 2 00:22:17.088 } 00:22:17.088 ], 00:22:17.088 "driver_specific": { 00:22:17.088 "passthru": { 00:22:17.088 "name": "pt4", 00:22:17.088 "base_bdev_name": "malloc4" 00:22:17.088 } 00:22:17.088 } 00:22:17.088 }' 00:22:17.088 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.088 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:17.088 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:17.088 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.088 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:17.088 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:17.088 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.346 12:04:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:17.346 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:17.346 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.346 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:17.346 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:17.346 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:17.346 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:22:17.603 [2024-07-21 12:04:16.352146] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:17.603 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d8eec0ca-b884-4dc1-a666-415a04bfa10b 00:22:17.603 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d8eec0ca-b884-4dc1-a666-415a04bfa10b ']' 00:22:17.603 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:17.861 [2024-07-21 12:04:16.627953] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:17.861 
[2024-07-21 12:04:16.628013] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:17.861 [2024-07-21 12:04:16.628159] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:17.861 [2024-07-21 12:04:16.628278] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:17.861 [2024-07-21 12:04:16.628294] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:22:17.861 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:22:17.861 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:18.119 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:22:18.119 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:22:18.119 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.119 12:04:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:18.377 12:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.377 12:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:18.636 12:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.636 12:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:18.894 12:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:22:18.894 12:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:22:19.152 12:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:19.152 12:04:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # 
type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:19.411 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:22:19.670 [2024-07-21 12:04:18.340456] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:19.670 [2024-07-21 12:04:18.342815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:19.670 [2024-07-21 12:04:18.342897] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:19.670 [2024-07-21 12:04:18.342950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:22:19.670 [2024-07-21 12:04:18.343046] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:22:19.670 [2024-07-21 12:04:18.343176] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:22:19.670 [2024-07-21 12:04:18.343229] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:22:19.670 [2024-07-21 12:04:18.343294] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:22:19.670 [2024-07-21 12:04:18.343324] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:19.670 [2024-07-21 12:04:18.343336] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:22:19.670 request: 00:22:19.670 { 00:22:19.670 "name": "raid_bdev1", 00:22:19.670 "raid_level": "raid0", 00:22:19.670 "base_bdevs": [ 00:22:19.670 "malloc1", 00:22:19.670 "malloc2", 00:22:19.670 "malloc3", 00:22:19.670 "malloc4" 00:22:19.670 ], 00:22:19.670 "superblock": false, 00:22:19.670 "strip_size_kb": 64, 00:22:19.670 "method": "bdev_raid_create", 00:22:19.670 "req_id": 1 00:22:19.670 } 00:22:19.670 Got JSON-RPC error response 00:22:19.670 response: 00:22:19.670 { 00:22:19.670 "code": -17, 00:22:19.670 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:19.670 } 00:22:19.670 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:22:19.670 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:19.670 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:19.670 12:04:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:19.670 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:22:19.670 12:04:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.928 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:22:19.928 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:22:19.928 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:20.188 [2024-07-21 12:04:18.872464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:20.188 [2024-07-21 12:04:18.872614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.188 [2024-07-21 12:04:18.872659] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:22:20.188 [2024-07-21 12:04:18.872692] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.188 [2024-07-21 12:04:18.875387] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.188 [2024-07-21 12:04:18.875483] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:20.188 [2024-07-21 12:04:18.875613] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:22:20.188 [2024-07-21 12:04:18.875675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:20.188 pt1 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:20.188 12:04:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:20.447 12:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:20.447 "name": "raid_bdev1", 00:22:20.447 "uuid": "d8eec0ca-b884-4dc1-a666-415a04bfa10b", 00:22:20.447 "strip_size_kb": 64, 00:22:20.447 "state": "configuring", 00:22:20.447 "raid_level": "raid0", 00:22:20.447 "superblock": true, 00:22:20.447 "num_base_bdevs": 4, 00:22:20.447 "num_base_bdevs_discovered": 1, 00:22:20.447 "num_base_bdevs_operational": 4, 00:22:20.447 "base_bdevs_list": [ 00:22:20.447 { 00:22:20.447 "name": "pt1", 00:22:20.447 "uuid": "7c052d85-f015-5bb7-b747-6a2b1c88143f", 
00:22:20.447 "is_configured": true, 00:22:20.447 "data_offset": 2048, 00:22:20.447 "data_size": 63488 00:22:20.447 }, 00:22:20.447 { 00:22:20.447 "name": null, 00:22:20.447 "uuid": "b2ea1213-7d5f-5466-8658-04347f09febc", 00:22:20.447 "is_configured": false, 00:22:20.447 "data_offset": 2048, 00:22:20.447 "data_size": 63488 00:22:20.447 }, 00:22:20.447 { 00:22:20.447 "name": null, 00:22:20.447 "uuid": "c2f41704-e072-5dd7-9152-ef76e4e7904e", 00:22:20.447 "is_configured": false, 00:22:20.447 "data_offset": 2048, 00:22:20.447 "data_size": 63488 00:22:20.447 }, 00:22:20.447 { 00:22:20.447 "name": null, 00:22:20.447 "uuid": "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9", 00:22:20.447 "is_configured": false, 00:22:20.447 "data_offset": 2048, 00:22:20.447 "data_size": 63488 00:22:20.447 } 00:22:20.447 ] 00:22:20.447 }' 00:22:20.447 12:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:20.447 12:04:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:21.015 12:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:22:21.015 12:04:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:21.274 [2024-07-21 12:04:20.036729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:21.274 [2024-07-21 12:04:20.036872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.274 [2024-07-21 12:04:20.036926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:22:21.274 [2024-07-21 12:04:20.036954] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.274 [2024-07-21 12:04:20.037454] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.274 [2024-07-21 12:04:20.037515] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:21.274 [2024-07-21 12:04:20.037617] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:21.274 [2024-07-21 12:04:20.037647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:21.274 pt2 00:22:21.274 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:21.532 [2024-07-21 12:04:20.312878] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:21.532 12:04:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.532 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.791 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:21.791 "name": "raid_bdev1", 00:22:21.791 "uuid": "d8eec0ca-b884-4dc1-a666-415a04bfa10b", 00:22:21.791 "strip_size_kb": 64, 00:22:21.791 "state": "configuring", 00:22:21.791 "raid_level": "raid0", 00:22:21.791 "superblock": true, 00:22:21.791 "num_base_bdevs": 4, 00:22:21.791 "num_base_bdevs_discovered": 1, 00:22:21.791 "num_base_bdevs_operational": 4, 00:22:21.791 "base_bdevs_list": [ 00:22:21.791 { 00:22:21.791 "name": "pt1", 00:22:21.791 "uuid": "7c052d85-f015-5bb7-b747-6a2b1c88143f", 00:22:21.791 "is_configured": true, 00:22:21.791 "data_offset": 2048, 00:22:21.791 "data_size": 63488 00:22:21.791 }, 00:22:21.791 { 00:22:21.791 "name": null, 00:22:21.791 "uuid": "b2ea1213-7d5f-5466-8658-04347f09febc", 00:22:21.791 "is_configured": false, 00:22:21.791 "data_offset": 2048, 00:22:21.791 "data_size": 63488 00:22:21.791 }, 00:22:21.791 { 00:22:21.791 "name": null, 00:22:21.791 "uuid": "c2f41704-e072-5dd7-9152-ef76e4e7904e", 00:22:21.791 "is_configured": false, 00:22:21.791 "data_offset": 2048, 00:22:21.791 "data_size": 63488 00:22:21.791 }, 00:22:21.791 { 00:22:21.791 "name": null, 00:22:21.791 "uuid": "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9", 00:22:21.791 "is_configured": false, 00:22:21.791 "data_offset": 2048, 00:22:21.791 "data_size": 63488 00:22:21.791 } 00:22:21.791 ] 00:22:21.791 }' 00:22:21.791 12:04:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:21.791 12:04:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:22.357 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:22:22.357 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:22.357 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:22.615 [2024-07-21 12:04:21.397081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:22.615 [2024-07-21 12:04:21.397231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.615 [2024-07-21 12:04:21.397280] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:22:22.615 [2024-07-21 12:04:21.397306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.615 [2024-07-21 12:04:21.397796] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.615 [2024-07-21 12:04:21.397850] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:22.615 [2024-07-21 12:04:21.397944] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:22:22.615 [2024-07-21 12:04:21.397983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:22.615 pt2 00:22:22.615 12:04:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:22.615 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:22.615 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:22.873 [2024-07-21 12:04:21.625102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:22.873 [2024-07-21 12:04:21.625221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.873 [2024-07-21 12:04:21.625260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:22:22.873 [2024-07-21 12:04:21.625291] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.873 [2024-07-21 12:04:21.625793] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.873 [2024-07-21 12:04:21.625857] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:22.873 [2024-07-21 12:04:21.625953] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:22:22.873 [2024-07-21 12:04:21.625991] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:22.873 pt3 00:22:22.873 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:22.873 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:22.873 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:22:23.131 [2024-07-21 12:04:21.893175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:22:23.131 [2024-07-21 12:04:21.893320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:23.131 [2024-07-21 12:04:21.893368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:22:23.131 [2024-07-21 12:04:21.893399] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:23.131 [2024-07-21 12:04:21.893892] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:23.131 [2024-07-21 12:04:21.893965] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:22:23.131 [2024-07-21 12:04:21.894061] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:22:23.131 [2024-07-21 12:04:21.894099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:22:23.131 [2024-07-21 12:04:21.894266] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:22:23.131 [2024-07-21 12:04:21.894282] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:23.131 [2024-07-21 12:04:21.894364] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:22:23.132 [2024-07-21 12:04:21.894757] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:22:23.132 [2024-07-21 12:04:21.894774] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:22:23.132 [2024-07-21 12:04:21.894887] bdev_raid.c: 331:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:22:23.132 pt4 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.132 12:04:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.390 12:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:23.390 "name": "raid_bdev1", 00:22:23.390 "uuid": "d8eec0ca-b884-4dc1-a666-415a04bfa10b", 00:22:23.390 "strip_size_kb": 64, 00:22:23.390 "state": "online", 00:22:23.390 "raid_level": "raid0", 00:22:23.390 "superblock": true, 00:22:23.390 "num_base_bdevs": 4, 00:22:23.390 "num_base_bdevs_discovered": 4, 00:22:23.390 "num_base_bdevs_operational": 4, 00:22:23.390 "base_bdevs_list": [ 00:22:23.390 { 00:22:23.390 "name": "pt1", 00:22:23.390 "uuid": "7c052d85-f015-5bb7-b747-6a2b1c88143f", 00:22:23.390 "is_configured": true, 00:22:23.390 "data_offset": 2048, 00:22:23.390 "data_size": 63488 00:22:23.390 }, 00:22:23.390 { 00:22:23.390 "name": "pt2", 00:22:23.390 "uuid": "b2ea1213-7d5f-5466-8658-04347f09febc", 00:22:23.390 "is_configured": true, 00:22:23.390 "data_offset": 2048, 00:22:23.390 "data_size": 63488 00:22:23.390 }, 00:22:23.390 { 00:22:23.390 "name": "pt3", 00:22:23.390 "uuid": "c2f41704-e072-5dd7-9152-ef76e4e7904e", 00:22:23.390 "is_configured": true, 00:22:23.390 "data_offset": 2048, 00:22:23.390 "data_size": 63488 00:22:23.390 }, 00:22:23.390 { 00:22:23.390 "name": "pt4", 00:22:23.390 "uuid": "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9", 00:22:23.390 "is_configured": true, 00:22:23.390 "data_offset": 2048, 00:22:23.390 "data_size": 63488 00:22:23.390 } 00:22:23.390 ] 00:22:23.390 }' 00:22:23.390 12:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:23.390 12:04:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:23.956 12:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:22:23.956 12:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:22:23.956 12:04:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:23.956 12:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:23.956 12:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:23.956 12:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:23.956 12:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:23.956 12:04:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:24.214 [2024-07-21 12:04:23.021697] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:24.214 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:24.214 "name": "raid_bdev1", 00:22:24.214 "aliases": [ 00:22:24.214 "d8eec0ca-b884-4dc1-a666-415a04bfa10b" 00:22:24.214 ], 00:22:24.214 "product_name": "Raid Volume", 00:22:24.214 "block_size": 512, 00:22:24.214 "num_blocks": 253952, 00:22:24.214 "uuid": "d8eec0ca-b884-4dc1-a666-415a04bfa10b", 00:22:24.214 "assigned_rate_limits": { 00:22:24.214 "rw_ios_per_sec": 0, 00:22:24.214 "rw_mbytes_per_sec": 0, 00:22:24.214 "r_mbytes_per_sec": 0, 00:22:24.214 "w_mbytes_per_sec": 0 00:22:24.214 }, 00:22:24.214 "claimed": false, 00:22:24.214 "zoned": false, 00:22:24.214 "supported_io_types": { 00:22:24.214 "read": true, 00:22:24.214 "write": true, 00:22:24.214 "unmap": true, 00:22:24.214 "write_zeroes": true, 00:22:24.214 "flush": true, 00:22:24.214 "reset": true, 00:22:24.214 "compare": false, 00:22:24.214 "compare_and_write": false, 00:22:24.214 "abort": false, 00:22:24.214 "nvme_admin": false, 00:22:24.214 "nvme_io": false 00:22:24.214 }, 00:22:24.214 "memory_domains": [ 00:22:24.214 { 00:22:24.214 "dma_device_id": "system", 00:22:24.214 "dma_device_type": 1 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.214 "dma_device_type": 2 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "dma_device_id": "system", 00:22:24.214 "dma_device_type": 1 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.214 "dma_device_type": 2 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "dma_device_id": "system", 00:22:24.214 "dma_device_type": 1 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.214 "dma_device_type": 2 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "dma_device_id": "system", 00:22:24.214 "dma_device_type": 1 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.214 "dma_device_type": 2 00:22:24.214 } 00:22:24.214 ], 00:22:24.214 "driver_specific": { 00:22:24.214 "raid": { 00:22:24.214 "uuid": "d8eec0ca-b884-4dc1-a666-415a04bfa10b", 00:22:24.214 "strip_size_kb": 64, 00:22:24.214 "state": "online", 00:22:24.214 "raid_level": "raid0", 00:22:24.214 "superblock": true, 00:22:24.214 "num_base_bdevs": 4, 00:22:24.214 "num_base_bdevs_discovered": 4, 00:22:24.214 "num_base_bdevs_operational": 4, 00:22:24.214 "base_bdevs_list": [ 00:22:24.214 { 00:22:24.214 "name": "pt1", 00:22:24.214 "uuid": "7c052d85-f015-5bb7-b747-6a2b1c88143f", 00:22:24.214 "is_configured": true, 00:22:24.214 "data_offset": 2048, 00:22:24.214 "data_size": 63488 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "name": "pt2", 00:22:24.214 "uuid": "b2ea1213-7d5f-5466-8658-04347f09febc", 00:22:24.214 "is_configured": true, 00:22:24.214 "data_offset": 2048, 
00:22:24.214 "data_size": 63488 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "name": "pt3", 00:22:24.214 "uuid": "c2f41704-e072-5dd7-9152-ef76e4e7904e", 00:22:24.214 "is_configured": true, 00:22:24.214 "data_offset": 2048, 00:22:24.214 "data_size": 63488 00:22:24.214 }, 00:22:24.214 { 00:22:24.214 "name": "pt4", 00:22:24.214 "uuid": "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9", 00:22:24.214 "is_configured": true, 00:22:24.214 "data_offset": 2048, 00:22:24.214 "data_size": 63488 00:22:24.214 } 00:22:24.214 ] 00:22:24.214 } 00:22:24.214 } 00:22:24.214 }' 00:22:24.214 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:24.473 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:22:24.473 pt2 00:22:24.473 pt3 00:22:24.473 pt4' 00:22:24.473 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:24.473 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:22:24.473 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:24.732 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:24.732 "name": "pt1", 00:22:24.732 "aliases": [ 00:22:24.732 "7c052d85-f015-5bb7-b747-6a2b1c88143f" 00:22:24.732 ], 00:22:24.732 "product_name": "passthru", 00:22:24.732 "block_size": 512, 00:22:24.732 "num_blocks": 65536, 00:22:24.732 "uuid": "7c052d85-f015-5bb7-b747-6a2b1c88143f", 00:22:24.732 "assigned_rate_limits": { 00:22:24.732 "rw_ios_per_sec": 0, 00:22:24.732 "rw_mbytes_per_sec": 0, 00:22:24.732 "r_mbytes_per_sec": 0, 00:22:24.732 "w_mbytes_per_sec": 0 00:22:24.732 }, 00:22:24.732 "claimed": true, 00:22:24.732 "claim_type": "exclusive_write", 00:22:24.732 "zoned": false, 00:22:24.732 "supported_io_types": { 00:22:24.732 "read": true, 00:22:24.732 "write": true, 00:22:24.732 "unmap": true, 00:22:24.732 "write_zeroes": true, 00:22:24.732 "flush": true, 00:22:24.732 "reset": true, 00:22:24.732 "compare": false, 00:22:24.732 "compare_and_write": false, 00:22:24.732 "abort": true, 00:22:24.732 "nvme_admin": false, 00:22:24.732 "nvme_io": false 00:22:24.732 }, 00:22:24.732 "memory_domains": [ 00:22:24.732 { 00:22:24.732 "dma_device_id": "system", 00:22:24.732 "dma_device_type": 1 00:22:24.732 }, 00:22:24.732 { 00:22:24.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:24.732 "dma_device_type": 2 00:22:24.732 } 00:22:24.732 ], 00:22:24.732 "driver_specific": { 00:22:24.732 "passthru": { 00:22:24.732 "name": "pt1", 00:22:24.732 "base_bdev_name": "malloc1" 00:22:24.732 } 00:22:24.732 } 00:22:24.732 }' 00:22:24.732 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:24.732 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:24.732 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:24.732 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.732 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:24.732 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:24.732 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:24.989 12:04:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:24.989 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:24.989 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:24.989 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:24.989 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:24.989 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:24.989 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:22:24.989 12:04:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:25.246 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:25.246 "name": "pt2", 00:22:25.246 "aliases": [ 00:22:25.246 "b2ea1213-7d5f-5466-8658-04347f09febc" 00:22:25.246 ], 00:22:25.246 "product_name": "passthru", 00:22:25.246 "block_size": 512, 00:22:25.246 "num_blocks": 65536, 00:22:25.246 "uuid": "b2ea1213-7d5f-5466-8658-04347f09febc", 00:22:25.246 "assigned_rate_limits": { 00:22:25.246 "rw_ios_per_sec": 0, 00:22:25.246 "rw_mbytes_per_sec": 0, 00:22:25.246 "r_mbytes_per_sec": 0, 00:22:25.246 "w_mbytes_per_sec": 0 00:22:25.246 }, 00:22:25.246 "claimed": true, 00:22:25.246 "claim_type": "exclusive_write", 00:22:25.246 "zoned": false, 00:22:25.246 "supported_io_types": { 00:22:25.246 "read": true, 00:22:25.246 "write": true, 00:22:25.246 "unmap": true, 00:22:25.246 "write_zeroes": true, 00:22:25.246 "flush": true, 00:22:25.246 "reset": true, 00:22:25.246 "compare": false, 00:22:25.246 "compare_and_write": false, 00:22:25.246 "abort": true, 00:22:25.246 "nvme_admin": false, 00:22:25.246 "nvme_io": false 00:22:25.246 }, 00:22:25.246 "memory_domains": [ 00:22:25.246 { 00:22:25.246 "dma_device_id": "system", 00:22:25.246 "dma_device_type": 1 00:22:25.246 }, 00:22:25.246 { 00:22:25.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:25.246 "dma_device_type": 2 00:22:25.246 } 00:22:25.246 ], 00:22:25.246 "driver_specific": { 00:22:25.246 "passthru": { 00:22:25.246 "name": "pt2", 00:22:25.246 "base_bdev_name": "malloc2" 00:22:25.246 } 00:22:25.246 } 00:22:25.246 }' 00:22:25.246 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.246 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:25.503 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:25.503 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.503 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:25.503 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:25.503 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.503 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:25.503 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:25.503 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.759 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:25.759 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:25.759 
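The @203-@208 loop that keeps repeating above can be read as the following per-base-bdev property check, sketched from the logged commands rather than the literal test/bdev/bdev_raid.sh source (raid_bdev_info here is the JSON dumped at @200):

    base_bdev_names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_bdev_info")
    for name in $base_bdev_names; do
        base_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            bdev_get_bdevs -b "$name" | jq '.[]')
        [[ $(jq .block_size    <<< "$base_bdev_info") == 512  ]]   # each passthru base bdev reports 512-byte blocks
        [[ $(jq .md_size       <<< "$base_bdev_info") == null ]]   # no separate metadata region
        [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
        [[ $(jq .dif_type      <<< "$base_bdev_info") == null ]]   # DIF is not configured on these bdevs
    done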
12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:25.759 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:22:25.759 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:26.016 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:26.016 "name": "pt3", 00:22:26.016 "aliases": [ 00:22:26.016 "c2f41704-e072-5dd7-9152-ef76e4e7904e" 00:22:26.016 ], 00:22:26.016 "product_name": "passthru", 00:22:26.016 "block_size": 512, 00:22:26.016 "num_blocks": 65536, 00:22:26.016 "uuid": "c2f41704-e072-5dd7-9152-ef76e4e7904e", 00:22:26.016 "assigned_rate_limits": { 00:22:26.016 "rw_ios_per_sec": 0, 00:22:26.016 "rw_mbytes_per_sec": 0, 00:22:26.016 "r_mbytes_per_sec": 0, 00:22:26.016 "w_mbytes_per_sec": 0 00:22:26.016 }, 00:22:26.016 "claimed": true, 00:22:26.016 "claim_type": "exclusive_write", 00:22:26.016 "zoned": false, 00:22:26.016 "supported_io_types": { 00:22:26.016 "read": true, 00:22:26.016 "write": true, 00:22:26.016 "unmap": true, 00:22:26.016 "write_zeroes": true, 00:22:26.016 "flush": true, 00:22:26.016 "reset": true, 00:22:26.016 "compare": false, 00:22:26.016 "compare_and_write": false, 00:22:26.016 "abort": true, 00:22:26.016 "nvme_admin": false, 00:22:26.016 "nvme_io": false 00:22:26.016 }, 00:22:26.016 "memory_domains": [ 00:22:26.016 { 00:22:26.016 "dma_device_id": "system", 00:22:26.016 "dma_device_type": 1 00:22:26.016 }, 00:22:26.016 { 00:22:26.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.016 "dma_device_type": 2 00:22:26.016 } 00:22:26.016 ], 00:22:26.016 "driver_specific": { 00:22:26.016 "passthru": { 00:22:26.016 "name": "pt3", 00:22:26.016 "base_bdev_name": "malloc3" 00:22:26.016 } 00:22:26.016 } 00:22:26.016 }' 00:22:26.016 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.016 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.016 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:26.016 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.016 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.016 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:26.016 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.275 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.276 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:26.276 12:04:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.276 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.276 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:26.276 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:26.276 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:22:26.276 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:26.547 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
base_bdev_info='{ 00:22:26.547 "name": "pt4", 00:22:26.547 "aliases": [ 00:22:26.547 "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9" 00:22:26.547 ], 00:22:26.547 "product_name": "passthru", 00:22:26.547 "block_size": 512, 00:22:26.547 "num_blocks": 65536, 00:22:26.547 "uuid": "0ad32a6a-fcdd-5188-8a7f-ddf8e1cf14b9", 00:22:26.547 "assigned_rate_limits": { 00:22:26.547 "rw_ios_per_sec": 0, 00:22:26.547 "rw_mbytes_per_sec": 0, 00:22:26.547 "r_mbytes_per_sec": 0, 00:22:26.547 "w_mbytes_per_sec": 0 00:22:26.547 }, 00:22:26.547 "claimed": true, 00:22:26.547 "claim_type": "exclusive_write", 00:22:26.547 "zoned": false, 00:22:26.547 "supported_io_types": { 00:22:26.547 "read": true, 00:22:26.547 "write": true, 00:22:26.547 "unmap": true, 00:22:26.547 "write_zeroes": true, 00:22:26.547 "flush": true, 00:22:26.547 "reset": true, 00:22:26.547 "compare": false, 00:22:26.547 "compare_and_write": false, 00:22:26.547 "abort": true, 00:22:26.547 "nvme_admin": false, 00:22:26.547 "nvme_io": false 00:22:26.547 }, 00:22:26.547 "memory_domains": [ 00:22:26.547 { 00:22:26.547 "dma_device_id": "system", 00:22:26.547 "dma_device_type": 1 00:22:26.547 }, 00:22:26.547 { 00:22:26.547 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:26.547 "dma_device_type": 2 00:22:26.547 } 00:22:26.547 ], 00:22:26.547 "driver_specific": { 00:22:26.547 "passthru": { 00:22:26.547 "name": "pt4", 00:22:26.547 "base_bdev_name": "malloc4" 00:22:26.547 } 00:22:26.547 } 00:22:26.547 }' 00:22:26.547 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.547 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:26.818 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:26.818 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.818 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:26.818 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:26.818 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.818 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:26.818 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:26.818 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:26.818 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:27.075 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:27.075 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:27.075 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:22:27.332 [2024-07-21 12:04:25.975489] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:27.332 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d8eec0ca-b884-4dc1-a666-415a04bfa10b '!=' d8eec0ca-b884-4dc1-a666-415a04bfa10b ']' 00:22:27.332 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:22:27.332 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:27.332 12:04:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:27.332 12:04:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 146845 00:22:27.332 12:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 146845 ']' 00:22:27.332 12:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 146845 00:22:27.332 12:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:22:27.332 12:04:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:27.332 12:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 146845 00:22:27.332 12:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:27.332 12:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:27.332 12:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 146845' 00:22:27.332 killing process with pid 146845 00:22:27.332 12:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 146845 00:22:27.332 [2024-07-21 12:04:26.023290] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:27.332 12:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 146845 00:22:27.332 [2024-07-21 12:04:26.023407] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:27.332 [2024-07-21 12:04:26.023494] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:27.332 [2024-07-21 12:04:26.023507] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:22:27.332 [2024-07-21 12:04:26.073405] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:27.590 12:04:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:22:27.590 ************************************ 00:22:27.590 END TEST raid_superblock_test 00:22:27.590 ************************************ 00:22:27.590 00:22:27.590 real 0m17.269s 00:22:27.590 user 0m32.263s 00:22:27.590 sys 0m2.029s 00:22:27.590 12:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:27.590 12:04:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.590 12:04:26 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:22:27.590 12:04:26 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:27.590 12:04:26 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:27.590 12:04:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:27.590 ************************************ 00:22:27.590 START TEST raid_read_error_test 00:22:27.590 ************************************ 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 4 read 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.SgBp8R3oyd 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=147392 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 147392 /var/tmp/spdk-raid.sock 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 147392 ']' 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:27.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
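raid_read_error_test starts its own bdevperf instance before assembling the array; a sketch of that bring-up, with the flags copied from the @807-@809 invocation above (log redirection and the waitforlisten call are paraphrased, not verbatim):

    bdevperf_log=$(mktemp -p /raidtest)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock    # autotest_common.sh helper; returns once the RPC socket answers

Here -z makes bdevperf wait for an explicit RPC before running the workload, so the steps that follow can create the malloc/passthru bdevs and raid_bdev1 on an otherwise idle target.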
00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:27.590 12:04:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:27.848 [2024-07-21 12:04:26.475698] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:27.848 [2024-07-21 12:04:26.475979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147392 ] 00:22:27.848 [2024-07-21 12:04:26.644075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.106 [2024-07-21 12:04:26.737481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.106 [2024-07-21 12:04:26.793393] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:28.672 12:04:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:28.672 12:04:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:22:28.672 12:04:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:28.672 12:04:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:28.930 BaseBdev1_malloc 00:22:28.930 12:04:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:29.189 true 00:22:29.189 12:04:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:29.447 [2024-07-21 12:04:28.164573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:29.447 [2024-07-21 12:04:28.164935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:29.447 [2024-07-21 12:04:28.165155] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:29.447 [2024-07-21 12:04:28.165336] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:29.447 [2024-07-21 12:04:28.168406] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:29.447 [2024-07-21 12:04:28.168639] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:29.447 BaseBdev1 00:22:29.447 12:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:29.447 12:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:29.705 BaseBdev2_malloc 00:22:29.705 12:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:29.963 true 00:22:29.963 12:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:30.222 [2024-07-21 12:04:28.928756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:30.222 [2024-07-21 12:04:28.929170] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.222 [2024-07-21 12:04:28.929288] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:22:30.222 [2024-07-21 12:04:28.929526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.222 [2024-07-21 12:04:28.932364] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.222 [2024-07-21 12:04:28.932553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:30.222 BaseBdev2 00:22:30.222 12:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:30.222 12:04:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:30.481 BaseBdev3_malloc 00:22:30.481 12:04:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:30.738 true 00:22:30.738 12:04:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:30.997 [2024-07-21 12:04:29.734980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:30.997 [2024-07-21 12:04:29.735387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:30.997 [2024-07-21 12:04:29.735564] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:30.997 [2024-07-21 12:04:29.735722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:30.997 [2024-07-21 12:04:29.738457] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:30.997 [2024-07-21 12:04:29.738683] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:30.997 BaseBdev3 00:22:30.997 12:04:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:30.997 12:04:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:31.255 BaseBdev4_malloc 00:22:31.255 12:04:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:31.513 true 00:22:31.513 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:31.771 [2024-07-21 12:04:30.483175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:31.771 [2024-07-21 12:04:30.483471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:31.771 [2024-07-21 12:04:30.483638] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:31.771 [2024-07-21 
12:04:30.483804] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:31.771 [2024-07-21 12:04:30.486406] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:31.771 [2024-07-21 12:04:30.486631] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:31.771 BaseBdev4 00:22:31.771 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:32.029 [2024-07-21 12:04:30.707459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:32.029 [2024-07-21 12:04:30.710010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:32.029 [2024-07-21 12:04:30.710275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:32.029 [2024-07-21 12:04:30.710484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:32.029 [2024-07-21 12:04:30.710935] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:22:32.029 [2024-07-21 12:04:30.711071] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:32.029 [2024-07-21 12:04:30.711281] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:32.029 [2024-07-21 12:04:30.711789] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:22:32.029 [2024-07-21 12:04:30.711922] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:22:32.029 [2024-07-21 12:04:30.712259] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:32.029 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.030 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.288 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:32.288 "name": "raid_bdev1", 00:22:32.288 "uuid": "91dcc05d-da4b-4230-bc29-43fbaadf482f", 00:22:32.288 "strip_size_kb": 64, 00:22:32.288 "state": "online", 
00:22:32.288 "raid_level": "raid0", 00:22:32.288 "superblock": true, 00:22:32.288 "num_base_bdevs": 4, 00:22:32.288 "num_base_bdevs_discovered": 4, 00:22:32.288 "num_base_bdevs_operational": 4, 00:22:32.288 "base_bdevs_list": [ 00:22:32.288 { 00:22:32.288 "name": "BaseBdev1", 00:22:32.288 "uuid": "4045bbfc-b4e5-5241-ade3-627553640080", 00:22:32.288 "is_configured": true, 00:22:32.288 "data_offset": 2048, 00:22:32.288 "data_size": 63488 00:22:32.288 }, 00:22:32.288 { 00:22:32.288 "name": "BaseBdev2", 00:22:32.288 "uuid": "47db4e0d-575e-5f81-99da-a992e25d48cf", 00:22:32.288 "is_configured": true, 00:22:32.288 "data_offset": 2048, 00:22:32.288 "data_size": 63488 00:22:32.288 }, 00:22:32.288 { 00:22:32.288 "name": "BaseBdev3", 00:22:32.288 "uuid": "3f10741c-84ee-5600-ac07-fdd2cebbf2d1", 00:22:32.288 "is_configured": true, 00:22:32.288 "data_offset": 2048, 00:22:32.288 "data_size": 63488 00:22:32.288 }, 00:22:32.288 { 00:22:32.288 "name": "BaseBdev4", 00:22:32.288 "uuid": "376e6bd7-7ead-52ca-a322-08078d44a982", 00:22:32.288 "is_configured": true, 00:22:32.288 "data_offset": 2048, 00:22:32.288 "data_size": 63488 00:22:32.288 } 00:22:32.288 ] 00:22:32.288 }' 00:22:32.288 12:04:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:32.288 12:04:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:32.855 12:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:32.855 12:04:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:32.855 [2024-07-21 12:04:31.680932] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:33.791 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.050 12:04:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.312 12:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:34.312 "name": "raid_bdev1", 00:22:34.312 "uuid": "91dcc05d-da4b-4230-bc29-43fbaadf482f", 00:22:34.312 "strip_size_kb": 64, 00:22:34.312 "state": "online", 00:22:34.312 "raid_level": "raid0", 00:22:34.312 "superblock": true, 00:22:34.312 "num_base_bdevs": 4, 00:22:34.312 "num_base_bdevs_discovered": 4, 00:22:34.312 "num_base_bdevs_operational": 4, 00:22:34.312 "base_bdevs_list": [ 00:22:34.312 { 00:22:34.312 "name": "BaseBdev1", 00:22:34.312 "uuid": "4045bbfc-b4e5-5241-ade3-627553640080", 00:22:34.312 "is_configured": true, 00:22:34.312 "data_offset": 2048, 00:22:34.312 "data_size": 63488 00:22:34.312 }, 00:22:34.312 { 00:22:34.312 "name": "BaseBdev2", 00:22:34.312 "uuid": "47db4e0d-575e-5f81-99da-a992e25d48cf", 00:22:34.312 "is_configured": true, 00:22:34.312 "data_offset": 2048, 00:22:34.312 "data_size": 63488 00:22:34.312 }, 00:22:34.312 { 00:22:34.312 "name": "BaseBdev3", 00:22:34.312 "uuid": "3f10741c-84ee-5600-ac07-fdd2cebbf2d1", 00:22:34.312 "is_configured": true, 00:22:34.312 "data_offset": 2048, 00:22:34.312 "data_size": 63488 00:22:34.312 }, 00:22:34.312 { 00:22:34.312 "name": "BaseBdev4", 00:22:34.312 "uuid": "376e6bd7-7ead-52ca-a322-08078d44a982", 00:22:34.312 "is_configured": true, 00:22:34.312 "data_offset": 2048, 00:22:34.312 "data_size": 63488 00:22:34.312 } 00:22:34.312 ] 00:22:34.312 }' 00:22:34.312 12:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:34.312 12:04:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.245 12:04:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:35.245 [2024-07-21 12:04:33.984095] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:35.245 [2024-07-21 12:04:33.984442] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.245 [2024-07-21 12:04:33.987702] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.245 [2024-07-21 12:04:33.987997] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.245 [2024-07-21 12:04:33.988097] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.245 [2024-07-21 12:04:33.988223] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:22:35.245 0 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 147392 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 147392 ']' 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 147392 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 147392 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 
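For orientation: the verify_raid_bdev_state calls in this test fetch the descriptor of raid_bdev1 over RPC and compare it against the expected "online raid0 64 4" state, once right after assembly and once after the read error has been injected. The helper's comparison code is not reproduced in this trace; the snippet below is only a hypothetical spot-check built from the same rpc.py and jq pipeline that does appear above.

# Pull the raid_bdev1 descriptor, exactly as bdev_raid.sh@126 does above.
info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
       bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

# Hypothetical field checks mirroring: verify_raid_bdev_state raid_bdev1 online raid0 64 4
[ "$(jq -r '.state' <<< "$info")" = online ]
[ "$(jq -r '.raid_level' <<< "$info")" = raid0 ]
[ "$(jq -r '.strip_size_kb' <<< "$info")" -eq 64 ]
[ "$(jq -r '.num_base_bdevs_discovered' <<< "$info")" -eq 4 ]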
00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 147392' 00:22:35.245 killing process with pid 147392 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 147392 00:22:35.245 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 147392 00:22:35.245 [2024-07-21 12:04:34.032003] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:35.245 [2024-07-21 12:04:34.070425] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.SgBp8R3oyd 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:35.502 ************************************ 00:22:35.502 END TEST raid_read_error_test 00:22:35.502 ************************************ 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:22:35.502 00:22:35.502 real 0m7.938s 00:22:35.502 user 0m13.050s 00:22:35.502 sys 0m1.005s 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:35.502 12:04:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.759 12:04:34 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:22:35.759 12:04:34 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:35.759 12:04:34 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:35.759 12:04:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:35.759 ************************************ 00:22:35.759 START TEST raid_write_error_test 00:22:35.759 ************************************ 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid0 4 write 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:22:35.759 
12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:22:35.759 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.LGwr10yiQw 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=147602 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 147602 /var/tmp/spdk-raid.sock 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 147602 ']' 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:35.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:35.760 12:04:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:35.760 [2024-07-21 12:04:34.474893] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
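For orientation: once the application above finishes initializing, the trace that follows builds the same four-level stack per base bdev as in the read test — a malloc bdev, an error-injection bdev on top of it, and a passthru bdev that gives it its final name — and then assembles the four passthru bdevs into a raid0 array with an on-disk superblock. Below is a condensed sketch of those rpc.py calls, copied from the trace; the EE_ prefix on the error bdev's name is inferred from the passthru step that consumes it.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $rpc -s $sock bdev_malloc_create 32 512 -b ${b}_malloc       # 32 MB bdev, 512-byte blocks
    $rpc -s $sock bdev_error_create ${b}_malloc                  # exposes EE_${b}_malloc
    $rpc -s $sock bdev_passthru_create -b EE_${b}_malloc -p $b   # final name the raid consumes
done

$rpc -s $sock bdev_raid_create -z 64 -r raid0 \
     -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

# Later the test arms a write failure on the first base bdev and runs the workload:
$rpc -s $sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests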
00:22:35.760 [2024-07-21 12:04:34.475412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147602 ] 00:22:36.017 [2024-07-21 12:04:34.631105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.017 [2024-07-21 12:04:34.722100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.017 [2024-07-21 12:04:34.777500] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:36.949 12:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:36.949 12:04:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:22:36.949 12:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:36.949 12:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:36.949 BaseBdev1_malloc 00:22:36.949 12:04:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:22:37.205 true 00:22:37.205 12:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:22:37.463 [2024-07-21 12:04:36.272226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:22:37.463 [2024-07-21 12:04:36.272698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.463 [2024-07-21 12:04:36.272928] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:37.463 [2024-07-21 12:04:36.273109] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.463 [2024-07-21 12:04:36.276053] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.463 [2024-07-21 12:04:36.276277] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:37.463 BaseBdev1 00:22:37.463 12:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:37.463 12:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:37.721 BaseBdev2_malloc 00:22:37.721 12:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:22:37.979 true 00:22:37.979 12:04:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:22:38.236 [2024-07-21 12:04:36.987998] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:22:38.236 [2024-07-21 12:04:36.988457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.236 [2024-07-21 12:04:36.988697] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:22:38.236 [2024-07-21 12:04:36.988867] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.236 [2024-07-21 12:04:36.991656] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.236 [2024-07-21 12:04:36.991840] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:38.236 BaseBdev2 00:22:38.236 12:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:38.236 12:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:38.494 BaseBdev3_malloc 00:22:38.494 12:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:22:38.752 true 00:22:38.752 12:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:22:39.011 [2024-07-21 12:04:37.765183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:22:39.011 [2024-07-21 12:04:37.765471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.011 [2024-07-21 12:04:37.765566] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:39.011 [2024-07-21 12:04:37.765842] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.011 [2024-07-21 12:04:37.768651] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.011 [2024-07-21 12:04:37.768845] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:39.011 BaseBdev3 00:22:39.011 12:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:22:39.011 12:04:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:39.269 BaseBdev4_malloc 00:22:39.269 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:22:39.528 true 00:22:39.528 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:22:39.787 [2024-07-21 12:04:38.468276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:22:39.787 [2024-07-21 12:04:38.468658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:39.787 [2024-07-21 12:04:38.468827] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:22:39.787 [2024-07-21 12:04:38.468995] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.787 [2024-07-21 12:04:38.471763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:39.787 [2024-07-21 12:04:38.471959] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:39.787 BaseBdev4 00:22:39.787 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:22:40.045 [2024-07-21 12:04:38.728466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:40.045 [2024-07-21 12:04:38.731062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:40.045 [2024-07-21 12:04:38.731319] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:40.045 [2024-07-21 12:04:38.731541] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:40.045 [2024-07-21 12:04:38.731993] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:22:40.045 [2024-07-21 12:04:38.732181] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:22:40.045 [2024-07-21 12:04:38.732399] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:22:40.045 [2024-07-21 12:04:38.732932] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:22:40.045 [2024-07-21 12:04:38.733065] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:22:40.045 [2024-07-21 12:04:38.733400] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.045 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.303 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:40.303 "name": "raid_bdev1", 00:22:40.303 "uuid": "0cee3a99-d916-4f0c-b06c-9df30aaa7b35", 00:22:40.303 "strip_size_kb": 64, 00:22:40.303 "state": "online", 00:22:40.303 "raid_level": "raid0", 00:22:40.303 "superblock": true, 00:22:40.303 "num_base_bdevs": 4, 00:22:40.303 "num_base_bdevs_discovered": 4, 00:22:40.303 "num_base_bdevs_operational": 4, 00:22:40.303 "base_bdevs_list": [ 00:22:40.303 { 00:22:40.303 "name": "BaseBdev1", 00:22:40.303 "uuid": "b5a3feb2-8dbb-5d6c-bc85-e0d045353378", 00:22:40.303 "is_configured": true, 00:22:40.303 "data_offset": 2048, 00:22:40.303 "data_size": 63488 00:22:40.303 }, 00:22:40.303 { 
00:22:40.303 "name": "BaseBdev2", 00:22:40.303 "uuid": "0f4bd219-7c93-5fa8-94fb-10396437e2a8", 00:22:40.303 "is_configured": true, 00:22:40.303 "data_offset": 2048, 00:22:40.303 "data_size": 63488 00:22:40.303 }, 00:22:40.303 { 00:22:40.303 "name": "BaseBdev3", 00:22:40.303 "uuid": "837ec6da-7f9e-5e91-94a2-c150b4abc7a0", 00:22:40.303 "is_configured": true, 00:22:40.303 "data_offset": 2048, 00:22:40.303 "data_size": 63488 00:22:40.303 }, 00:22:40.303 { 00:22:40.303 "name": "BaseBdev4", 00:22:40.303 "uuid": "dfe2d46a-45e2-564a-a516-e92c5e301a0f", 00:22:40.303 "is_configured": true, 00:22:40.303 "data_offset": 2048, 00:22:40.303 "data_size": 63488 00:22:40.303 } 00:22:40.303 ] 00:22:40.303 }' 00:22:40.303 12:04:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:40.303 12:04:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:40.870 12:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:22:40.870 12:04:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:40.870 [2024-07-21 12:04:39.682053] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:41.836 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.097 12:04:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.356 12:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:42.356 "name": "raid_bdev1", 00:22:42.356 "uuid": "0cee3a99-d916-4f0c-b06c-9df30aaa7b35", 00:22:42.356 "strip_size_kb": 64, 00:22:42.356 "state": "online", 00:22:42.356 
"raid_level": "raid0", 00:22:42.356 "superblock": true, 00:22:42.356 "num_base_bdevs": 4, 00:22:42.356 "num_base_bdevs_discovered": 4, 00:22:42.356 "num_base_bdevs_operational": 4, 00:22:42.356 "base_bdevs_list": [ 00:22:42.356 { 00:22:42.356 "name": "BaseBdev1", 00:22:42.356 "uuid": "b5a3feb2-8dbb-5d6c-bc85-e0d045353378", 00:22:42.356 "is_configured": true, 00:22:42.356 "data_offset": 2048, 00:22:42.356 "data_size": 63488 00:22:42.356 }, 00:22:42.356 { 00:22:42.356 "name": "BaseBdev2", 00:22:42.356 "uuid": "0f4bd219-7c93-5fa8-94fb-10396437e2a8", 00:22:42.356 "is_configured": true, 00:22:42.356 "data_offset": 2048, 00:22:42.356 "data_size": 63488 00:22:42.356 }, 00:22:42.356 { 00:22:42.356 "name": "BaseBdev3", 00:22:42.356 "uuid": "837ec6da-7f9e-5e91-94a2-c150b4abc7a0", 00:22:42.356 "is_configured": true, 00:22:42.356 "data_offset": 2048, 00:22:42.356 "data_size": 63488 00:22:42.356 }, 00:22:42.356 { 00:22:42.356 "name": "BaseBdev4", 00:22:42.356 "uuid": "dfe2d46a-45e2-564a-a516-e92c5e301a0f", 00:22:42.356 "is_configured": true, 00:22:42.356 "data_offset": 2048, 00:22:42.356 "data_size": 63488 00:22:42.356 } 00:22:42.356 ] 00:22:42.356 }' 00:22:42.356 12:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:42.356 12:04:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:42.923 12:04:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:43.182 [2024-07-21 12:04:42.011088] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:43.182 [2024-07-21 12:04:42.011434] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:43.182 [2024-07-21 12:04:42.014362] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:43.182 [2024-07-21 12:04:42.014577] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:43.182 [2024-07-21 12:04:42.014820] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:43.182 [2024-07-21 12:04:42.014951] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:22:43.182 0 00:22:43.182 12:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 147602 00:22:43.182 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 147602 ']' 00:22:43.182 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 147602 00:22:43.182 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:22:43.182 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:43.182 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 147602 00:22:43.441 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:43.441 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:43.441 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 147602' 00:22:43.441 killing process with pid 147602 00:22:43.441 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 147602 00:22:43.441 12:04:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 147602 00:22:43.441 [2024-07-21 12:04:42.053585] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:43.441 [2024-07-21 12:04:42.089892] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.LGwr10yiQw 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:22:43.700 00:22:43.700 real 0m7.962s 00:22:43.700 user 0m13.141s 00:22:43.700 sys 0m0.929s 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:43.700 12:04:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.700 ************************************ 00:22:43.700 END TEST raid_write_error_test 00:22:43.700 ************************************ 00:22:43.700 12:04:42 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:22:43.700 12:04:42 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:22:43.700 12:04:42 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:22:43.700 12:04:42 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:43.700 12:04:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:22:43.700 ************************************ 00:22:43.700 START TEST raid_state_function_test 00:22:43.700 ************************************ 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 false 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=147807 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:43.700 Process raid pid: 147807 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 147807' 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 147807 /var/tmp/spdk-raid.sock 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 147807 ']' 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:43.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:43.700 12:04:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:43.700 [2024-07-21 12:04:42.487325] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
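For orientation: raid_state_function_test drives the raid module through its configuration state machine using the bdev_svc test application rather than bdevperf. Its first step, traced below, creates a concat array whose base bdevs do not exist yet and checks that the array stays in the "configuring" state with zero discovered members. The sketch below is assembled from the rpc.py and jq calls visible in the trace; the condensed jq predicate stands in for the field-by-field comparison done by verify_raid_bdev_state.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Create the array before any base bdev exists (no -s: superblock disabled).
$rpc -s $sock bdev_raid_create -z 64 -r concat \
     -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Expect state "configuring" and no discovered base bdevs.
$rpc -s $sock bdev_raid_get_bdevs all | jq -e \
    '.[] | select(.name == "Existed_Raid")
         | .state == "configuring" and .num_base_bdevs_discovered == 0'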
00:22:43.700 [2024-07-21 12:04:42.487781] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.959 [2024-07-21 12:04:42.645839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.959 [2024-07-21 12:04:42.744080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.959 [2024-07-21 12:04:42.800092] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:44.892 [2024-07-21 12:04:43.729500] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.892 [2024-07-21 12:04:43.729873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.892 [2024-07-21 12:04:43.730027] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.892 [2024-07-21 12:04:43.730096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.892 [2024-07-21 12:04:43.730236] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.892 [2024-07-21 12:04:43.730435] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.892 [2024-07-21 12:04:43.730589] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:44.892 [2024-07-21 12:04:43.730743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.892 12:04:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:22:45.457 12:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:45.457 "name": "Existed_Raid", 00:22:45.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.457 "strip_size_kb": 64, 00:22:45.457 "state": "configuring", 00:22:45.457 "raid_level": "concat", 00:22:45.457 "superblock": false, 00:22:45.457 "num_base_bdevs": 4, 00:22:45.457 "num_base_bdevs_discovered": 0, 00:22:45.457 "num_base_bdevs_operational": 4, 00:22:45.457 "base_bdevs_list": [ 00:22:45.457 { 00:22:45.457 "name": "BaseBdev1", 00:22:45.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.457 "is_configured": false, 00:22:45.457 "data_offset": 0, 00:22:45.457 "data_size": 0 00:22:45.457 }, 00:22:45.457 { 00:22:45.457 "name": "BaseBdev2", 00:22:45.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.457 "is_configured": false, 00:22:45.457 "data_offset": 0, 00:22:45.457 "data_size": 0 00:22:45.457 }, 00:22:45.457 { 00:22:45.457 "name": "BaseBdev3", 00:22:45.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.457 "is_configured": false, 00:22:45.457 "data_offset": 0, 00:22:45.457 "data_size": 0 00:22:45.457 }, 00:22:45.457 { 00:22:45.457 "name": "BaseBdev4", 00:22:45.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.457 "is_configured": false, 00:22:45.457 "data_offset": 0, 00:22:45.457 "data_size": 0 00:22:45.457 } 00:22:45.457 ] 00:22:45.457 }' 00:22:45.457 12:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:45.457 12:04:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:46.020 12:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:46.020 [2024-07-21 12:04:44.869580] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:46.020 [2024-07-21 12:04:44.869900] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:22:46.278 12:04:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:46.278 [2024-07-21 12:04:45.097636] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:46.278 [2024-07-21 12:04:45.098044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:46.278 [2024-07-21 12:04:45.098159] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:46.278 [2024-07-21 12:04:45.098261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:46.278 [2024-07-21 12:04:45.098463] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:46.278 [2024-07-21 12:04:45.098530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:46.278 [2024-07-21 12:04:45.098661] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:46.278 [2024-07-21 12:04:45.098742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:46.278 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:46.536 [2024-07-21 12:04:45.345174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.536 BaseBdev1 00:22:46.536 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:22:46.536 12:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:22:46.536 12:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:46.536 12:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:46.536 12:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:46.536 12:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:46.536 12:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:46.794 12:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:47.052 [ 00:22:47.052 { 00:22:47.052 "name": "BaseBdev1", 00:22:47.052 "aliases": [ 00:22:47.052 "6ff1af34-d85b-4ee3-b38f-07f03257611b" 00:22:47.052 ], 00:22:47.052 "product_name": "Malloc disk", 00:22:47.052 "block_size": 512, 00:22:47.052 "num_blocks": 65536, 00:22:47.052 "uuid": "6ff1af34-d85b-4ee3-b38f-07f03257611b", 00:22:47.052 "assigned_rate_limits": { 00:22:47.052 "rw_ios_per_sec": 0, 00:22:47.052 "rw_mbytes_per_sec": 0, 00:22:47.052 "r_mbytes_per_sec": 0, 00:22:47.052 "w_mbytes_per_sec": 0 00:22:47.052 }, 00:22:47.052 "claimed": true, 00:22:47.052 "claim_type": "exclusive_write", 00:22:47.052 "zoned": false, 00:22:47.052 "supported_io_types": { 00:22:47.052 "read": true, 00:22:47.052 "write": true, 00:22:47.052 "unmap": true, 00:22:47.052 "write_zeroes": true, 00:22:47.052 "flush": true, 00:22:47.052 "reset": true, 00:22:47.052 "compare": false, 00:22:47.052 "compare_and_write": false, 00:22:47.052 "abort": true, 00:22:47.052 "nvme_admin": false, 00:22:47.052 "nvme_io": false 00:22:47.052 }, 00:22:47.052 "memory_domains": [ 00:22:47.052 { 00:22:47.052 "dma_device_id": "system", 00:22:47.052 "dma_device_type": 1 00:22:47.052 }, 00:22:47.052 { 00:22:47.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.052 "dma_device_type": 2 00:22:47.052 } 00:22:47.052 ], 00:22:47.052 "driver_specific": {} 00:22:47.052 } 00:22:47.052 ] 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local 
raid_bdev_info 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.052 12:04:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:47.310 12:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:47.310 "name": "Existed_Raid", 00:22:47.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.310 "strip_size_kb": 64, 00:22:47.310 "state": "configuring", 00:22:47.310 "raid_level": "concat", 00:22:47.310 "superblock": false, 00:22:47.310 "num_base_bdevs": 4, 00:22:47.310 "num_base_bdevs_discovered": 1, 00:22:47.310 "num_base_bdevs_operational": 4, 00:22:47.310 "base_bdevs_list": [ 00:22:47.310 { 00:22:47.310 "name": "BaseBdev1", 00:22:47.310 "uuid": "6ff1af34-d85b-4ee3-b38f-07f03257611b", 00:22:47.310 "is_configured": true, 00:22:47.310 "data_offset": 0, 00:22:47.310 "data_size": 65536 00:22:47.310 }, 00:22:47.310 { 00:22:47.310 "name": "BaseBdev2", 00:22:47.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.310 "is_configured": false, 00:22:47.310 "data_offset": 0, 00:22:47.310 "data_size": 0 00:22:47.310 }, 00:22:47.310 { 00:22:47.310 "name": "BaseBdev3", 00:22:47.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.310 "is_configured": false, 00:22:47.310 "data_offset": 0, 00:22:47.310 "data_size": 0 00:22:47.310 }, 00:22:47.310 { 00:22:47.310 "name": "BaseBdev4", 00:22:47.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:47.310 "is_configured": false, 00:22:47.310 "data_offset": 0, 00:22:47.310 "data_size": 0 00:22:47.310 } 00:22:47.310 ] 00:22:47.310 }' 00:22:47.310 12:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:47.310 12:04:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:48.244 12:04:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:48.244 [2024-07-21 12:04:47.037585] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:48.244 [2024-07-21 12:04:47.037694] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:22:48.244 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:48.503 [2024-07-21 12:04:47.261684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:48.503 [2024-07-21 12:04:47.263962] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:48.503 [2024-07-21 12:04:47.264063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:48.503 [2024-07-21 12:04:47.264095] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:48.503 [2024-07-21 
12:04:47.264124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:48.503 [2024-07-21 12:04:47.264135] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:48.503 [2024-07-21 12:04:47.264154] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.503 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:48.761 12:04:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:48.761 "name": "Existed_Raid", 00:22:48.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.761 "strip_size_kb": 64, 00:22:48.761 "state": "configuring", 00:22:48.761 "raid_level": "concat", 00:22:48.761 "superblock": false, 00:22:48.761 "num_base_bdevs": 4, 00:22:48.761 "num_base_bdevs_discovered": 1, 00:22:48.761 "num_base_bdevs_operational": 4, 00:22:48.761 "base_bdevs_list": [ 00:22:48.761 { 00:22:48.761 "name": "BaseBdev1", 00:22:48.761 "uuid": "6ff1af34-d85b-4ee3-b38f-07f03257611b", 00:22:48.761 "is_configured": true, 00:22:48.761 "data_offset": 0, 00:22:48.761 "data_size": 65536 00:22:48.761 }, 00:22:48.761 { 00:22:48.761 "name": "BaseBdev2", 00:22:48.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.761 "is_configured": false, 00:22:48.761 "data_offset": 0, 00:22:48.761 "data_size": 0 00:22:48.761 }, 00:22:48.761 { 00:22:48.761 "name": "BaseBdev3", 00:22:48.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.761 "is_configured": false, 00:22:48.761 "data_offset": 0, 00:22:48.761 "data_size": 0 00:22:48.761 }, 00:22:48.761 { 00:22:48.761 "name": "BaseBdev4", 00:22:48.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.761 "is_configured": false, 00:22:48.761 "data_offset": 0, 00:22:48.761 "data_size": 0 00:22:48.761 } 00:22:48.761 ] 00:22:48.761 }' 00:22:48.761 12:04:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:48.761 12:04:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:49.325 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:49.594 [2024-07-21 12:04:48.383127] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:49.594 BaseBdev2 00:22:49.594 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:22:49.594 12:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:22:49.594 12:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:49.594 12:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:49.594 12:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:49.594 12:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:49.594 12:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:49.851 12:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:50.108 [ 00:22:50.108 { 00:22:50.108 "name": "BaseBdev2", 00:22:50.108 "aliases": [ 00:22:50.108 "63d0eacf-91a1-443a-b674-649c757832a7" 00:22:50.108 ], 00:22:50.108 "product_name": "Malloc disk", 00:22:50.108 "block_size": 512, 00:22:50.108 "num_blocks": 65536, 00:22:50.108 "uuid": "63d0eacf-91a1-443a-b674-649c757832a7", 00:22:50.108 "assigned_rate_limits": { 00:22:50.108 "rw_ios_per_sec": 0, 00:22:50.108 "rw_mbytes_per_sec": 0, 00:22:50.108 "r_mbytes_per_sec": 0, 00:22:50.108 "w_mbytes_per_sec": 0 00:22:50.108 }, 00:22:50.108 "claimed": true, 00:22:50.108 "claim_type": "exclusive_write", 00:22:50.108 "zoned": false, 00:22:50.108 "supported_io_types": { 00:22:50.108 "read": true, 00:22:50.108 "write": true, 00:22:50.108 "unmap": true, 00:22:50.108 "write_zeroes": true, 00:22:50.108 "flush": true, 00:22:50.108 "reset": true, 00:22:50.108 "compare": false, 00:22:50.108 "compare_and_write": false, 00:22:50.108 "abort": true, 00:22:50.108 "nvme_admin": false, 00:22:50.108 "nvme_io": false 00:22:50.108 }, 00:22:50.108 "memory_domains": [ 00:22:50.108 { 00:22:50.108 "dma_device_id": "system", 00:22:50.108 "dma_device_type": 1 00:22:50.108 }, 00:22:50.108 { 00:22:50.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:50.108 "dma_device_type": 2 00:22:50.108 } 00:22:50.108 ], 00:22:50.108 "driver_specific": {} 00:22:50.108 } 00:22:50.108 ] 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.108 12:04:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:50.366 12:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:50.366 "name": "Existed_Raid", 00:22:50.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.366 "strip_size_kb": 64, 00:22:50.366 "state": "configuring", 00:22:50.366 "raid_level": "concat", 00:22:50.366 "superblock": false, 00:22:50.366 "num_base_bdevs": 4, 00:22:50.366 "num_base_bdevs_discovered": 2, 00:22:50.366 "num_base_bdevs_operational": 4, 00:22:50.366 "base_bdevs_list": [ 00:22:50.366 { 00:22:50.366 "name": "BaseBdev1", 00:22:50.366 "uuid": "6ff1af34-d85b-4ee3-b38f-07f03257611b", 00:22:50.366 "is_configured": true, 00:22:50.366 "data_offset": 0, 00:22:50.366 "data_size": 65536 00:22:50.366 }, 00:22:50.366 { 00:22:50.366 "name": "BaseBdev2", 00:22:50.366 "uuid": "63d0eacf-91a1-443a-b674-649c757832a7", 00:22:50.366 "is_configured": true, 00:22:50.366 "data_offset": 0, 00:22:50.366 "data_size": 65536 00:22:50.366 }, 00:22:50.366 { 00:22:50.366 "name": "BaseBdev3", 00:22:50.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.366 "is_configured": false, 00:22:50.366 "data_offset": 0, 00:22:50.366 "data_size": 0 00:22:50.366 }, 00:22:50.366 { 00:22:50.366 "name": "BaseBdev4", 00:22:50.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:50.366 "is_configured": false, 00:22:50.366 "data_offset": 0, 00:22:50.366 "data_size": 0 00:22:50.366 } 00:22:50.366 ] 00:22:50.366 }' 00:22:50.366 12:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:50.366 12:04:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:50.932 12:04:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:51.190 [2024-07-21 12:04:49.992440] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:51.190 BaseBdev3 00:22:51.190 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:22:51.190 12:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:22:51.190 12:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:51.190 12:04:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@897 -- # local i 00:22:51.190 12:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:22:51.190 12:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:51.190 12:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:51.448 12:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:51.707 [ 00:22:51.707 { 00:22:51.707 "name": "BaseBdev3", 00:22:51.707 "aliases": [ 00:22:51.707 "ea92deba-e1cb-42a3-b429-375a3b5b2c00" 00:22:51.707 ], 00:22:51.707 "product_name": "Malloc disk", 00:22:51.707 "block_size": 512, 00:22:51.707 "num_blocks": 65536, 00:22:51.707 "uuid": "ea92deba-e1cb-42a3-b429-375a3b5b2c00", 00:22:51.707 "assigned_rate_limits": { 00:22:51.707 "rw_ios_per_sec": 0, 00:22:51.707 "rw_mbytes_per_sec": 0, 00:22:51.707 "r_mbytes_per_sec": 0, 00:22:51.707 "w_mbytes_per_sec": 0 00:22:51.707 }, 00:22:51.707 "claimed": true, 00:22:51.707 "claim_type": "exclusive_write", 00:22:51.707 "zoned": false, 00:22:51.707 "supported_io_types": { 00:22:51.707 "read": true, 00:22:51.707 "write": true, 00:22:51.707 "unmap": true, 00:22:51.707 "write_zeroes": true, 00:22:51.707 "flush": true, 00:22:51.707 "reset": true, 00:22:51.707 "compare": false, 00:22:51.707 "compare_and_write": false, 00:22:51.707 "abort": true, 00:22:51.707 "nvme_admin": false, 00:22:51.707 "nvme_io": false 00:22:51.707 }, 00:22:51.707 "memory_domains": [ 00:22:51.707 { 00:22:51.707 "dma_device_id": "system", 00:22:51.707 "dma_device_type": 1 00:22:51.707 }, 00:22:51.707 { 00:22:51.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:51.707 "dma_device_type": 2 00:22:51.707 } 00:22:51.707 ], 00:22:51.707 "driver_specific": {} 00:22:51.707 } 00:22:51.707 ] 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.707 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.965 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:51.965 "name": "Existed_Raid", 00:22:51.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.965 "strip_size_kb": 64, 00:22:51.965 "state": "configuring", 00:22:51.965 "raid_level": "concat", 00:22:51.965 "superblock": false, 00:22:51.965 "num_base_bdevs": 4, 00:22:51.965 "num_base_bdevs_discovered": 3, 00:22:51.965 "num_base_bdevs_operational": 4, 00:22:51.965 "base_bdevs_list": [ 00:22:51.965 { 00:22:51.965 "name": "BaseBdev1", 00:22:51.965 "uuid": "6ff1af34-d85b-4ee3-b38f-07f03257611b", 00:22:51.965 "is_configured": true, 00:22:51.965 "data_offset": 0, 00:22:51.965 "data_size": 65536 00:22:51.965 }, 00:22:51.965 { 00:22:51.965 "name": "BaseBdev2", 00:22:51.965 "uuid": "63d0eacf-91a1-443a-b674-649c757832a7", 00:22:51.965 "is_configured": true, 00:22:51.965 "data_offset": 0, 00:22:51.965 "data_size": 65536 00:22:51.965 }, 00:22:51.965 { 00:22:51.965 "name": "BaseBdev3", 00:22:51.965 "uuid": "ea92deba-e1cb-42a3-b429-375a3b5b2c00", 00:22:51.965 "is_configured": true, 00:22:51.965 "data_offset": 0, 00:22:51.965 "data_size": 65536 00:22:51.965 }, 00:22:51.965 { 00:22:51.965 "name": "BaseBdev4", 00:22:51.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.965 "is_configured": false, 00:22:51.965 "data_offset": 0, 00:22:51.965 "data_size": 0 00:22:51.965 } 00:22:51.965 ] 00:22:51.965 }' 00:22:51.965 12:04:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:51.965 12:04:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:52.899 12:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:52.899 [2024-07-21 12:04:51.632427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:52.899 [2024-07-21 12:04:51.632494] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:22:52.899 [2024-07-21 12:04:51.632508] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:22:52.899 [2024-07-21 12:04:51.632650] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:22:52.899 [2024-07-21 12:04:51.633083] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:22:52.899 [2024-07-21 12:04:51.633098] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:22:52.899 [2024-07-21 12:04:51.633381] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:52.899 BaseBdev4 00:22:52.899 12:04:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:22:52.899 12:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:22:52.899 12:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:22:52.899 12:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:22:52.899 12:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- 
# [[ -z '' ]] 00:22:52.899 12:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:22:52.899 12:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:53.156 12:04:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:22:53.414 [ 00:22:53.414 { 00:22:53.414 "name": "BaseBdev4", 00:22:53.414 "aliases": [ 00:22:53.414 "8d68311b-9a97-4120-ab5c-b36511c5a789" 00:22:53.414 ], 00:22:53.414 "product_name": "Malloc disk", 00:22:53.414 "block_size": 512, 00:22:53.414 "num_blocks": 65536, 00:22:53.414 "uuid": "8d68311b-9a97-4120-ab5c-b36511c5a789", 00:22:53.414 "assigned_rate_limits": { 00:22:53.414 "rw_ios_per_sec": 0, 00:22:53.414 "rw_mbytes_per_sec": 0, 00:22:53.414 "r_mbytes_per_sec": 0, 00:22:53.414 "w_mbytes_per_sec": 0 00:22:53.414 }, 00:22:53.414 "claimed": true, 00:22:53.414 "claim_type": "exclusive_write", 00:22:53.414 "zoned": false, 00:22:53.414 "supported_io_types": { 00:22:53.414 "read": true, 00:22:53.414 "write": true, 00:22:53.414 "unmap": true, 00:22:53.414 "write_zeroes": true, 00:22:53.414 "flush": true, 00:22:53.414 "reset": true, 00:22:53.414 "compare": false, 00:22:53.414 "compare_and_write": false, 00:22:53.414 "abort": true, 00:22:53.414 "nvme_admin": false, 00:22:53.414 "nvme_io": false 00:22:53.414 }, 00:22:53.414 "memory_domains": [ 00:22:53.414 { 00:22:53.414 "dma_device_id": "system", 00:22:53.414 "dma_device_type": 1 00:22:53.414 }, 00:22:53.414 { 00:22:53.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:53.414 "dma_device_type": 2 00:22:53.414 } 00:22:53.414 ], 00:22:53.414 "driver_specific": {} 00:22:53.414 } 00:22:53.414 ] 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:53.414 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.414 12:04:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:53.673 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:53.673 "name": "Existed_Raid", 00:22:53.673 "uuid": "9be9a6f1-d49f-4d8c-bf79-8871e257520f", 00:22:53.673 "strip_size_kb": 64, 00:22:53.673 "state": "online", 00:22:53.673 "raid_level": "concat", 00:22:53.673 "superblock": false, 00:22:53.673 "num_base_bdevs": 4, 00:22:53.673 "num_base_bdevs_discovered": 4, 00:22:53.673 "num_base_bdevs_operational": 4, 00:22:53.673 "base_bdevs_list": [ 00:22:53.673 { 00:22:53.673 "name": "BaseBdev1", 00:22:53.673 "uuid": "6ff1af34-d85b-4ee3-b38f-07f03257611b", 00:22:53.673 "is_configured": true, 00:22:53.673 "data_offset": 0, 00:22:53.673 "data_size": 65536 00:22:53.673 }, 00:22:53.673 { 00:22:53.673 "name": "BaseBdev2", 00:22:53.673 "uuid": "63d0eacf-91a1-443a-b674-649c757832a7", 00:22:53.673 "is_configured": true, 00:22:53.673 "data_offset": 0, 00:22:53.673 "data_size": 65536 00:22:53.673 }, 00:22:53.673 { 00:22:53.673 "name": "BaseBdev3", 00:22:53.673 "uuid": "ea92deba-e1cb-42a3-b429-375a3b5b2c00", 00:22:53.673 "is_configured": true, 00:22:53.673 "data_offset": 0, 00:22:53.673 "data_size": 65536 00:22:53.673 }, 00:22:53.673 { 00:22:53.673 "name": "BaseBdev4", 00:22:53.673 "uuid": "8d68311b-9a97-4120-ab5c-b36511c5a789", 00:22:53.673 "is_configured": true, 00:22:53.673 "data_offset": 0, 00:22:53.673 "data_size": 65536 00:22:53.673 } 00:22:53.673 ] 00:22:53.673 }' 00:22:53.673 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:53.673 12:04:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:54.239 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:22:54.239 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:22:54.239 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:22:54.239 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:22:54.239 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:22:54.239 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:22:54.239 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:22:54.239 12:04:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:22:54.497 [2024-07-21 12:04:53.181166] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:54.497 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:22:54.497 "name": "Existed_Raid", 00:22:54.497 "aliases": [ 00:22:54.497 "9be9a6f1-d49f-4d8c-bf79-8871e257520f" 00:22:54.497 ], 00:22:54.497 "product_name": "Raid Volume", 00:22:54.497 "block_size": 512, 00:22:54.497 "num_blocks": 262144, 00:22:54.497 "uuid": "9be9a6f1-d49f-4d8c-bf79-8871e257520f", 00:22:54.497 "assigned_rate_limits": { 00:22:54.497 "rw_ios_per_sec": 0, 00:22:54.497 "rw_mbytes_per_sec": 0, 00:22:54.497 "r_mbytes_per_sec": 0, 00:22:54.497 "w_mbytes_per_sec": 0 00:22:54.497 }, 00:22:54.497 "claimed": false, 00:22:54.497 "zoned": false, 00:22:54.497 "supported_io_types": { 00:22:54.497 "read": true, 
00:22:54.497 "write": true, 00:22:54.497 "unmap": true, 00:22:54.497 "write_zeroes": true, 00:22:54.497 "flush": true, 00:22:54.497 "reset": true, 00:22:54.497 "compare": false, 00:22:54.497 "compare_and_write": false, 00:22:54.497 "abort": false, 00:22:54.497 "nvme_admin": false, 00:22:54.497 "nvme_io": false 00:22:54.497 }, 00:22:54.497 "memory_domains": [ 00:22:54.497 { 00:22:54.497 "dma_device_id": "system", 00:22:54.497 "dma_device_type": 1 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.497 "dma_device_type": 2 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "dma_device_id": "system", 00:22:54.497 "dma_device_type": 1 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.497 "dma_device_type": 2 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "dma_device_id": "system", 00:22:54.497 "dma_device_type": 1 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.497 "dma_device_type": 2 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "dma_device_id": "system", 00:22:54.497 "dma_device_type": 1 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.497 "dma_device_type": 2 00:22:54.497 } 00:22:54.497 ], 00:22:54.497 "driver_specific": { 00:22:54.497 "raid": { 00:22:54.497 "uuid": "9be9a6f1-d49f-4d8c-bf79-8871e257520f", 00:22:54.497 "strip_size_kb": 64, 00:22:54.497 "state": "online", 00:22:54.497 "raid_level": "concat", 00:22:54.497 "superblock": false, 00:22:54.497 "num_base_bdevs": 4, 00:22:54.497 "num_base_bdevs_discovered": 4, 00:22:54.497 "num_base_bdevs_operational": 4, 00:22:54.497 "base_bdevs_list": [ 00:22:54.497 { 00:22:54.497 "name": "BaseBdev1", 00:22:54.497 "uuid": "6ff1af34-d85b-4ee3-b38f-07f03257611b", 00:22:54.497 "is_configured": true, 00:22:54.497 "data_offset": 0, 00:22:54.497 "data_size": 65536 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "name": "BaseBdev2", 00:22:54.497 "uuid": "63d0eacf-91a1-443a-b674-649c757832a7", 00:22:54.497 "is_configured": true, 00:22:54.497 "data_offset": 0, 00:22:54.497 "data_size": 65536 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "name": "BaseBdev3", 00:22:54.497 "uuid": "ea92deba-e1cb-42a3-b429-375a3b5b2c00", 00:22:54.497 "is_configured": true, 00:22:54.497 "data_offset": 0, 00:22:54.497 "data_size": 65536 00:22:54.497 }, 00:22:54.497 { 00:22:54.497 "name": "BaseBdev4", 00:22:54.497 "uuid": "8d68311b-9a97-4120-ab5c-b36511c5a789", 00:22:54.497 "is_configured": true, 00:22:54.497 "data_offset": 0, 00:22:54.497 "data_size": 65536 00:22:54.497 } 00:22:54.497 ] 00:22:54.497 } 00:22:54.497 } 00:22:54.497 }' 00:22:54.497 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:22:54.497 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:22:54.497 BaseBdev2 00:22:54.497 BaseBdev3 00:22:54.497 BaseBdev4' 00:22:54.497 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:54.497 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:22:54.497 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:54.754 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:54.754 "name": "BaseBdev1", 
00:22:54.754 "aliases": [ 00:22:54.754 "6ff1af34-d85b-4ee3-b38f-07f03257611b" 00:22:54.754 ], 00:22:54.754 "product_name": "Malloc disk", 00:22:54.754 "block_size": 512, 00:22:54.754 "num_blocks": 65536, 00:22:54.754 "uuid": "6ff1af34-d85b-4ee3-b38f-07f03257611b", 00:22:54.754 "assigned_rate_limits": { 00:22:54.754 "rw_ios_per_sec": 0, 00:22:54.754 "rw_mbytes_per_sec": 0, 00:22:54.754 "r_mbytes_per_sec": 0, 00:22:54.754 "w_mbytes_per_sec": 0 00:22:54.754 }, 00:22:54.754 "claimed": true, 00:22:54.754 "claim_type": "exclusive_write", 00:22:54.754 "zoned": false, 00:22:54.754 "supported_io_types": { 00:22:54.754 "read": true, 00:22:54.754 "write": true, 00:22:54.754 "unmap": true, 00:22:54.754 "write_zeroes": true, 00:22:54.754 "flush": true, 00:22:54.754 "reset": true, 00:22:54.754 "compare": false, 00:22:54.754 "compare_and_write": false, 00:22:54.754 "abort": true, 00:22:54.754 "nvme_admin": false, 00:22:54.754 "nvme_io": false 00:22:54.754 }, 00:22:54.754 "memory_domains": [ 00:22:54.754 { 00:22:54.754 "dma_device_id": "system", 00:22:54.754 "dma_device_type": 1 00:22:54.754 }, 00:22:54.754 { 00:22:54.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:54.754 "dma_device_type": 2 00:22:54.754 } 00:22:54.754 ], 00:22:54.754 "driver_specific": {} 00:22:54.754 }' 00:22:54.754 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.754 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:54.754 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:55.011 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.011 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.011 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.011 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.011 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.011 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.011 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.011 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.268 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.268 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:55.268 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:22:55.268 12:04:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:55.268 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:55.268 "name": "BaseBdev2", 00:22:55.268 "aliases": [ 00:22:55.268 "63d0eacf-91a1-443a-b674-649c757832a7" 00:22:55.268 ], 00:22:55.268 "product_name": "Malloc disk", 00:22:55.268 "block_size": 512, 00:22:55.268 "num_blocks": 65536, 00:22:55.268 "uuid": "63d0eacf-91a1-443a-b674-649c757832a7", 00:22:55.268 "assigned_rate_limits": { 00:22:55.268 "rw_ios_per_sec": 0, 00:22:55.268 "rw_mbytes_per_sec": 0, 00:22:55.268 "r_mbytes_per_sec": 0, 00:22:55.268 "w_mbytes_per_sec": 0 00:22:55.268 }, 00:22:55.268 "claimed": true, 
00:22:55.268 "claim_type": "exclusive_write", 00:22:55.268 "zoned": false, 00:22:55.268 "supported_io_types": { 00:22:55.268 "read": true, 00:22:55.268 "write": true, 00:22:55.268 "unmap": true, 00:22:55.269 "write_zeroes": true, 00:22:55.269 "flush": true, 00:22:55.269 "reset": true, 00:22:55.269 "compare": false, 00:22:55.269 "compare_and_write": false, 00:22:55.269 "abort": true, 00:22:55.269 "nvme_admin": false, 00:22:55.269 "nvme_io": false 00:22:55.269 }, 00:22:55.269 "memory_domains": [ 00:22:55.269 { 00:22:55.269 "dma_device_id": "system", 00:22:55.269 "dma_device_type": 1 00:22:55.269 }, 00:22:55.269 { 00:22:55.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:55.269 "dma_device_type": 2 00:22:55.269 } 00:22:55.269 ], 00:22:55.269 "driver_specific": {} 00:22:55.269 }' 00:22:55.269 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.526 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:55.526 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:55.526 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.526 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:55.526 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:55.526 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.526 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:55.783 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:55.783 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.783 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:55.783 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:55.783 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:55.783 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:22:55.783 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:56.039 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:56.039 "name": "BaseBdev3", 00:22:56.039 "aliases": [ 00:22:56.039 "ea92deba-e1cb-42a3-b429-375a3b5b2c00" 00:22:56.039 ], 00:22:56.039 "product_name": "Malloc disk", 00:22:56.039 "block_size": 512, 00:22:56.039 "num_blocks": 65536, 00:22:56.039 "uuid": "ea92deba-e1cb-42a3-b429-375a3b5b2c00", 00:22:56.039 "assigned_rate_limits": { 00:22:56.039 "rw_ios_per_sec": 0, 00:22:56.039 "rw_mbytes_per_sec": 0, 00:22:56.039 "r_mbytes_per_sec": 0, 00:22:56.039 "w_mbytes_per_sec": 0 00:22:56.039 }, 00:22:56.039 "claimed": true, 00:22:56.039 "claim_type": "exclusive_write", 00:22:56.039 "zoned": false, 00:22:56.039 "supported_io_types": { 00:22:56.039 "read": true, 00:22:56.039 "write": true, 00:22:56.039 "unmap": true, 00:22:56.039 "write_zeroes": true, 00:22:56.039 "flush": true, 00:22:56.039 "reset": true, 00:22:56.039 "compare": false, 00:22:56.039 "compare_and_write": false, 00:22:56.039 "abort": true, 00:22:56.040 "nvme_admin": false, 00:22:56.040 "nvme_io": false 00:22:56.040 }, 00:22:56.040 "memory_domains": [ 
00:22:56.040 { 00:22:56.040 "dma_device_id": "system", 00:22:56.040 "dma_device_type": 1 00:22:56.040 }, 00:22:56.040 { 00:22:56.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.040 "dma_device_type": 2 00:22:56.040 } 00:22:56.040 ], 00:22:56.040 "driver_specific": {} 00:22:56.040 }' 00:22:56.040 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:56.040 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:56.297 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:56.297 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:56.297 12:04:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:56.297 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:56.297 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:56.297 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:56.297 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:56.297 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:56.554 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:56.554 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:56.554 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:22:56.554 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:22:56.554 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:22:56.812 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:22:56.812 "name": "BaseBdev4", 00:22:56.812 "aliases": [ 00:22:56.812 "8d68311b-9a97-4120-ab5c-b36511c5a789" 00:22:56.812 ], 00:22:56.812 "product_name": "Malloc disk", 00:22:56.812 "block_size": 512, 00:22:56.812 "num_blocks": 65536, 00:22:56.812 "uuid": "8d68311b-9a97-4120-ab5c-b36511c5a789", 00:22:56.812 "assigned_rate_limits": { 00:22:56.812 "rw_ios_per_sec": 0, 00:22:56.812 "rw_mbytes_per_sec": 0, 00:22:56.812 "r_mbytes_per_sec": 0, 00:22:56.812 "w_mbytes_per_sec": 0 00:22:56.812 }, 00:22:56.812 "claimed": true, 00:22:56.812 "claim_type": "exclusive_write", 00:22:56.812 "zoned": false, 00:22:56.812 "supported_io_types": { 00:22:56.812 "read": true, 00:22:56.812 "write": true, 00:22:56.812 "unmap": true, 00:22:56.812 "write_zeroes": true, 00:22:56.812 "flush": true, 00:22:56.812 "reset": true, 00:22:56.812 "compare": false, 00:22:56.812 "compare_and_write": false, 00:22:56.812 "abort": true, 00:22:56.812 "nvme_admin": false, 00:22:56.812 "nvme_io": false 00:22:56.812 }, 00:22:56.812 "memory_domains": [ 00:22:56.812 { 00:22:56.812 "dma_device_id": "system", 00:22:56.812 "dma_device_type": 1 00:22:56.812 }, 00:22:56.812 { 00:22:56.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:56.812 "dma_device_type": 2 00:22:56.812 } 00:22:56.812 ], 00:22:56.812 "driver_specific": {} 00:22:56.812 }' 00:22:56.812 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:22:56.812 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
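[Annotation, not part of the captured trace: the xtrace lines around this point come from verify_raid_bdev_properties, which fetches each base bdev's JSON with `bdev_get_bdevs -b <name>` over the raid test socket and compares block_size/md_size/md_interleave/dif_type with jq. The sketch below is a minimal standalone reproduction of that pattern under the same rpc.py path, socket, and bdev names shown in the trace; the helper name check_prop and the expected values are illustrative assumptions, not taken from the test script.]

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# check_prop <bdev> <jq filter> <expected>: fetch one bdev's JSON and compare a single field.
# bdev_get_bdevs -b <name> returns a one-element array, so .[0] selects the bdev object;
# jq -r prints "null" for fields a malloc bdev does not report (md_size, md_interleave, dif_type).
check_prop() {
    local got
    got=$("$rpc" -s "$sock" bdev_get_bdevs -b "$1" | jq -r "$2")
    [[ "$got" == "$3" ]] || echo "unexpected $2 for $1: got $got, expected $3"
}

check_prop BaseBdev3 '.[0].block_size'    512
check_prop BaseBdev3 '.[0].md_size'       null
check_prop BaseBdev3 '.[0].md_interleave' null
check_prop BaseBdev3 '.[0].dif_type'      null
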
00:22:56.812 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:22:56.812 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:56.812 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:22:57.070 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:22:57.070 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.070 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:22:57.070 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:22:57.070 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.070 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:22:57.070 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:22:57.070 12:04:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:57.328 [2024-07-21 12:04:56.149635] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:57.328 [2024-07-21 12:04:56.149685] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:57.328 [2024-07-21 12:04:56.149797] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.328 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:22:57.892 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:22:57.892 "name": "Existed_Raid", 00:22:57.892 "uuid": "9be9a6f1-d49f-4d8c-bf79-8871e257520f", 00:22:57.892 "strip_size_kb": 64, 00:22:57.892 "state": "offline", 00:22:57.892 "raid_level": "concat", 00:22:57.892 "superblock": false, 00:22:57.892 "num_base_bdevs": 4, 00:22:57.893 "num_base_bdevs_discovered": 3, 00:22:57.893 "num_base_bdevs_operational": 3, 00:22:57.893 "base_bdevs_list": [ 00:22:57.893 { 00:22:57.893 "name": null, 00:22:57.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.893 "is_configured": false, 00:22:57.893 "data_offset": 0, 00:22:57.893 "data_size": 65536 00:22:57.893 }, 00:22:57.893 { 00:22:57.893 "name": "BaseBdev2", 00:22:57.893 "uuid": "63d0eacf-91a1-443a-b674-649c757832a7", 00:22:57.893 "is_configured": true, 00:22:57.893 "data_offset": 0, 00:22:57.893 "data_size": 65536 00:22:57.893 }, 00:22:57.893 { 00:22:57.893 "name": "BaseBdev3", 00:22:57.893 "uuid": "ea92deba-e1cb-42a3-b429-375a3b5b2c00", 00:22:57.893 "is_configured": true, 00:22:57.893 "data_offset": 0, 00:22:57.893 "data_size": 65536 00:22:57.893 }, 00:22:57.893 { 00:22:57.893 "name": "BaseBdev4", 00:22:57.893 "uuid": "8d68311b-9a97-4120-ab5c-b36511c5a789", 00:22:57.893 "is_configured": true, 00:22:57.893 "data_offset": 0, 00:22:57.893 "data_size": 65536 00:22:57.893 } 00:22:57.893 ] 00:22:57.893 }' 00:22:57.893 12:04:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:22:57.893 12:04:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:22:58.507 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:22:58.507 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:58.507 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.507 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:58.507 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:58.507 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:58.507 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:58.764 [2024-07-21 12:04:57.615171] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:59.022 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:59.022 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:59.022 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.022 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:59.280 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:59.280 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:59.280 12:04:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:59.280 [2024-07-21 12:04:58.131694] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:59.538 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:22:59.538 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:22:59.538 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.538 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:22:59.795 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:22:59.795 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:59.795 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:22:59.795 [2024-07-21 12:04:58.643078] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:22:59.795 [2024-07-21 12:04:58.643175] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:23:00.052 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:00.052 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:00.052 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.052 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:00.310 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:00.310 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:00.310 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:00.310 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:00.310 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:00.310 12:04:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:00.310 BaseBdev2 00:23:00.567 12:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:23:00.567 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:00.567 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:00.567 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:00.567 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:00.567 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:00.567 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.567 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:00.826 [ 00:23:00.826 { 00:23:00.826 "name": "BaseBdev2", 00:23:00.826 "aliases": [ 00:23:00.826 "396592c7-d725-449a-92df-d228b7711861" 00:23:00.826 ], 00:23:00.826 "product_name": "Malloc disk", 00:23:00.826 "block_size": 512, 00:23:00.826 "num_blocks": 65536, 00:23:00.826 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:00.826 "assigned_rate_limits": { 00:23:00.826 "rw_ios_per_sec": 0, 00:23:00.826 "rw_mbytes_per_sec": 0, 00:23:00.826 "r_mbytes_per_sec": 0, 00:23:00.826 "w_mbytes_per_sec": 0 00:23:00.826 }, 00:23:00.826 "claimed": false, 00:23:00.826 "zoned": false, 00:23:00.826 "supported_io_types": { 00:23:00.826 "read": true, 00:23:00.826 "write": true, 00:23:00.826 "unmap": true, 00:23:00.826 "write_zeroes": true, 00:23:00.826 "flush": true, 00:23:00.826 "reset": true, 00:23:00.826 "compare": false, 00:23:00.826 "compare_and_write": false, 00:23:00.826 "abort": true, 00:23:00.826 "nvme_admin": false, 00:23:00.826 "nvme_io": false 00:23:00.826 }, 00:23:00.826 "memory_domains": [ 00:23:00.826 { 00:23:00.826 "dma_device_id": "system", 00:23:00.826 "dma_device_type": 1 00:23:00.826 }, 00:23:00.826 { 00:23:00.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.826 "dma_device_type": 2 00:23:00.826 } 00:23:00.826 ], 00:23:00.826 "driver_specific": {} 00:23:00.826 } 00:23:00.826 ] 00:23:00.826 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:00.826 12:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:00.826 12:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:00.826 12:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:01.083 BaseBdev3 00:23:01.083 12:04:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:01.083 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:01.083 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:01.083 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:01.083 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:01.083 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:01.083 12:04:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:01.340 12:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:01.598 [ 00:23:01.598 { 00:23:01.598 "name": "BaseBdev3", 00:23:01.598 "aliases": [ 00:23:01.598 "2c65695b-8741-4371-b9d5-82ae3fd1992b" 00:23:01.598 ], 00:23:01.598 "product_name": "Malloc disk", 00:23:01.598 "block_size": 512, 00:23:01.598 "num_blocks": 65536, 00:23:01.598 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:01.598 "assigned_rate_limits": { 00:23:01.598 "rw_ios_per_sec": 0, 00:23:01.598 "rw_mbytes_per_sec": 0, 00:23:01.598 "r_mbytes_per_sec": 0, 00:23:01.598 "w_mbytes_per_sec": 0 00:23:01.598 }, 00:23:01.598 
"claimed": false, 00:23:01.598 "zoned": false, 00:23:01.598 "supported_io_types": { 00:23:01.598 "read": true, 00:23:01.598 "write": true, 00:23:01.598 "unmap": true, 00:23:01.598 "write_zeroes": true, 00:23:01.598 "flush": true, 00:23:01.598 "reset": true, 00:23:01.598 "compare": false, 00:23:01.598 "compare_and_write": false, 00:23:01.598 "abort": true, 00:23:01.598 "nvme_admin": false, 00:23:01.598 "nvme_io": false 00:23:01.598 }, 00:23:01.598 "memory_domains": [ 00:23:01.598 { 00:23:01.598 "dma_device_id": "system", 00:23:01.598 "dma_device_type": 1 00:23:01.598 }, 00:23:01.598 { 00:23:01.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.598 "dma_device_type": 2 00:23:01.598 } 00:23:01.598 ], 00:23:01.598 "driver_specific": {} 00:23:01.598 } 00:23:01.598 ] 00:23:01.598 12:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:01.598 12:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:01.598 12:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:01.598 12:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:01.855 BaseBdev4 00:23:01.855 12:05:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:01.855 12:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:01.855 12:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:01.855 12:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:01.855 12:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:01.855 12:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:01.855 12:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:02.113 12:05:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:02.370 [ 00:23:02.370 { 00:23:02.370 "name": "BaseBdev4", 00:23:02.370 "aliases": [ 00:23:02.370 "ade65953-7554-4a4b-b3d8-b0e85f2117ba" 00:23:02.370 ], 00:23:02.370 "product_name": "Malloc disk", 00:23:02.370 "block_size": 512, 00:23:02.370 "num_blocks": 65536, 00:23:02.370 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:02.370 "assigned_rate_limits": { 00:23:02.370 "rw_ios_per_sec": 0, 00:23:02.370 "rw_mbytes_per_sec": 0, 00:23:02.370 "r_mbytes_per_sec": 0, 00:23:02.370 "w_mbytes_per_sec": 0 00:23:02.370 }, 00:23:02.370 "claimed": false, 00:23:02.370 "zoned": false, 00:23:02.370 "supported_io_types": { 00:23:02.370 "read": true, 00:23:02.370 "write": true, 00:23:02.370 "unmap": true, 00:23:02.370 "write_zeroes": true, 00:23:02.370 "flush": true, 00:23:02.370 "reset": true, 00:23:02.370 "compare": false, 00:23:02.370 "compare_and_write": false, 00:23:02.370 "abort": true, 00:23:02.370 "nvme_admin": false, 00:23:02.370 "nvme_io": false 00:23:02.370 }, 00:23:02.370 "memory_domains": [ 00:23:02.370 { 00:23:02.370 "dma_device_id": "system", 00:23:02.370 "dma_device_type": 1 00:23:02.370 }, 00:23:02.370 { 00:23:02.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:23:02.370 "dma_device_type": 2 00:23:02.370 } 00:23:02.370 ], 00:23:02.370 "driver_specific": {} 00:23:02.370 } 00:23:02.370 ] 00:23:02.370 12:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:02.370 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:02.370 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:02.370 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:02.629 [2024-07-21 12:05:01.363378] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:02.629 [2024-07-21 12:05:01.363477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:02.629 [2024-07-21 12:05:01.363510] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:02.629 [2024-07-21 12:05:01.365688] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:02.629 [2024-07-21 12:05:01.365774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.629 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.886 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:02.886 "name": "Existed_Raid", 00:23:02.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.886 "strip_size_kb": 64, 00:23:02.886 "state": "configuring", 00:23:02.886 "raid_level": "concat", 00:23:02.886 "superblock": false, 00:23:02.886 "num_base_bdevs": 4, 00:23:02.886 "num_base_bdevs_discovered": 3, 00:23:02.886 "num_base_bdevs_operational": 4, 00:23:02.886 "base_bdevs_list": [ 00:23:02.886 { 00:23:02.886 "name": "BaseBdev1", 00:23:02.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:02.886 "is_configured": false, 00:23:02.886 "data_offset": 0, 00:23:02.886 "data_size": 0 00:23:02.886 }, 00:23:02.886 { 
00:23:02.886 "name": "BaseBdev2", 00:23:02.886 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:02.886 "is_configured": true, 00:23:02.886 "data_offset": 0, 00:23:02.886 "data_size": 65536 00:23:02.886 }, 00:23:02.886 { 00:23:02.886 "name": "BaseBdev3", 00:23:02.886 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:02.886 "is_configured": true, 00:23:02.886 "data_offset": 0, 00:23:02.886 "data_size": 65536 00:23:02.886 }, 00:23:02.886 { 00:23:02.886 "name": "BaseBdev4", 00:23:02.886 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:02.886 "is_configured": true, 00:23:02.886 "data_offset": 0, 00:23:02.886 "data_size": 65536 00:23:02.886 } 00:23:02.886 ] 00:23:02.886 }' 00:23:02.886 12:05:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:02.886 12:05:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:03.453 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:03.711 [2024-07-21 12:05:02.503639] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.711 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.968 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:03.968 "name": "Existed_Raid", 00:23:03.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.968 "strip_size_kb": 64, 00:23:03.968 "state": "configuring", 00:23:03.968 "raid_level": "concat", 00:23:03.968 "superblock": false, 00:23:03.968 "num_base_bdevs": 4, 00:23:03.968 "num_base_bdevs_discovered": 2, 00:23:03.969 "num_base_bdevs_operational": 4, 00:23:03.969 "base_bdevs_list": [ 00:23:03.969 { 00:23:03.969 "name": "BaseBdev1", 00:23:03.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.969 "is_configured": false, 00:23:03.969 "data_offset": 0, 00:23:03.969 "data_size": 0 00:23:03.969 }, 00:23:03.969 { 00:23:03.969 "name": null, 00:23:03.969 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:03.969 "is_configured": false, 00:23:03.969 
"data_offset": 0, 00:23:03.969 "data_size": 65536 00:23:03.969 }, 00:23:03.969 { 00:23:03.969 "name": "BaseBdev3", 00:23:03.969 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:03.969 "is_configured": true, 00:23:03.969 "data_offset": 0, 00:23:03.969 "data_size": 65536 00:23:03.969 }, 00:23:03.969 { 00:23:03.969 "name": "BaseBdev4", 00:23:03.969 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:03.969 "is_configured": true, 00:23:03.969 "data_offset": 0, 00:23:03.969 "data_size": 65536 00:23:03.969 } 00:23:03.969 ] 00:23:03.969 }' 00:23:03.969 12:05:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:03.969 12:05:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:04.902 12:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.902 12:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:04.902 12:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:04.902 12:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:05.159 [2024-07-21 12:05:03.948680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:05.159 BaseBdev1 00:23:05.159 12:05:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:05.159 12:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:05.159 12:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:05.159 12:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:23:05.159 12:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:05.159 12:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:05.159 12:05:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:05.417 12:05:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:05.675 [ 00:23:05.675 { 00:23:05.675 "name": "BaseBdev1", 00:23:05.675 "aliases": [ 00:23:05.675 "30009987-eb31-4ea2-b0da-f25e05352fa1" 00:23:05.675 ], 00:23:05.675 "product_name": "Malloc disk", 00:23:05.675 "block_size": 512, 00:23:05.675 "num_blocks": 65536, 00:23:05.675 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:05.675 "assigned_rate_limits": { 00:23:05.675 "rw_ios_per_sec": 0, 00:23:05.675 "rw_mbytes_per_sec": 0, 00:23:05.675 "r_mbytes_per_sec": 0, 00:23:05.675 "w_mbytes_per_sec": 0 00:23:05.675 }, 00:23:05.675 "claimed": true, 00:23:05.675 "claim_type": "exclusive_write", 00:23:05.675 "zoned": false, 00:23:05.675 "supported_io_types": { 00:23:05.675 "read": true, 00:23:05.675 "write": true, 00:23:05.675 "unmap": true, 00:23:05.675 "write_zeroes": true, 00:23:05.675 "flush": true, 00:23:05.675 "reset": true, 00:23:05.675 "compare": false, 00:23:05.675 "compare_and_write": false, 00:23:05.675 "abort": true, 00:23:05.675 "nvme_admin": false, 
00:23:05.675 "nvme_io": false 00:23:05.675 }, 00:23:05.675 "memory_domains": [ 00:23:05.675 { 00:23:05.675 "dma_device_id": "system", 00:23:05.675 "dma_device_type": 1 00:23:05.675 }, 00:23:05.675 { 00:23:05.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:05.675 "dma_device_type": 2 00:23:05.675 } 00:23:05.675 ], 00:23:05.675 "driver_specific": {} 00:23:05.675 } 00:23:05.675 ] 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.675 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:06.240 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:06.240 "name": "Existed_Raid", 00:23:06.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:06.240 "strip_size_kb": 64, 00:23:06.240 "state": "configuring", 00:23:06.240 "raid_level": "concat", 00:23:06.240 "superblock": false, 00:23:06.240 "num_base_bdevs": 4, 00:23:06.240 "num_base_bdevs_discovered": 3, 00:23:06.240 "num_base_bdevs_operational": 4, 00:23:06.240 "base_bdevs_list": [ 00:23:06.240 { 00:23:06.240 "name": "BaseBdev1", 00:23:06.240 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:06.240 "is_configured": true, 00:23:06.240 "data_offset": 0, 00:23:06.240 "data_size": 65536 00:23:06.240 }, 00:23:06.240 { 00:23:06.240 "name": null, 00:23:06.240 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:06.240 "is_configured": false, 00:23:06.240 "data_offset": 0, 00:23:06.240 "data_size": 65536 00:23:06.240 }, 00:23:06.240 { 00:23:06.240 "name": "BaseBdev3", 00:23:06.240 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:06.240 "is_configured": true, 00:23:06.240 "data_offset": 0, 00:23:06.240 "data_size": 65536 00:23:06.240 }, 00:23:06.240 { 00:23:06.240 "name": "BaseBdev4", 00:23:06.240 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:06.240 "is_configured": true, 00:23:06.240 "data_offset": 0, 00:23:06.240 "data_size": 65536 00:23:06.240 } 00:23:06.240 ] 00:23:06.240 }' 00:23:06.240 12:05:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:06.240 12:05:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:23:06.805 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:06.805 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:07.062 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:07.062 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:07.320 [2024-07-21 12:05:05.965208] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.321 12:05:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.578 12:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:07.578 "name": "Existed_Raid", 00:23:07.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.578 "strip_size_kb": 64, 00:23:07.578 "state": "configuring", 00:23:07.578 "raid_level": "concat", 00:23:07.578 "superblock": false, 00:23:07.578 "num_base_bdevs": 4, 00:23:07.578 "num_base_bdevs_discovered": 2, 00:23:07.578 "num_base_bdevs_operational": 4, 00:23:07.578 "base_bdevs_list": [ 00:23:07.578 { 00:23:07.578 "name": "BaseBdev1", 00:23:07.578 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:07.578 "is_configured": true, 00:23:07.578 "data_offset": 0, 00:23:07.578 "data_size": 65536 00:23:07.578 }, 00:23:07.578 { 00:23:07.578 "name": null, 00:23:07.578 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:07.578 "is_configured": false, 00:23:07.578 "data_offset": 0, 00:23:07.578 "data_size": 65536 00:23:07.578 }, 00:23:07.578 { 00:23:07.578 "name": null, 00:23:07.578 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:07.578 "is_configured": false, 00:23:07.578 "data_offset": 0, 00:23:07.578 "data_size": 65536 00:23:07.578 }, 00:23:07.578 { 00:23:07.578 "name": "BaseBdev4", 00:23:07.578 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:07.578 "is_configured": true, 
00:23:07.578 "data_offset": 0, 00:23:07.578 "data_size": 65536 00:23:07.578 } 00:23:07.578 ] 00:23:07.578 }' 00:23:07.578 12:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:07.578 12:05:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:08.143 12:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.143 12:05:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:08.399 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:08.399 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:08.657 [2024-07-21 12:05:07.371385] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:08.657 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:08.915 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:08.915 "name": "Existed_Raid", 00:23:08.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:08.915 "strip_size_kb": 64, 00:23:08.915 "state": "configuring", 00:23:08.915 "raid_level": "concat", 00:23:08.915 "superblock": false, 00:23:08.915 "num_base_bdevs": 4, 00:23:08.915 "num_base_bdevs_discovered": 3, 00:23:08.915 "num_base_bdevs_operational": 4, 00:23:08.915 "base_bdevs_list": [ 00:23:08.915 { 00:23:08.915 "name": "BaseBdev1", 00:23:08.915 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:08.915 "is_configured": true, 00:23:08.915 "data_offset": 0, 00:23:08.915 "data_size": 65536 00:23:08.915 }, 00:23:08.915 { 00:23:08.915 "name": null, 00:23:08.915 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:08.915 "is_configured": false, 00:23:08.915 "data_offset": 0, 00:23:08.915 "data_size": 65536 00:23:08.915 }, 00:23:08.915 { 00:23:08.915 "name": "BaseBdev3", 00:23:08.915 
"uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:08.915 "is_configured": true, 00:23:08.915 "data_offset": 0, 00:23:08.915 "data_size": 65536 00:23:08.915 }, 00:23:08.915 { 00:23:08.915 "name": "BaseBdev4", 00:23:08.915 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:08.915 "is_configured": true, 00:23:08.915 "data_offset": 0, 00:23:08.915 "data_size": 65536 00:23:08.915 } 00:23:08.915 ] 00:23:08.915 }' 00:23:08.915 12:05:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:08.915 12:05:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:09.481 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.481 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:09.740 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:09.740 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:09.998 [2024-07-21 12:05:08.759686] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.998 12:05:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.256 12:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:10.256 "name": "Existed_Raid", 00:23:10.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:10.256 "strip_size_kb": 64, 00:23:10.256 "state": "configuring", 00:23:10.256 "raid_level": "concat", 00:23:10.256 "superblock": false, 00:23:10.256 "num_base_bdevs": 4, 00:23:10.256 "num_base_bdevs_discovered": 2, 00:23:10.256 "num_base_bdevs_operational": 4, 00:23:10.256 "base_bdevs_list": [ 00:23:10.256 { 00:23:10.256 "name": null, 00:23:10.256 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:10.256 "is_configured": false, 00:23:10.256 "data_offset": 0, 00:23:10.256 "data_size": 65536 00:23:10.256 }, 00:23:10.256 { 
00:23:10.256 "name": null, 00:23:10.256 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:10.256 "is_configured": false, 00:23:10.256 "data_offset": 0, 00:23:10.256 "data_size": 65536 00:23:10.256 }, 00:23:10.256 { 00:23:10.256 "name": "BaseBdev3", 00:23:10.256 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:10.256 "is_configured": true, 00:23:10.256 "data_offset": 0, 00:23:10.256 "data_size": 65536 00:23:10.256 }, 00:23:10.256 { 00:23:10.256 "name": "BaseBdev4", 00:23:10.256 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:10.256 "is_configured": true, 00:23:10.256 "data_offset": 0, 00:23:10.256 "data_size": 65536 00:23:10.256 } 00:23:10.256 ] 00:23:10.256 }' 00:23:10.256 12:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:10.256 12:05:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:10.822 12:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.822 12:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:11.080 12:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:11.080 12:05:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:11.338 [2024-07-21 12:05:10.131260] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:11.338 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:11.595 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:11.595 "name": "Existed_Raid", 00:23:11.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:11.595 "strip_size_kb": 64, 00:23:11.595 "state": "configuring", 00:23:11.595 "raid_level": "concat", 00:23:11.595 "superblock": false, 00:23:11.595 "num_base_bdevs": 4, 00:23:11.595 "num_base_bdevs_discovered": 3, 00:23:11.595 
"num_base_bdevs_operational": 4, 00:23:11.595 "base_bdevs_list": [ 00:23:11.595 { 00:23:11.595 "name": null, 00:23:11.595 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:11.595 "is_configured": false, 00:23:11.595 "data_offset": 0, 00:23:11.595 "data_size": 65536 00:23:11.595 }, 00:23:11.595 { 00:23:11.595 "name": "BaseBdev2", 00:23:11.595 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:11.595 "is_configured": true, 00:23:11.595 "data_offset": 0, 00:23:11.595 "data_size": 65536 00:23:11.595 }, 00:23:11.595 { 00:23:11.595 "name": "BaseBdev3", 00:23:11.595 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:11.595 "is_configured": true, 00:23:11.595 "data_offset": 0, 00:23:11.595 "data_size": 65536 00:23:11.595 }, 00:23:11.595 { 00:23:11.595 "name": "BaseBdev4", 00:23:11.595 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:11.595 "is_configured": true, 00:23:11.595 "data_offset": 0, 00:23:11.595 "data_size": 65536 00:23:11.595 } 00:23:11.595 ] 00:23:11.595 }' 00:23:11.595 12:05:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:11.595 12:05:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:12.238 12:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.238 12:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:12.510 12:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:12.510 12:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:12.510 12:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.767 12:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 30009987-eb31-4ea2-b0da-f25e05352fa1 00:23:13.025 [2024-07-21 12:05:11.820529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:13.025 [2024-07-21 12:05:11.820609] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:23:13.025 [2024-07-21 12:05:11.820620] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:23:13.025 [2024-07-21 12:05:11.820718] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:13.025 [2024-07-21 12:05:11.821095] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:23:13.025 [2024-07-21 12:05:11.821122] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009080 00:23:13.025 [2024-07-21 12:05:11.821344] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:13.025 NewBaseBdev 00:23:13.025 12:05:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:13.025 12:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:23:13.025 12:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:13.025 12:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 
00:23:13.025 12:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:13.025 12:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:13.025 12:05:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:13.282 12:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:13.539 [ 00:23:13.539 { 00:23:13.539 "name": "NewBaseBdev", 00:23:13.539 "aliases": [ 00:23:13.539 "30009987-eb31-4ea2-b0da-f25e05352fa1" 00:23:13.539 ], 00:23:13.539 "product_name": "Malloc disk", 00:23:13.539 "block_size": 512, 00:23:13.539 "num_blocks": 65536, 00:23:13.539 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:13.539 "assigned_rate_limits": { 00:23:13.539 "rw_ios_per_sec": 0, 00:23:13.539 "rw_mbytes_per_sec": 0, 00:23:13.539 "r_mbytes_per_sec": 0, 00:23:13.539 "w_mbytes_per_sec": 0 00:23:13.539 }, 00:23:13.539 "claimed": true, 00:23:13.539 "claim_type": "exclusive_write", 00:23:13.539 "zoned": false, 00:23:13.539 "supported_io_types": { 00:23:13.539 "read": true, 00:23:13.539 "write": true, 00:23:13.539 "unmap": true, 00:23:13.539 "write_zeroes": true, 00:23:13.539 "flush": true, 00:23:13.539 "reset": true, 00:23:13.539 "compare": false, 00:23:13.539 "compare_and_write": false, 00:23:13.539 "abort": true, 00:23:13.539 "nvme_admin": false, 00:23:13.539 "nvme_io": false 00:23:13.539 }, 00:23:13.539 "memory_domains": [ 00:23:13.539 { 00:23:13.539 "dma_device_id": "system", 00:23:13.539 "dma_device_type": 1 00:23:13.539 }, 00:23:13.539 { 00:23:13.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.539 "dma_device_type": 2 00:23:13.539 } 00:23:13.539 ], 00:23:13.539 "driver_specific": {} 00:23:13.539 } 00:23:13.539 ] 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.539 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:23:13.798 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:13.798 "name": "Existed_Raid", 00:23:13.798 "uuid": "ae908601-0c8b-4ad2-bd19-328b28a1aba2", 00:23:13.798 "strip_size_kb": 64, 00:23:13.798 "state": "online", 00:23:13.798 "raid_level": "concat", 00:23:13.798 "superblock": false, 00:23:13.798 "num_base_bdevs": 4, 00:23:13.798 "num_base_bdevs_discovered": 4, 00:23:13.798 "num_base_bdevs_operational": 4, 00:23:13.798 "base_bdevs_list": [ 00:23:13.798 { 00:23:13.798 "name": "NewBaseBdev", 00:23:13.798 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:13.798 "is_configured": true, 00:23:13.798 "data_offset": 0, 00:23:13.798 "data_size": 65536 00:23:13.798 }, 00:23:13.798 { 00:23:13.798 "name": "BaseBdev2", 00:23:13.798 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:13.798 "is_configured": true, 00:23:13.798 "data_offset": 0, 00:23:13.798 "data_size": 65536 00:23:13.798 }, 00:23:13.798 { 00:23:13.798 "name": "BaseBdev3", 00:23:13.798 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:13.798 "is_configured": true, 00:23:13.798 "data_offset": 0, 00:23:13.798 "data_size": 65536 00:23:13.798 }, 00:23:13.798 { 00:23:13.798 "name": "BaseBdev4", 00:23:13.798 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:13.798 "is_configured": true, 00:23:13.798 "data_offset": 0, 00:23:13.798 "data_size": 65536 00:23:13.798 } 00:23:13.798 ] 00:23:13.798 }' 00:23:13.798 12:05:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:13.798 12:05:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:14.363 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:14.363 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:14.363 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:14.363 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:14.363 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:14.363 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:14.363 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:14.363 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:14.620 [2024-07-21 12:05:13.353253] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:14.621 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:14.621 "name": "Existed_Raid", 00:23:14.621 "aliases": [ 00:23:14.621 "ae908601-0c8b-4ad2-bd19-328b28a1aba2" 00:23:14.621 ], 00:23:14.621 "product_name": "Raid Volume", 00:23:14.621 "block_size": 512, 00:23:14.621 "num_blocks": 262144, 00:23:14.621 "uuid": "ae908601-0c8b-4ad2-bd19-328b28a1aba2", 00:23:14.621 "assigned_rate_limits": { 00:23:14.621 "rw_ios_per_sec": 0, 00:23:14.621 "rw_mbytes_per_sec": 0, 00:23:14.621 "r_mbytes_per_sec": 0, 00:23:14.621 "w_mbytes_per_sec": 0 00:23:14.621 }, 00:23:14.621 "claimed": false, 00:23:14.621 "zoned": false, 00:23:14.621 "supported_io_types": { 00:23:14.621 "read": true, 00:23:14.621 "write": true, 00:23:14.621 "unmap": true, 00:23:14.621 "write_zeroes": true, 00:23:14.621 "flush": 
true, 00:23:14.621 "reset": true, 00:23:14.621 "compare": false, 00:23:14.621 "compare_and_write": false, 00:23:14.621 "abort": false, 00:23:14.621 "nvme_admin": false, 00:23:14.621 "nvme_io": false 00:23:14.621 }, 00:23:14.621 "memory_domains": [ 00:23:14.621 { 00:23:14.621 "dma_device_id": "system", 00:23:14.621 "dma_device_type": 1 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.621 "dma_device_type": 2 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "dma_device_id": "system", 00:23:14.621 "dma_device_type": 1 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.621 "dma_device_type": 2 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "dma_device_id": "system", 00:23:14.621 "dma_device_type": 1 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.621 "dma_device_type": 2 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "dma_device_id": "system", 00:23:14.621 "dma_device_type": 1 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.621 "dma_device_type": 2 00:23:14.621 } 00:23:14.621 ], 00:23:14.621 "driver_specific": { 00:23:14.621 "raid": { 00:23:14.621 "uuid": "ae908601-0c8b-4ad2-bd19-328b28a1aba2", 00:23:14.621 "strip_size_kb": 64, 00:23:14.621 "state": "online", 00:23:14.621 "raid_level": "concat", 00:23:14.621 "superblock": false, 00:23:14.621 "num_base_bdevs": 4, 00:23:14.621 "num_base_bdevs_discovered": 4, 00:23:14.621 "num_base_bdevs_operational": 4, 00:23:14.621 "base_bdevs_list": [ 00:23:14.621 { 00:23:14.621 "name": "NewBaseBdev", 00:23:14.621 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:14.621 "is_configured": true, 00:23:14.621 "data_offset": 0, 00:23:14.621 "data_size": 65536 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "name": "BaseBdev2", 00:23:14.621 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:14.621 "is_configured": true, 00:23:14.621 "data_offset": 0, 00:23:14.621 "data_size": 65536 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "name": "BaseBdev3", 00:23:14.621 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:14.621 "is_configured": true, 00:23:14.621 "data_offset": 0, 00:23:14.621 "data_size": 65536 00:23:14.621 }, 00:23:14.621 { 00:23:14.621 "name": "BaseBdev4", 00:23:14.621 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:14.621 "is_configured": true, 00:23:14.621 "data_offset": 0, 00:23:14.621 "data_size": 65536 00:23:14.621 } 00:23:14.621 ] 00:23:14.621 } 00:23:14.621 } 00:23:14.621 }' 00:23:14.621 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:14.621 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:14.621 BaseBdev2 00:23:14.621 BaseBdev3 00:23:14.621 BaseBdev4' 00:23:14.621 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:14.621 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:14.621 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:14.878 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:14.878 "name": "NewBaseBdev", 00:23:14.878 "aliases": [ 00:23:14.878 "30009987-eb31-4ea2-b0da-f25e05352fa1" 00:23:14.878 ], 00:23:14.878 
"product_name": "Malloc disk", 00:23:14.878 "block_size": 512, 00:23:14.878 "num_blocks": 65536, 00:23:14.878 "uuid": "30009987-eb31-4ea2-b0da-f25e05352fa1", 00:23:14.878 "assigned_rate_limits": { 00:23:14.878 "rw_ios_per_sec": 0, 00:23:14.878 "rw_mbytes_per_sec": 0, 00:23:14.878 "r_mbytes_per_sec": 0, 00:23:14.878 "w_mbytes_per_sec": 0 00:23:14.878 }, 00:23:14.878 "claimed": true, 00:23:14.878 "claim_type": "exclusive_write", 00:23:14.879 "zoned": false, 00:23:14.879 "supported_io_types": { 00:23:14.879 "read": true, 00:23:14.879 "write": true, 00:23:14.879 "unmap": true, 00:23:14.879 "write_zeroes": true, 00:23:14.879 "flush": true, 00:23:14.879 "reset": true, 00:23:14.879 "compare": false, 00:23:14.879 "compare_and_write": false, 00:23:14.879 "abort": true, 00:23:14.879 "nvme_admin": false, 00:23:14.879 "nvme_io": false 00:23:14.879 }, 00:23:14.879 "memory_domains": [ 00:23:14.879 { 00:23:14.879 "dma_device_id": "system", 00:23:14.879 "dma_device_type": 1 00:23:14.879 }, 00:23:14.879 { 00:23:14.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:14.879 "dma_device_type": 2 00:23:14.879 } 00:23:14.879 ], 00:23:14.879 "driver_specific": {} 00:23:14.879 }' 00:23:14.879 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:14.879 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.136 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:15.136 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.136 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.136 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:15.136 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.136 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.136 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:15.136 12:05:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.393 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.393 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:15.393 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:15.393 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:15.393 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:15.651 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:15.651 "name": "BaseBdev2", 00:23:15.651 "aliases": [ 00:23:15.651 "396592c7-d725-449a-92df-d228b7711861" 00:23:15.651 ], 00:23:15.651 "product_name": "Malloc disk", 00:23:15.651 "block_size": 512, 00:23:15.651 "num_blocks": 65536, 00:23:15.651 "uuid": "396592c7-d725-449a-92df-d228b7711861", 00:23:15.651 "assigned_rate_limits": { 00:23:15.651 "rw_ios_per_sec": 0, 00:23:15.651 "rw_mbytes_per_sec": 0, 00:23:15.651 "r_mbytes_per_sec": 0, 00:23:15.651 "w_mbytes_per_sec": 0 00:23:15.651 }, 00:23:15.651 "claimed": true, 00:23:15.651 "claim_type": "exclusive_write", 00:23:15.651 "zoned": false, 00:23:15.651 "supported_io_types": { 
00:23:15.651 "read": true, 00:23:15.651 "write": true, 00:23:15.651 "unmap": true, 00:23:15.651 "write_zeroes": true, 00:23:15.651 "flush": true, 00:23:15.651 "reset": true, 00:23:15.651 "compare": false, 00:23:15.651 "compare_and_write": false, 00:23:15.651 "abort": true, 00:23:15.651 "nvme_admin": false, 00:23:15.651 "nvme_io": false 00:23:15.651 }, 00:23:15.651 "memory_domains": [ 00:23:15.651 { 00:23:15.651 "dma_device_id": "system", 00:23:15.651 "dma_device_type": 1 00:23:15.651 }, 00:23:15.651 { 00:23:15.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:15.651 "dma_device_type": 2 00:23:15.651 } 00:23:15.651 ], 00:23:15.651 "driver_specific": {} 00:23:15.651 }' 00:23:15.651 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.651 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:15.651 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:15.651 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.651 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:15.909 12:05:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.475 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.475 "name": "BaseBdev3", 00:23:16.475 "aliases": [ 00:23:16.475 "2c65695b-8741-4371-b9d5-82ae3fd1992b" 00:23:16.475 ], 00:23:16.475 "product_name": "Malloc disk", 00:23:16.475 "block_size": 512, 00:23:16.475 "num_blocks": 65536, 00:23:16.475 "uuid": "2c65695b-8741-4371-b9d5-82ae3fd1992b", 00:23:16.475 "assigned_rate_limits": { 00:23:16.475 "rw_ios_per_sec": 0, 00:23:16.475 "rw_mbytes_per_sec": 0, 00:23:16.475 "r_mbytes_per_sec": 0, 00:23:16.475 "w_mbytes_per_sec": 0 00:23:16.475 }, 00:23:16.475 "claimed": true, 00:23:16.475 "claim_type": "exclusive_write", 00:23:16.475 "zoned": false, 00:23:16.475 "supported_io_types": { 00:23:16.475 "read": true, 00:23:16.475 "write": true, 00:23:16.475 "unmap": true, 00:23:16.475 "write_zeroes": true, 00:23:16.475 "flush": true, 00:23:16.475 "reset": true, 00:23:16.475 "compare": false, 00:23:16.475 "compare_and_write": false, 00:23:16.475 "abort": true, 00:23:16.475 "nvme_admin": false, 00:23:16.475 "nvme_io": false 00:23:16.475 }, 00:23:16.475 "memory_domains": [ 00:23:16.475 { 00:23:16.475 "dma_device_id": "system", 00:23:16.475 "dma_device_type": 1 00:23:16.475 }, 
00:23:16.475 { 00:23:16.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.475 "dma_device_type": 2 00:23:16.475 } 00:23:16.475 ], 00:23:16.475 "driver_specific": {} 00:23:16.475 }' 00:23:16.475 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.475 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.475 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:16.475 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.475 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.475 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:16.475 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.475 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:16.734 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:16.734 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.734 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:16.734 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:16.734 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:16.734 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:16.734 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:16.993 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:16.993 "name": "BaseBdev4", 00:23:16.993 "aliases": [ 00:23:16.993 "ade65953-7554-4a4b-b3d8-b0e85f2117ba" 00:23:16.993 ], 00:23:16.993 "product_name": "Malloc disk", 00:23:16.993 "block_size": 512, 00:23:16.993 "num_blocks": 65536, 00:23:16.993 "uuid": "ade65953-7554-4a4b-b3d8-b0e85f2117ba", 00:23:16.993 "assigned_rate_limits": { 00:23:16.993 "rw_ios_per_sec": 0, 00:23:16.993 "rw_mbytes_per_sec": 0, 00:23:16.993 "r_mbytes_per_sec": 0, 00:23:16.993 "w_mbytes_per_sec": 0 00:23:16.993 }, 00:23:16.993 "claimed": true, 00:23:16.993 "claim_type": "exclusive_write", 00:23:16.993 "zoned": false, 00:23:16.993 "supported_io_types": { 00:23:16.993 "read": true, 00:23:16.993 "write": true, 00:23:16.993 "unmap": true, 00:23:16.993 "write_zeroes": true, 00:23:16.993 "flush": true, 00:23:16.993 "reset": true, 00:23:16.993 "compare": false, 00:23:16.993 "compare_and_write": false, 00:23:16.993 "abort": true, 00:23:16.993 "nvme_admin": false, 00:23:16.993 "nvme_io": false 00:23:16.993 }, 00:23:16.993 "memory_domains": [ 00:23:16.993 { 00:23:16.993 "dma_device_id": "system", 00:23:16.993 "dma_device_type": 1 00:23:16.993 }, 00:23:16.993 { 00:23:16.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:16.993 "dma_device_type": 2 00:23:16.993 } 00:23:16.993 ], 00:23:16.993 "driver_specific": {} 00:23:16.993 }' 00:23:16.993 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.993 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:16.993 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
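The per-bdev property checks running here repeat the same pattern for each base bdev: dump the bdev and assert a 512-byte block size with no metadata or DIF. A condensed sketch of that check (the loop and jq projection are illustrative; the socket path, bdev names, and expected values are the ones asserted in this run):

    for b in NewBaseBdev BaseBdev2 BaseBdev3 BaseBdev4; do
        # expect block_size 512 and md_size/md_interleave/dif_type all null
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$b" \
            | jq '.[0] | {block_size, md_size, md_interleave, dif_type}'
    done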
00:23:16.993 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:16.993 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:17.251 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:17.252 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.252 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:17.252 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:17.252 12:05:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.252 12:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:17.252 12:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:17.252 12:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:17.510 [2024-07-21 12:05:16.345672] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:17.510 [2024-07-21 12:05:16.346546] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:17.510 [2024-07-21 12:05:16.346792] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.510 [2024-07-21 12:05:16.346985] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.510 [2024-07-21 12:05:16.347129] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name Existed_Raid, state offline 00:23:17.510 12:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 147807 00:23:17.510 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 147807 ']' 00:23:17.510 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 147807 00:23:17.510 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:23:17.510 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:17.510 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 147807 00:23:17.767 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:17.767 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:17.767 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 147807' 00:23:17.767 killing process with pid 147807 00:23:17.767 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 147807 00:23:17.767 [2024-07-21 12:05:16.387793] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:17.767 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 147807 00:23:17.767 [2024-07-21 12:05:16.429912] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:18.024 12:05:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:23:18.024 00:23:18.024 real 0m34.248s 00:23:18.024 user 1m5.195s 00:23:18.024 sys 0m4.056s 00:23:18.024 12:05:16 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:18.024 12:05:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:23:18.024 ************************************ 00:23:18.024 END TEST raid_state_function_test 00:23:18.024 ************************************ 00:23:18.024 12:05:16 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:23:18.024 12:05:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:23:18.024 12:05:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:18.024 12:05:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:18.024 ************************************ 00:23:18.024 START TEST raid_state_function_test_sb 00:23:18.024 ************************************ 00:23:18.024 12:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test concat 4 true 00:23:18.024 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:23:18.024 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:23:18.024 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:23:18.024 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:23:18.024 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:23:18.025 12:05:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=148906 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 148906' 00:23:18.025 Process raid pid: 148906 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 148906 /var/tmp/spdk-raid.sock 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 148906 ']' 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:18.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:18.025 12:05:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:18.025 [2024-07-21 12:05:16.800197] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
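The commands above boil down to starting a bare bdev_svc application with RAID debug logging enabled on its own RPC socket, then blocking until that socket answers. A rough manual equivalent, assuming the same checkout under /home/vagrant/spdk_repo/spdk and an otherwise idle /var/tmp/spdk-raid.sock, would be:

    # start the bdev service app used as the RPC target for the whole test (manual sketch)
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    # wait for the UNIX-domain RPC socket to come up, roughly what waitforlisten does
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done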
00:23:18.025 [2024-07-21 12:05:16.800683] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.282 [2024-07-21 12:05:16.962194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.282 [2024-07-21 12:05:17.058068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.282 [2024-07-21 12:05:17.117925] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:19.215 [2024-07-21 12:05:17.963441] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:19.215 [2024-07-21 12:05:17.963807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:19.215 [2024-07-21 12:05:17.963958] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:19.215 [2024-07-21 12:05:17.964027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:19.215 [2024-07-21 12:05:17.964230] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:19.215 [2024-07-21 12:05:17.964343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:19.215 [2024-07-21 12:05:17.964516] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:19.215 [2024-07-21 12:05:17.964587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.215 12:05:17 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:19.472 12:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:19.472 "name": "Existed_Raid", 00:23:19.472 "uuid": "74286f9b-0a1a-4250-8bc8-6d8d23f8afb3", 00:23:19.472 "strip_size_kb": 64, 00:23:19.472 "state": "configuring", 00:23:19.472 "raid_level": "concat", 00:23:19.472 "superblock": true, 00:23:19.472 "num_base_bdevs": 4, 00:23:19.472 "num_base_bdevs_discovered": 0, 00:23:19.472 "num_base_bdevs_operational": 4, 00:23:19.472 "base_bdevs_list": [ 00:23:19.472 { 00:23:19.472 "name": "BaseBdev1", 00:23:19.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.472 "is_configured": false, 00:23:19.472 "data_offset": 0, 00:23:19.472 "data_size": 0 00:23:19.472 }, 00:23:19.472 { 00:23:19.472 "name": "BaseBdev2", 00:23:19.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.472 "is_configured": false, 00:23:19.472 "data_offset": 0, 00:23:19.472 "data_size": 0 00:23:19.472 }, 00:23:19.472 { 00:23:19.472 "name": "BaseBdev3", 00:23:19.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.472 "is_configured": false, 00:23:19.472 "data_offset": 0, 00:23:19.472 "data_size": 0 00:23:19.472 }, 00:23:19.472 { 00:23:19.472 "name": "BaseBdev4", 00:23:19.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.472 "is_configured": false, 00:23:19.472 "data_offset": 0, 00:23:19.472 "data_size": 0 00:23:19.472 } 00:23:19.472 ] 00:23:19.472 }' 00:23:19.472 12:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:19.472 12:05:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:20.036 12:05:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:20.294 [2024-07-21 12:05:19.099464] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:20.294 [2024-07-21 12:05:19.099827] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:23:20.294 12:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:20.551 [2024-07-21 12:05:19.323555] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:20.551 [2024-07-21 12:05:19.323960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:20.551 [2024-07-21 12:05:19.324081] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:20.551 [2024-07-21 12:05:19.324184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:20.551 [2024-07-21 12:05:19.324446] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:20.551 [2024-07-21 12:05:19.324515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:20.551 [2024-07-21 12:05:19.324707] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:20.551 [2024-07-21 12:05:19.324788] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:20.551 12:05:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:20.809 [2024-07-21 12:05:19.602773] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:20.809 BaseBdev1 00:23:20.809 12:05:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:23:20.809 12:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:20.809 12:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:20.809 12:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:20.809 12:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:20.809 12:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:20.809 12:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:21.067 12:05:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:21.325 [ 00:23:21.325 { 00:23:21.325 "name": "BaseBdev1", 00:23:21.325 "aliases": [ 00:23:21.325 "2f088959-f410-46df-aca4-3ba79e0fa494" 00:23:21.325 ], 00:23:21.325 "product_name": "Malloc disk", 00:23:21.325 "block_size": 512, 00:23:21.325 "num_blocks": 65536, 00:23:21.325 "uuid": "2f088959-f410-46df-aca4-3ba79e0fa494", 00:23:21.325 "assigned_rate_limits": { 00:23:21.325 "rw_ios_per_sec": 0, 00:23:21.325 "rw_mbytes_per_sec": 0, 00:23:21.325 "r_mbytes_per_sec": 0, 00:23:21.325 "w_mbytes_per_sec": 0 00:23:21.325 }, 00:23:21.325 "claimed": true, 00:23:21.325 "claim_type": "exclusive_write", 00:23:21.325 "zoned": false, 00:23:21.325 "supported_io_types": { 00:23:21.325 "read": true, 00:23:21.325 "write": true, 00:23:21.325 "unmap": true, 00:23:21.325 "write_zeroes": true, 00:23:21.325 "flush": true, 00:23:21.325 "reset": true, 00:23:21.325 "compare": false, 00:23:21.325 "compare_and_write": false, 00:23:21.325 "abort": true, 00:23:21.325 "nvme_admin": false, 00:23:21.325 "nvme_io": false 00:23:21.325 }, 00:23:21.325 "memory_domains": [ 00:23:21.325 { 00:23:21.325 "dma_device_id": "system", 00:23:21.325 "dma_device_type": 1 00:23:21.325 }, 00:23:21.325 { 00:23:21.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:21.325 "dma_device_type": 2 00:23:21.325 } 00:23:21.325 ], 00:23:21.325 "driver_specific": {} 00:23:21.325 } 00:23:21.325 ] 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:21.325 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:21.583 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:21.583 "name": "Existed_Raid", 00:23:21.583 "uuid": "630d6051-44c6-4b5f-908f-aed0f51755b2", 00:23:21.583 "strip_size_kb": 64, 00:23:21.583 "state": "configuring", 00:23:21.583 "raid_level": "concat", 00:23:21.583 "superblock": true, 00:23:21.583 "num_base_bdevs": 4, 00:23:21.583 "num_base_bdevs_discovered": 1, 00:23:21.583 "num_base_bdevs_operational": 4, 00:23:21.583 "base_bdevs_list": [ 00:23:21.583 { 00:23:21.583 "name": "BaseBdev1", 00:23:21.583 "uuid": "2f088959-f410-46df-aca4-3ba79e0fa494", 00:23:21.583 "is_configured": true, 00:23:21.583 "data_offset": 2048, 00:23:21.583 "data_size": 63488 00:23:21.583 }, 00:23:21.583 { 00:23:21.583 "name": "BaseBdev2", 00:23:21.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.583 "is_configured": false, 00:23:21.583 "data_offset": 0, 00:23:21.583 "data_size": 0 00:23:21.583 }, 00:23:21.583 { 00:23:21.583 "name": "BaseBdev3", 00:23:21.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.583 "is_configured": false, 00:23:21.583 "data_offset": 0, 00:23:21.583 "data_size": 0 00:23:21.583 }, 00:23:21.583 { 00:23:21.583 "name": "BaseBdev4", 00:23:21.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:21.583 "is_configured": false, 00:23:21.583 "data_offset": 0, 00:23:21.583 "data_size": 0 00:23:21.583 } 00:23:21.583 ] 00:23:21.583 }' 00:23:21.583 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:21.583 12:05:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:22.147 12:05:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:22.405 [2024-07-21 12:05:21.243272] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:22.405 [2024-07-21 12:05:21.243558] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:23:22.405 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:22.662 [2024-07-21 12:05:21.467406] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:22.662 [2024-07-21 12:05:21.469894] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:22.662 [2024-07-21 12:05:21.470131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 
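The retry above is the heart of the superblock variant of this test: bdev_raid_create is issued while none of the base bdevs exist yet, so Existed_Raid sits in the "configuring" state, and each bdev_malloc_create that follows is claimed by the raid as it appears. Condensed to the RPC calls the harness drives over /var/tmp/spdk-raid.sock (a sketch; the rpc helper below is shorthand for the rpc.py invocation used throughout this log):

    # shorthand for the rpc.py call seen in every step of this log
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # register the concat raid first; -z 64 sets a 64 KiB strip size and -s writes a superblock,
    # which is why data_offset becomes 2048 blocks once base bdevs are attached
    rpc bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # create a 32 MiB malloc bdev with 512-byte blocks; the raid claims it and stays "configuring"
    rpc bdev_malloc_create 32 512 -b BaseBdev1
    # dump the raid's view of its base bdevs, as verify_raid_bdev_state does after every step
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'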
00:23:22.662 [2024-07-21 12:05:21.470277] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:22.662 [2024-07-21 12:05:21.470350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:22.662 [2024-07-21 12:05:21.470499] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:22.662 [2024-07-21 12:05:21.470655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.662 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:22.920 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:22.920 "name": "Existed_Raid", 00:23:22.920 "uuid": "6dd17dec-cb66-4414-8dc2-26840242cf78", 00:23:22.920 "strip_size_kb": 64, 00:23:22.920 "state": "configuring", 00:23:22.920 "raid_level": "concat", 00:23:22.920 "superblock": true, 00:23:22.920 "num_base_bdevs": 4, 00:23:22.920 "num_base_bdevs_discovered": 1, 00:23:22.920 "num_base_bdevs_operational": 4, 00:23:22.920 "base_bdevs_list": [ 00:23:22.920 { 00:23:22.920 "name": "BaseBdev1", 00:23:22.920 "uuid": "2f088959-f410-46df-aca4-3ba79e0fa494", 00:23:22.920 "is_configured": true, 00:23:22.920 "data_offset": 2048, 00:23:22.920 "data_size": 63488 00:23:22.920 }, 00:23:22.920 { 00:23:22.920 "name": "BaseBdev2", 00:23:22.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.920 "is_configured": false, 00:23:22.920 "data_offset": 0, 00:23:22.920 "data_size": 0 00:23:22.920 }, 00:23:22.920 { 00:23:22.920 "name": "BaseBdev3", 00:23:22.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.920 "is_configured": false, 00:23:22.920 "data_offset": 0, 00:23:22.920 "data_size": 0 00:23:22.920 }, 00:23:22.920 { 00:23:22.920 "name": "BaseBdev4", 00:23:22.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.920 
"is_configured": false, 00:23:22.920 "data_offset": 0, 00:23:22.920 "data_size": 0 00:23:22.920 } 00:23:22.920 ] 00:23:22.920 }' 00:23:22.920 12:05:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:22.920 12:05:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:23.486 12:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:23.744 [2024-07-21 12:05:22.562753] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:23.744 BaseBdev2 00:23:23.744 12:05:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:23:23.744 12:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:23.744 12:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:23.744 12:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:23.744 12:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:23.744 12:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:23.744 12:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:24.003 12:05:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:24.261 [ 00:23:24.261 { 00:23:24.261 "name": "BaseBdev2", 00:23:24.261 "aliases": [ 00:23:24.261 "6d50d644-7de1-4bc8-a746-2ef07fd00d90" 00:23:24.261 ], 00:23:24.261 "product_name": "Malloc disk", 00:23:24.261 "block_size": 512, 00:23:24.261 "num_blocks": 65536, 00:23:24.261 "uuid": "6d50d644-7de1-4bc8-a746-2ef07fd00d90", 00:23:24.261 "assigned_rate_limits": { 00:23:24.261 "rw_ios_per_sec": 0, 00:23:24.261 "rw_mbytes_per_sec": 0, 00:23:24.261 "r_mbytes_per_sec": 0, 00:23:24.261 "w_mbytes_per_sec": 0 00:23:24.261 }, 00:23:24.261 "claimed": true, 00:23:24.261 "claim_type": "exclusive_write", 00:23:24.261 "zoned": false, 00:23:24.261 "supported_io_types": { 00:23:24.261 "read": true, 00:23:24.261 "write": true, 00:23:24.261 "unmap": true, 00:23:24.261 "write_zeroes": true, 00:23:24.261 "flush": true, 00:23:24.261 "reset": true, 00:23:24.261 "compare": false, 00:23:24.261 "compare_and_write": false, 00:23:24.261 "abort": true, 00:23:24.261 "nvme_admin": false, 00:23:24.261 "nvme_io": false 00:23:24.261 }, 00:23:24.261 "memory_domains": [ 00:23:24.261 { 00:23:24.261 "dma_device_id": "system", 00:23:24.261 "dma_device_type": 1 00:23:24.261 }, 00:23:24.261 { 00:23:24.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.261 "dma_device_type": 2 00:23:24.261 } 00:23:24.261 ], 00:23:24.261 "driver_specific": {} 00:23:24.261 } 00:23:24.261 ] 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:24.261 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:24.519 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:24.519 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:24.519 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:24.519 "name": "Existed_Raid", 00:23:24.519 "uuid": "6dd17dec-cb66-4414-8dc2-26840242cf78", 00:23:24.519 "strip_size_kb": 64, 00:23:24.519 "state": "configuring", 00:23:24.519 "raid_level": "concat", 00:23:24.519 "superblock": true, 00:23:24.519 "num_base_bdevs": 4, 00:23:24.519 "num_base_bdevs_discovered": 2, 00:23:24.519 "num_base_bdevs_operational": 4, 00:23:24.519 "base_bdevs_list": [ 00:23:24.519 { 00:23:24.519 "name": "BaseBdev1", 00:23:24.519 "uuid": "2f088959-f410-46df-aca4-3ba79e0fa494", 00:23:24.519 "is_configured": true, 00:23:24.519 "data_offset": 2048, 00:23:24.519 "data_size": 63488 00:23:24.519 }, 00:23:24.519 { 00:23:24.519 "name": "BaseBdev2", 00:23:24.519 "uuid": "6d50d644-7de1-4bc8-a746-2ef07fd00d90", 00:23:24.519 "is_configured": true, 00:23:24.519 "data_offset": 2048, 00:23:24.519 "data_size": 63488 00:23:24.519 }, 00:23:24.519 { 00:23:24.519 "name": "BaseBdev3", 00:23:24.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.519 "is_configured": false, 00:23:24.519 "data_offset": 0, 00:23:24.519 "data_size": 0 00:23:24.519 }, 00:23:24.519 { 00:23:24.519 "name": "BaseBdev4", 00:23:24.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:24.519 "is_configured": false, 00:23:24.519 "data_offset": 0, 00:23:24.519 "data_size": 0 00:23:24.519 } 00:23:24.519 ] 00:23:24.519 }' 00:23:24.519 12:05:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:24.519 12:05:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:25.454 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:25.454 [2024-07-21 12:05:24.288556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:25.454 BaseBdev3 00:23:25.454 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:23:25.454 12:05:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:25.454 12:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:25.454 12:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:25.454 12:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:25.454 12:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:25.454 12:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:26.019 12:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:26.020 [ 00:23:26.020 { 00:23:26.020 "name": "BaseBdev3", 00:23:26.020 "aliases": [ 00:23:26.020 "8fe2b2ec-40b4-4b55-b5bc-457b665dffb8" 00:23:26.020 ], 00:23:26.020 "product_name": "Malloc disk", 00:23:26.020 "block_size": 512, 00:23:26.020 "num_blocks": 65536, 00:23:26.020 "uuid": "8fe2b2ec-40b4-4b55-b5bc-457b665dffb8", 00:23:26.020 "assigned_rate_limits": { 00:23:26.020 "rw_ios_per_sec": 0, 00:23:26.020 "rw_mbytes_per_sec": 0, 00:23:26.020 "r_mbytes_per_sec": 0, 00:23:26.020 "w_mbytes_per_sec": 0 00:23:26.020 }, 00:23:26.020 "claimed": true, 00:23:26.020 "claim_type": "exclusive_write", 00:23:26.020 "zoned": false, 00:23:26.020 "supported_io_types": { 00:23:26.020 "read": true, 00:23:26.020 "write": true, 00:23:26.020 "unmap": true, 00:23:26.020 "write_zeroes": true, 00:23:26.020 "flush": true, 00:23:26.020 "reset": true, 00:23:26.020 "compare": false, 00:23:26.020 "compare_and_write": false, 00:23:26.020 "abort": true, 00:23:26.020 "nvme_admin": false, 00:23:26.020 "nvme_io": false 00:23:26.020 }, 00:23:26.020 "memory_domains": [ 00:23:26.020 { 00:23:26.020 "dma_device_id": "system", 00:23:26.020 "dma_device_type": 1 00:23:26.020 }, 00:23:26.020 { 00:23:26.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:26.020 "dma_device_type": 2 00:23:26.020 } 00:23:26.020 ], 00:23:26.020 "driver_specific": {} 00:23:26.020 } 00:23:26.020 ] 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.020 12:05:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:26.278 12:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:26.278 "name": "Existed_Raid", 00:23:26.278 "uuid": "6dd17dec-cb66-4414-8dc2-26840242cf78", 00:23:26.278 "strip_size_kb": 64, 00:23:26.278 "state": "configuring", 00:23:26.278 "raid_level": "concat", 00:23:26.278 "superblock": true, 00:23:26.278 "num_base_bdevs": 4, 00:23:26.278 "num_base_bdevs_discovered": 3, 00:23:26.278 "num_base_bdevs_operational": 4, 00:23:26.278 "base_bdevs_list": [ 00:23:26.278 { 00:23:26.278 "name": "BaseBdev1", 00:23:26.278 "uuid": "2f088959-f410-46df-aca4-3ba79e0fa494", 00:23:26.278 "is_configured": true, 00:23:26.278 "data_offset": 2048, 00:23:26.278 "data_size": 63488 00:23:26.278 }, 00:23:26.278 { 00:23:26.278 "name": "BaseBdev2", 00:23:26.278 "uuid": "6d50d644-7de1-4bc8-a746-2ef07fd00d90", 00:23:26.278 "is_configured": true, 00:23:26.278 "data_offset": 2048, 00:23:26.278 "data_size": 63488 00:23:26.278 }, 00:23:26.278 { 00:23:26.278 "name": "BaseBdev3", 00:23:26.278 "uuid": "8fe2b2ec-40b4-4b55-b5bc-457b665dffb8", 00:23:26.278 "is_configured": true, 00:23:26.278 "data_offset": 2048, 00:23:26.278 "data_size": 63488 00:23:26.278 }, 00:23:26.278 { 00:23:26.278 "name": "BaseBdev4", 00:23:26.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:26.278 "is_configured": false, 00:23:26.278 "data_offset": 0, 00:23:26.278 "data_size": 0 00:23:26.278 } 00:23:26.278 ] 00:23:26.278 }' 00:23:26.278 12:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:26.278 12:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:27.211 12:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:27.211 [2024-07-21 12:05:25.974029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:27.211 [2024-07-21 12:05:25.974623] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:23:27.211 [2024-07-21 12:05:25.974762] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:27.211 [2024-07-21 12:05:25.974974] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:23:27.211 [2024-07-21 12:05:25.975441] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:23:27.211 BaseBdev4 00:23:27.211 [2024-07-21 12:05:25.975501] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:23:27.211 [2024-07-21 12:05:25.975698] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:27.211 12:05:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:23:27.211 12:05:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:27.211 12:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:27.211 12:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:27.211 12:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:27.211 12:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:27.211 12:05:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:27.468 12:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:27.725 [ 00:23:27.725 { 00:23:27.725 "name": "BaseBdev4", 00:23:27.725 "aliases": [ 00:23:27.725 "40b3be8f-2249-4acf-a96c-a31ac511ea71" 00:23:27.725 ], 00:23:27.725 "product_name": "Malloc disk", 00:23:27.725 "block_size": 512, 00:23:27.725 "num_blocks": 65536, 00:23:27.725 "uuid": "40b3be8f-2249-4acf-a96c-a31ac511ea71", 00:23:27.725 "assigned_rate_limits": { 00:23:27.725 "rw_ios_per_sec": 0, 00:23:27.725 "rw_mbytes_per_sec": 0, 00:23:27.725 "r_mbytes_per_sec": 0, 00:23:27.725 "w_mbytes_per_sec": 0 00:23:27.725 }, 00:23:27.725 "claimed": true, 00:23:27.725 "claim_type": "exclusive_write", 00:23:27.725 "zoned": false, 00:23:27.725 "supported_io_types": { 00:23:27.725 "read": true, 00:23:27.725 "write": true, 00:23:27.725 "unmap": true, 00:23:27.725 "write_zeroes": true, 00:23:27.725 "flush": true, 00:23:27.725 "reset": true, 00:23:27.725 "compare": false, 00:23:27.725 "compare_and_write": false, 00:23:27.725 "abort": true, 00:23:27.725 "nvme_admin": false, 00:23:27.725 "nvme_io": false 00:23:27.725 }, 00:23:27.725 "memory_domains": [ 00:23:27.725 { 00:23:27.725 "dma_device_id": "system", 00:23:27.725 "dma_device_type": 1 00:23:27.725 }, 00:23:27.725 { 00:23:27.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:27.725 "dma_device_type": 2 00:23:27.725 } 00:23:27.725 ], 00:23:27.725 "driver_specific": {} 00:23:27.725 } 00:23:27.725 ] 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
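With the fourth base bdev claimed, Existed_Raid leaves "configuring", and the blockcnt logged above matches the concat arithmetic: each 65536-block malloc bdev gives up 2048 blocks to the superblock, leaving 4 x 63488 = 253952 blocks of 512 bytes (124 MiB) for the array. A quick cross-check against the RPC output (same sketch assumptions and rpc helper as above):

    # the raid volume is itself a regular bdev, so bdev_get_bdevs reports its aggregate size
    rpc bdev_get_bdevs -b Existed_Raid | jq '.[0] | {num_blocks, block_size}'   # expect 253952 blocks of 512 B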
00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.725 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:27.982 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:27.982 "name": "Existed_Raid", 00:23:27.982 "uuid": "6dd17dec-cb66-4414-8dc2-26840242cf78", 00:23:27.982 "strip_size_kb": 64, 00:23:27.982 "state": "online", 00:23:27.982 "raid_level": "concat", 00:23:27.982 "superblock": true, 00:23:27.982 "num_base_bdevs": 4, 00:23:27.982 "num_base_bdevs_discovered": 4, 00:23:27.982 "num_base_bdevs_operational": 4, 00:23:27.982 "base_bdevs_list": [ 00:23:27.982 { 00:23:27.982 "name": "BaseBdev1", 00:23:27.982 "uuid": "2f088959-f410-46df-aca4-3ba79e0fa494", 00:23:27.982 "is_configured": true, 00:23:27.982 "data_offset": 2048, 00:23:27.982 "data_size": 63488 00:23:27.982 }, 00:23:27.982 { 00:23:27.982 "name": "BaseBdev2", 00:23:27.982 "uuid": "6d50d644-7de1-4bc8-a746-2ef07fd00d90", 00:23:27.982 "is_configured": true, 00:23:27.982 "data_offset": 2048, 00:23:27.982 "data_size": 63488 00:23:27.982 }, 00:23:27.982 { 00:23:27.982 "name": "BaseBdev3", 00:23:27.982 "uuid": "8fe2b2ec-40b4-4b55-b5bc-457b665dffb8", 00:23:27.982 "is_configured": true, 00:23:27.982 "data_offset": 2048, 00:23:27.982 "data_size": 63488 00:23:27.982 }, 00:23:27.982 { 00:23:27.982 "name": "BaseBdev4", 00:23:27.982 "uuid": "40b3be8f-2249-4acf-a96c-a31ac511ea71", 00:23:27.982 "is_configured": true, 00:23:27.982 "data_offset": 2048, 00:23:27.982 "data_size": 63488 00:23:27.982 } 00:23:27.982 ] 00:23:27.982 }' 00:23:27.982 12:05:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:27.982 12:05:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:28.915 [2024-07-21 12:05:27.678783] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:28.915 "name": "Existed_Raid", 00:23:28.915 "aliases": [ 00:23:28.915 "6dd17dec-cb66-4414-8dc2-26840242cf78" 00:23:28.915 ], 00:23:28.915 
"product_name": "Raid Volume", 00:23:28.915 "block_size": 512, 00:23:28.915 "num_blocks": 253952, 00:23:28.915 "uuid": "6dd17dec-cb66-4414-8dc2-26840242cf78", 00:23:28.915 "assigned_rate_limits": { 00:23:28.915 "rw_ios_per_sec": 0, 00:23:28.915 "rw_mbytes_per_sec": 0, 00:23:28.915 "r_mbytes_per_sec": 0, 00:23:28.915 "w_mbytes_per_sec": 0 00:23:28.915 }, 00:23:28.915 "claimed": false, 00:23:28.915 "zoned": false, 00:23:28.915 "supported_io_types": { 00:23:28.915 "read": true, 00:23:28.915 "write": true, 00:23:28.915 "unmap": true, 00:23:28.915 "write_zeroes": true, 00:23:28.915 "flush": true, 00:23:28.915 "reset": true, 00:23:28.915 "compare": false, 00:23:28.915 "compare_and_write": false, 00:23:28.915 "abort": false, 00:23:28.915 "nvme_admin": false, 00:23:28.915 "nvme_io": false 00:23:28.915 }, 00:23:28.915 "memory_domains": [ 00:23:28.915 { 00:23:28.915 "dma_device_id": "system", 00:23:28.915 "dma_device_type": 1 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.915 "dma_device_type": 2 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "dma_device_id": "system", 00:23:28.915 "dma_device_type": 1 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.915 "dma_device_type": 2 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "dma_device_id": "system", 00:23:28.915 "dma_device_type": 1 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.915 "dma_device_type": 2 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "dma_device_id": "system", 00:23:28.915 "dma_device_type": 1 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:28.915 "dma_device_type": 2 00:23:28.915 } 00:23:28.915 ], 00:23:28.915 "driver_specific": { 00:23:28.915 "raid": { 00:23:28.915 "uuid": "6dd17dec-cb66-4414-8dc2-26840242cf78", 00:23:28.915 "strip_size_kb": 64, 00:23:28.915 "state": "online", 00:23:28.915 "raid_level": "concat", 00:23:28.915 "superblock": true, 00:23:28.915 "num_base_bdevs": 4, 00:23:28.915 "num_base_bdevs_discovered": 4, 00:23:28.915 "num_base_bdevs_operational": 4, 00:23:28.915 "base_bdevs_list": [ 00:23:28.915 { 00:23:28.915 "name": "BaseBdev1", 00:23:28.915 "uuid": "2f088959-f410-46df-aca4-3ba79e0fa494", 00:23:28.915 "is_configured": true, 00:23:28.915 "data_offset": 2048, 00:23:28.915 "data_size": 63488 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "name": "BaseBdev2", 00:23:28.915 "uuid": "6d50d644-7de1-4bc8-a746-2ef07fd00d90", 00:23:28.915 "is_configured": true, 00:23:28.915 "data_offset": 2048, 00:23:28.915 "data_size": 63488 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "name": "BaseBdev3", 00:23:28.915 "uuid": "8fe2b2ec-40b4-4b55-b5bc-457b665dffb8", 00:23:28.915 "is_configured": true, 00:23:28.915 "data_offset": 2048, 00:23:28.915 "data_size": 63488 00:23:28.915 }, 00:23:28.915 { 00:23:28.915 "name": "BaseBdev4", 00:23:28.915 "uuid": "40b3be8f-2249-4acf-a96c-a31ac511ea71", 00:23:28.915 "is_configured": true, 00:23:28.915 "data_offset": 2048, 00:23:28.915 "data_size": 63488 00:23:28.915 } 00:23:28.915 ] 00:23:28.915 } 00:23:28.915 } 00:23:28.915 }' 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:23:28.915 BaseBdev2 00:23:28.915 BaseBdev3 00:23:28.915 BaseBdev4' 00:23:28.915 12:05:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:23:28.915 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:29.173 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:29.173 "name": "BaseBdev1", 00:23:29.173 "aliases": [ 00:23:29.173 "2f088959-f410-46df-aca4-3ba79e0fa494" 00:23:29.173 ], 00:23:29.173 "product_name": "Malloc disk", 00:23:29.173 "block_size": 512, 00:23:29.173 "num_blocks": 65536, 00:23:29.173 "uuid": "2f088959-f410-46df-aca4-3ba79e0fa494", 00:23:29.173 "assigned_rate_limits": { 00:23:29.173 "rw_ios_per_sec": 0, 00:23:29.173 "rw_mbytes_per_sec": 0, 00:23:29.173 "r_mbytes_per_sec": 0, 00:23:29.173 "w_mbytes_per_sec": 0 00:23:29.173 }, 00:23:29.173 "claimed": true, 00:23:29.173 "claim_type": "exclusive_write", 00:23:29.173 "zoned": false, 00:23:29.173 "supported_io_types": { 00:23:29.173 "read": true, 00:23:29.173 "write": true, 00:23:29.173 "unmap": true, 00:23:29.173 "write_zeroes": true, 00:23:29.173 "flush": true, 00:23:29.173 "reset": true, 00:23:29.173 "compare": false, 00:23:29.173 "compare_and_write": false, 00:23:29.173 "abort": true, 00:23:29.173 "nvme_admin": false, 00:23:29.173 "nvme_io": false 00:23:29.173 }, 00:23:29.173 "memory_domains": [ 00:23:29.173 { 00:23:29.173 "dma_device_id": "system", 00:23:29.173 "dma_device_type": 1 00:23:29.173 }, 00:23:29.173 { 00:23:29.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.173 "dma_device_type": 2 00:23:29.173 } 00:23:29.173 ], 00:23:29.173 "driver_specific": {} 00:23:29.173 }' 00:23:29.173 12:05:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.173 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.431 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:29.431 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.431 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:29.431 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:29.431 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.431 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:29.431 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:29.431 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.688 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:29.688 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:29.688 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:29.688 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:29.688 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:29.946 12:05:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:29.946 "name": "BaseBdev2", 00:23:29.946 "aliases": [ 00:23:29.946 "6d50d644-7de1-4bc8-a746-2ef07fd00d90" 00:23:29.946 ], 00:23:29.946 "product_name": "Malloc disk", 00:23:29.946 "block_size": 512, 00:23:29.946 "num_blocks": 65536, 00:23:29.946 "uuid": "6d50d644-7de1-4bc8-a746-2ef07fd00d90", 00:23:29.946 "assigned_rate_limits": { 00:23:29.946 "rw_ios_per_sec": 0, 00:23:29.946 "rw_mbytes_per_sec": 0, 00:23:29.946 "r_mbytes_per_sec": 0, 00:23:29.946 "w_mbytes_per_sec": 0 00:23:29.946 }, 00:23:29.946 "claimed": true, 00:23:29.946 "claim_type": "exclusive_write", 00:23:29.946 "zoned": false, 00:23:29.946 "supported_io_types": { 00:23:29.946 "read": true, 00:23:29.946 "write": true, 00:23:29.946 "unmap": true, 00:23:29.946 "write_zeroes": true, 00:23:29.946 "flush": true, 00:23:29.946 "reset": true, 00:23:29.946 "compare": false, 00:23:29.946 "compare_and_write": false, 00:23:29.946 "abort": true, 00:23:29.946 "nvme_admin": false, 00:23:29.946 "nvme_io": false 00:23:29.946 }, 00:23:29.946 "memory_domains": [ 00:23:29.946 { 00:23:29.946 "dma_device_id": "system", 00:23:29.946 "dma_device_type": 1 00:23:29.946 }, 00:23:29.946 { 00:23:29.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:29.946 "dma_device_type": 2 00:23:29.946 } 00:23:29.947 ], 00:23:29.947 "driver_specific": {} 00:23:29.947 }' 00:23:29.947 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.947 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:29.947 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:29.947 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.205 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.205 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.205 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.205 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.205 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.205 12:05:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.205 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.462 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.462 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.462 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:30.462 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:30.720 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:30.720 "name": "BaseBdev3", 00:23:30.720 "aliases": [ 00:23:30.720 "8fe2b2ec-40b4-4b55-b5bc-457b665dffb8" 00:23:30.720 ], 00:23:30.720 "product_name": "Malloc disk", 00:23:30.720 "block_size": 512, 00:23:30.720 "num_blocks": 65536, 00:23:30.720 "uuid": "8fe2b2ec-40b4-4b55-b5bc-457b665dffb8", 00:23:30.720 "assigned_rate_limits": { 00:23:30.720 "rw_ios_per_sec": 0, 00:23:30.720 "rw_mbytes_per_sec": 0, 
00:23:30.720 "r_mbytes_per_sec": 0, 00:23:30.720 "w_mbytes_per_sec": 0 00:23:30.720 }, 00:23:30.720 "claimed": true, 00:23:30.720 "claim_type": "exclusive_write", 00:23:30.720 "zoned": false, 00:23:30.720 "supported_io_types": { 00:23:30.720 "read": true, 00:23:30.720 "write": true, 00:23:30.720 "unmap": true, 00:23:30.720 "write_zeroes": true, 00:23:30.720 "flush": true, 00:23:30.720 "reset": true, 00:23:30.720 "compare": false, 00:23:30.720 "compare_and_write": false, 00:23:30.720 "abort": true, 00:23:30.720 "nvme_admin": false, 00:23:30.720 "nvme_io": false 00:23:30.720 }, 00:23:30.720 "memory_domains": [ 00:23:30.720 { 00:23:30.720 "dma_device_id": "system", 00:23:30.720 "dma_device_type": 1 00:23:30.720 }, 00:23:30.720 { 00:23:30.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:30.720 "dma_device_type": 2 00:23:30.720 } 00:23:30.720 ], 00:23:30.720 "driver_specific": {} 00:23:30.720 }' 00:23:30.720 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.720 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:30.720 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:30.720 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.720 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:30.720 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:30.720 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.978 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:30.978 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:30.978 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.978 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:30.978 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:30.978 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:30.978 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:30.978 12:05:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:31.236 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:31.236 "name": "BaseBdev4", 00:23:31.236 "aliases": [ 00:23:31.236 "40b3be8f-2249-4acf-a96c-a31ac511ea71" 00:23:31.236 ], 00:23:31.236 "product_name": "Malloc disk", 00:23:31.236 "block_size": 512, 00:23:31.236 "num_blocks": 65536, 00:23:31.236 "uuid": "40b3be8f-2249-4acf-a96c-a31ac511ea71", 00:23:31.236 "assigned_rate_limits": { 00:23:31.236 "rw_ios_per_sec": 0, 00:23:31.236 "rw_mbytes_per_sec": 0, 00:23:31.236 "r_mbytes_per_sec": 0, 00:23:31.236 "w_mbytes_per_sec": 0 00:23:31.236 }, 00:23:31.236 "claimed": true, 00:23:31.236 "claim_type": "exclusive_write", 00:23:31.236 "zoned": false, 00:23:31.236 "supported_io_types": { 00:23:31.236 "read": true, 00:23:31.236 "write": true, 00:23:31.236 "unmap": true, 00:23:31.236 "write_zeroes": true, 00:23:31.236 "flush": true, 00:23:31.236 "reset": true, 00:23:31.236 "compare": false, 00:23:31.236 
"compare_and_write": false, 00:23:31.236 "abort": true, 00:23:31.236 "nvme_admin": false, 00:23:31.236 "nvme_io": false 00:23:31.236 }, 00:23:31.236 "memory_domains": [ 00:23:31.236 { 00:23:31.236 "dma_device_id": "system", 00:23:31.236 "dma_device_type": 1 00:23:31.236 }, 00:23:31.236 { 00:23:31.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:31.236 "dma_device_type": 2 00:23:31.236 } 00:23:31.236 ], 00:23:31.236 "driver_specific": {} 00:23:31.236 }' 00:23:31.236 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.495 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:31.495 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:31.495 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.495 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:31.495 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:31.495 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.495 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:31.753 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:31.753 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.753 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:31.753 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:31.753 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:32.012 [2024-07-21 12:05:30.731974] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:32.012 [2024-07-21 12:05:30.732341] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:32.012 [2024-07-21 12:05:30.732556] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 
00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:32.012 12:05:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.270 12:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:32.270 "name": "Existed_Raid", 00:23:32.270 "uuid": "6dd17dec-cb66-4414-8dc2-26840242cf78", 00:23:32.270 "strip_size_kb": 64, 00:23:32.270 "state": "offline", 00:23:32.270 "raid_level": "concat", 00:23:32.270 "superblock": true, 00:23:32.270 "num_base_bdevs": 4, 00:23:32.270 "num_base_bdevs_discovered": 3, 00:23:32.270 "num_base_bdevs_operational": 3, 00:23:32.270 "base_bdevs_list": [ 00:23:32.270 { 00:23:32.270 "name": null, 00:23:32.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.270 "is_configured": false, 00:23:32.270 "data_offset": 2048, 00:23:32.270 "data_size": 63488 00:23:32.270 }, 00:23:32.270 { 00:23:32.270 "name": "BaseBdev2", 00:23:32.270 "uuid": "6d50d644-7de1-4bc8-a746-2ef07fd00d90", 00:23:32.270 "is_configured": true, 00:23:32.270 "data_offset": 2048, 00:23:32.270 "data_size": 63488 00:23:32.270 }, 00:23:32.270 { 00:23:32.270 "name": "BaseBdev3", 00:23:32.270 "uuid": "8fe2b2ec-40b4-4b55-b5bc-457b665dffb8", 00:23:32.270 "is_configured": true, 00:23:32.270 "data_offset": 2048, 00:23:32.270 "data_size": 63488 00:23:32.270 }, 00:23:32.270 { 00:23:32.270 "name": "BaseBdev4", 00:23:32.270 "uuid": "40b3be8f-2249-4acf-a96c-a31ac511ea71", 00:23:32.270 "is_configured": true, 00:23:32.270 "data_offset": 2048, 00:23:32.270 "data_size": 63488 00:23:32.270 } 00:23:32.270 ] 00:23:32.270 }' 00:23:32.270 12:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:32.270 12:05:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:32.837 12:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:23:32.837 12:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:32.837 12:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.837 12:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:33.094 12:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:33.094 12:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:33.094 12:05:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:33.352 [2024-07-21 12:05:32.172226] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:33.352 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # 
(( i++ )) 00:23:33.352 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:33.352 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.352 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:33.919 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:33.919 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:33.919 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:33.919 [2024-07-21 12:05:32.759951] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:34.177 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:34.177 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:34.177 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.177 12:05:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:23:34.177 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:23:34.177 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:34.177 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:34.434 [2024-07-21 12:05:33.274903] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:34.434 [2024-07-21 12:05:33.275267] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:34.691 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:34.949 BaseBdev2 00:23:34.949 12:05:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 
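The create-and-wait pattern used for each replacement base bdev in this stretch boils down to three RPC calls. The sketch below simply restates the commands visible in the trace, in the order the helper runs them; paths and the socket name are taken from the log and may differ in other setups.

# Create a 32 MiB malloc bdev with 512-byte blocks named BaseBdev2 (as invoked in the trace)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
# Wait until bdev examination has finished, then confirm the bdev is visible (2000 ms timeout)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000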
00:23:34.949 12:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:23:34.949 12:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:34.949 12:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:34.949 12:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:34.949 12:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:34.949 12:05:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:35.259 12:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:35.517 [ 00:23:35.517 { 00:23:35.517 "name": "BaseBdev2", 00:23:35.517 "aliases": [ 00:23:35.517 "287f4a8a-a247-4fe8-a178-408f50e8d721" 00:23:35.517 ], 00:23:35.517 "product_name": "Malloc disk", 00:23:35.517 "block_size": 512, 00:23:35.517 "num_blocks": 65536, 00:23:35.517 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:35.517 "assigned_rate_limits": { 00:23:35.517 "rw_ios_per_sec": 0, 00:23:35.517 "rw_mbytes_per_sec": 0, 00:23:35.517 "r_mbytes_per_sec": 0, 00:23:35.517 "w_mbytes_per_sec": 0 00:23:35.517 }, 00:23:35.517 "claimed": false, 00:23:35.517 "zoned": false, 00:23:35.517 "supported_io_types": { 00:23:35.517 "read": true, 00:23:35.517 "write": true, 00:23:35.517 "unmap": true, 00:23:35.517 "write_zeroes": true, 00:23:35.517 "flush": true, 00:23:35.517 "reset": true, 00:23:35.517 "compare": false, 00:23:35.517 "compare_and_write": false, 00:23:35.517 "abort": true, 00:23:35.517 "nvme_admin": false, 00:23:35.517 "nvme_io": false 00:23:35.517 }, 00:23:35.517 "memory_domains": [ 00:23:35.517 { 00:23:35.517 "dma_device_id": "system", 00:23:35.518 "dma_device_type": 1 00:23:35.518 }, 00:23:35.518 { 00:23:35.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:35.518 "dma_device_type": 2 00:23:35.518 } 00:23:35.518 ], 00:23:35.518 "driver_specific": {} 00:23:35.518 } 00:23:35.518 ] 00:23:35.518 12:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:35.518 12:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:35.518 12:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:35.518 12:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:35.812 BaseBdev3 00:23:35.812 12:05:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:23:35.812 12:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:23:35.812 12:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:35.812 12:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:35.812 12:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:35.812 12:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:35.812 12:05:34 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:36.068 12:05:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:36.325 [ 00:23:36.325 { 00:23:36.325 "name": "BaseBdev3", 00:23:36.325 "aliases": [ 00:23:36.325 "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049" 00:23:36.325 ], 00:23:36.325 "product_name": "Malloc disk", 00:23:36.325 "block_size": 512, 00:23:36.325 "num_blocks": 65536, 00:23:36.325 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:36.325 "assigned_rate_limits": { 00:23:36.325 "rw_ios_per_sec": 0, 00:23:36.325 "rw_mbytes_per_sec": 0, 00:23:36.325 "r_mbytes_per_sec": 0, 00:23:36.325 "w_mbytes_per_sec": 0 00:23:36.325 }, 00:23:36.325 "claimed": false, 00:23:36.325 "zoned": false, 00:23:36.325 "supported_io_types": { 00:23:36.325 "read": true, 00:23:36.325 "write": true, 00:23:36.325 "unmap": true, 00:23:36.325 "write_zeroes": true, 00:23:36.325 "flush": true, 00:23:36.325 "reset": true, 00:23:36.325 "compare": false, 00:23:36.325 "compare_and_write": false, 00:23:36.325 "abort": true, 00:23:36.325 "nvme_admin": false, 00:23:36.325 "nvme_io": false 00:23:36.325 }, 00:23:36.325 "memory_domains": [ 00:23:36.325 { 00:23:36.325 "dma_device_id": "system", 00:23:36.325 "dma_device_type": 1 00:23:36.325 }, 00:23:36.325 { 00:23:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:36.325 "dma_device_type": 2 00:23:36.325 } 00:23:36.325 ], 00:23:36.325 "driver_specific": {} 00:23:36.325 } 00:23:36.325 ] 00:23:36.325 12:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:36.325 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:36.326 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:36.326 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:36.583 BaseBdev4 00:23:36.583 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:23:36.583 12:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:23:36.584 12:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:36.584 12:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:36.584 12:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:36.584 12:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:36.584 12:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:36.841 12:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:37.099 [ 00:23:37.099 { 00:23:37.099 "name": "BaseBdev4", 00:23:37.099 "aliases": [ 00:23:37.099 "0743f83f-f955-47a8-ad56-61a3da5afaf1" 00:23:37.099 ], 00:23:37.099 "product_name": "Malloc disk", 00:23:37.099 "block_size": 512, 
00:23:37.099 "num_blocks": 65536, 00:23:37.099 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:37.099 "assigned_rate_limits": { 00:23:37.099 "rw_ios_per_sec": 0, 00:23:37.099 "rw_mbytes_per_sec": 0, 00:23:37.099 "r_mbytes_per_sec": 0, 00:23:37.099 "w_mbytes_per_sec": 0 00:23:37.099 }, 00:23:37.099 "claimed": false, 00:23:37.099 "zoned": false, 00:23:37.099 "supported_io_types": { 00:23:37.099 "read": true, 00:23:37.099 "write": true, 00:23:37.099 "unmap": true, 00:23:37.099 "write_zeroes": true, 00:23:37.099 "flush": true, 00:23:37.099 "reset": true, 00:23:37.099 "compare": false, 00:23:37.099 "compare_and_write": false, 00:23:37.099 "abort": true, 00:23:37.099 "nvme_admin": false, 00:23:37.099 "nvme_io": false 00:23:37.099 }, 00:23:37.099 "memory_domains": [ 00:23:37.099 { 00:23:37.099 "dma_device_id": "system", 00:23:37.099 "dma_device_type": 1 00:23:37.099 }, 00:23:37.099 { 00:23:37.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:37.099 "dma_device_type": 2 00:23:37.099 } 00:23:37.099 ], 00:23:37.099 "driver_specific": {} 00:23:37.099 } 00:23:37.099 ] 00:23:37.099 12:05:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:37.099 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:23:37.099 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:23:37.099 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:37.099 [2024-07-21 12:05:35.945821] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:37.099 [2024-07-21 12:05:35.946083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:37.099 [2024-07-21 12:05:35.946237] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:37.099 [2024-07-21 12:05:35.948618] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:37.099 [2024-07-21 12:05:35.948826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:37.099 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:37.099 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:37.099 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:37.099 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:37.099 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:37.357 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:37.357 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:37.357 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:37.357 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:37.357 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:37.357 12:05:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.357 12:05:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:37.615 12:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:37.615 "name": "Existed_Raid", 00:23:37.615 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:37.615 "strip_size_kb": 64, 00:23:37.615 "state": "configuring", 00:23:37.615 "raid_level": "concat", 00:23:37.615 "superblock": true, 00:23:37.615 "num_base_bdevs": 4, 00:23:37.615 "num_base_bdevs_discovered": 3, 00:23:37.615 "num_base_bdevs_operational": 4, 00:23:37.615 "base_bdevs_list": [ 00:23:37.615 { 00:23:37.615 "name": "BaseBdev1", 00:23:37.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:37.615 "is_configured": false, 00:23:37.615 "data_offset": 0, 00:23:37.615 "data_size": 0 00:23:37.615 }, 00:23:37.615 { 00:23:37.615 "name": "BaseBdev2", 00:23:37.615 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:37.615 "is_configured": true, 00:23:37.615 "data_offset": 2048, 00:23:37.615 "data_size": 63488 00:23:37.615 }, 00:23:37.615 { 00:23:37.615 "name": "BaseBdev3", 00:23:37.615 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:37.615 "is_configured": true, 00:23:37.615 "data_offset": 2048, 00:23:37.615 "data_size": 63488 00:23:37.615 }, 00:23:37.615 { 00:23:37.615 "name": "BaseBdev4", 00:23:37.615 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:37.615 "is_configured": true, 00:23:37.615 "data_offset": 2048, 00:23:37.615 "data_size": 63488 00:23:37.615 } 00:23:37.615 ] 00:23:37.615 }' 00:23:37.615 12:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:37.615 12:05:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:38.181 12:05:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:23:38.449 [2024-07-21 12:05:37.178038] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.449 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:38.707 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:38.707 "name": "Existed_Raid", 00:23:38.707 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:38.707 "strip_size_kb": 64, 00:23:38.707 "state": "configuring", 00:23:38.707 "raid_level": "concat", 00:23:38.707 "superblock": true, 00:23:38.707 "num_base_bdevs": 4, 00:23:38.707 "num_base_bdevs_discovered": 2, 00:23:38.707 "num_base_bdevs_operational": 4, 00:23:38.707 "base_bdevs_list": [ 00:23:38.707 { 00:23:38.707 "name": "BaseBdev1", 00:23:38.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:38.707 "is_configured": false, 00:23:38.707 "data_offset": 0, 00:23:38.707 "data_size": 0 00:23:38.707 }, 00:23:38.707 { 00:23:38.707 "name": null, 00:23:38.707 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:38.707 "is_configured": false, 00:23:38.707 "data_offset": 2048, 00:23:38.707 "data_size": 63488 00:23:38.707 }, 00:23:38.707 { 00:23:38.707 "name": "BaseBdev3", 00:23:38.707 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:38.707 "is_configured": true, 00:23:38.707 "data_offset": 2048, 00:23:38.707 "data_size": 63488 00:23:38.707 }, 00:23:38.707 { 00:23:38.707 "name": "BaseBdev4", 00:23:38.707 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:38.707 "is_configured": true, 00:23:38.707 "data_offset": 2048, 00:23:38.708 "data_size": 63488 00:23:38.708 } 00:23:38.708 ] 00:23:38.708 }' 00:23:38.708 12:05:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:38.708 12:05:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:39.272 12:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.272 12:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:39.529 12:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:23:39.529 12:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:39.786 [2024-07-21 12:05:38.555287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:39.786 BaseBdev1 00:23:39.786 12:05:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:23:39.786 12:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:23:39.786 12:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:39.786 12:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:39.786 12:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:39.786 12:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:39.786 12:05:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:40.044 12:05:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:40.302 [ 00:23:40.302 { 00:23:40.302 "name": "BaseBdev1", 00:23:40.302 "aliases": [ 00:23:40.302 "754d5441-0df4-4751-981b-761d8e373737" 00:23:40.302 ], 00:23:40.302 "product_name": "Malloc disk", 00:23:40.302 "block_size": 512, 00:23:40.302 "num_blocks": 65536, 00:23:40.302 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:40.302 "assigned_rate_limits": { 00:23:40.302 "rw_ios_per_sec": 0, 00:23:40.302 "rw_mbytes_per_sec": 0, 00:23:40.302 "r_mbytes_per_sec": 0, 00:23:40.302 "w_mbytes_per_sec": 0 00:23:40.302 }, 00:23:40.302 "claimed": true, 00:23:40.302 "claim_type": "exclusive_write", 00:23:40.302 "zoned": false, 00:23:40.302 "supported_io_types": { 00:23:40.302 "read": true, 00:23:40.302 "write": true, 00:23:40.302 "unmap": true, 00:23:40.302 "write_zeroes": true, 00:23:40.302 "flush": true, 00:23:40.302 "reset": true, 00:23:40.302 "compare": false, 00:23:40.302 "compare_and_write": false, 00:23:40.302 "abort": true, 00:23:40.302 "nvme_admin": false, 00:23:40.302 "nvme_io": false 00:23:40.302 }, 00:23:40.303 "memory_domains": [ 00:23:40.303 { 00:23:40.303 "dma_device_id": "system", 00:23:40.303 "dma_device_type": 1 00:23:40.303 }, 00:23:40.303 { 00:23:40.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:40.303 "dma_device_type": 2 00:23:40.303 } 00:23:40.303 ], 00:23:40.303 "driver_specific": {} 00:23:40.303 } 00:23:40.303 ] 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:40.303 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.561 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:40.561 "name": "Existed_Raid", 00:23:40.561 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:40.561 "strip_size_kb": 64, 00:23:40.561 "state": "configuring", 00:23:40.561 "raid_level": "concat", 00:23:40.561 "superblock": true, 00:23:40.561 "num_base_bdevs": 4, 00:23:40.561 "num_base_bdevs_discovered": 3, 
00:23:40.561 "num_base_bdevs_operational": 4, 00:23:40.561 "base_bdevs_list": [ 00:23:40.561 { 00:23:40.561 "name": "BaseBdev1", 00:23:40.561 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:40.561 "is_configured": true, 00:23:40.561 "data_offset": 2048, 00:23:40.561 "data_size": 63488 00:23:40.561 }, 00:23:40.561 { 00:23:40.561 "name": null, 00:23:40.561 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:40.561 "is_configured": false, 00:23:40.561 "data_offset": 2048, 00:23:40.561 "data_size": 63488 00:23:40.561 }, 00:23:40.561 { 00:23:40.561 "name": "BaseBdev3", 00:23:40.561 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:40.561 "is_configured": true, 00:23:40.561 "data_offset": 2048, 00:23:40.561 "data_size": 63488 00:23:40.561 }, 00:23:40.561 { 00:23:40.561 "name": "BaseBdev4", 00:23:40.561 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:40.561 "is_configured": true, 00:23:40.561 "data_offset": 2048, 00:23:40.561 "data_size": 63488 00:23:40.561 } 00:23:40.561 ] 00:23:40.561 }' 00:23:40.561 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:40.561 12:05:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:41.127 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.127 12:05:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:41.386 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:23:41.386 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:23:41.644 [2024-07-21 12:05:40.355767] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.644 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:41.902 12:05:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:41.902 "name": "Existed_Raid", 00:23:41.902 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:41.902 "strip_size_kb": 64, 00:23:41.902 "state": "configuring", 00:23:41.902 "raid_level": "concat", 00:23:41.902 "superblock": true, 00:23:41.902 "num_base_bdevs": 4, 00:23:41.902 "num_base_bdevs_discovered": 2, 00:23:41.902 "num_base_bdevs_operational": 4, 00:23:41.902 "base_bdevs_list": [ 00:23:41.902 { 00:23:41.902 "name": "BaseBdev1", 00:23:41.902 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:41.902 "is_configured": true, 00:23:41.902 "data_offset": 2048, 00:23:41.902 "data_size": 63488 00:23:41.902 }, 00:23:41.902 { 00:23:41.902 "name": null, 00:23:41.902 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:41.902 "is_configured": false, 00:23:41.902 "data_offset": 2048, 00:23:41.902 "data_size": 63488 00:23:41.902 }, 00:23:41.902 { 00:23:41.902 "name": null, 00:23:41.902 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:41.902 "is_configured": false, 00:23:41.902 "data_offset": 2048, 00:23:41.902 "data_size": 63488 00:23:41.902 }, 00:23:41.902 { 00:23:41.902 "name": "BaseBdev4", 00:23:41.902 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:41.902 "is_configured": true, 00:23:41.902 "data_offset": 2048, 00:23:41.902 "data_size": 63488 00:23:41.902 } 00:23:41.902 ] 00:23:41.902 }' 00:23:41.902 12:05:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:41.902 12:05:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:42.467 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.467 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:42.725 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:23:42.725 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:23:42.984 [2024-07-21 12:05:41.772180] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:42.984 
12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.984 12:05:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:43.242 12:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:43.242 "name": "Existed_Raid", 00:23:43.242 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:43.242 "strip_size_kb": 64, 00:23:43.242 "state": "configuring", 00:23:43.243 "raid_level": "concat", 00:23:43.243 "superblock": true, 00:23:43.243 "num_base_bdevs": 4, 00:23:43.243 "num_base_bdevs_discovered": 3, 00:23:43.243 "num_base_bdevs_operational": 4, 00:23:43.243 "base_bdevs_list": [ 00:23:43.243 { 00:23:43.243 "name": "BaseBdev1", 00:23:43.243 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:43.243 "is_configured": true, 00:23:43.243 "data_offset": 2048, 00:23:43.243 "data_size": 63488 00:23:43.243 }, 00:23:43.243 { 00:23:43.243 "name": null, 00:23:43.243 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:43.243 "is_configured": false, 00:23:43.243 "data_offset": 2048, 00:23:43.243 "data_size": 63488 00:23:43.243 }, 00:23:43.243 { 00:23:43.243 "name": "BaseBdev3", 00:23:43.243 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:43.243 "is_configured": true, 00:23:43.243 "data_offset": 2048, 00:23:43.243 "data_size": 63488 00:23:43.243 }, 00:23:43.243 { 00:23:43.243 "name": "BaseBdev4", 00:23:43.243 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:43.243 "is_configured": true, 00:23:43.243 "data_offset": 2048, 00:23:43.243 "data_size": 63488 00:23:43.243 } 00:23:43.243 ] 00:23:43.243 }' 00:23:43.243 12:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:43.243 12:05:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:43.810 12:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.810 12:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:23:44.069 12:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:23:44.069 12:05:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:44.326 [2024-07-21 12:05:43.132491] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
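The remove/re-add cycle exercised just above reduces to two RPCs plus the same state query. This is a minimal sketch; the RPC names are the ones shown in the trace, and asserting on num_base_bdevs_discovered with jq is an assumption about how the result would be checked by hand rather than the test's own helper.

# Detach one slot of the (still configuring) array, then attach the same bdev back
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
# The array should still be "configuring" with three of its four base bdevs discovered
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'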
00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.326 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:44.583 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:44.583 "name": "Existed_Raid", 00:23:44.583 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:44.583 "strip_size_kb": 64, 00:23:44.583 "state": "configuring", 00:23:44.583 "raid_level": "concat", 00:23:44.583 "superblock": true, 00:23:44.583 "num_base_bdevs": 4, 00:23:44.583 "num_base_bdevs_discovered": 2, 00:23:44.583 "num_base_bdevs_operational": 4, 00:23:44.583 "base_bdevs_list": [ 00:23:44.583 { 00:23:44.583 "name": null, 00:23:44.583 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:44.583 "is_configured": false, 00:23:44.583 "data_offset": 2048, 00:23:44.583 "data_size": 63488 00:23:44.583 }, 00:23:44.583 { 00:23:44.583 "name": null, 00:23:44.583 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:44.583 "is_configured": false, 00:23:44.583 "data_offset": 2048, 00:23:44.583 "data_size": 63488 00:23:44.583 }, 00:23:44.583 { 00:23:44.583 "name": "BaseBdev3", 00:23:44.583 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:44.583 "is_configured": true, 00:23:44.583 "data_offset": 2048, 00:23:44.583 "data_size": 63488 00:23:44.583 }, 00:23:44.583 { 00:23:44.583 "name": "BaseBdev4", 00:23:44.583 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:44.583 "is_configured": true, 00:23:44.583 "data_offset": 2048, 00:23:44.583 "data_size": 63488 00:23:44.583 } 00:23:44.583 ] 00:23:44.583 }' 00:23:44.583 12:05:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:44.583 12:05:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:45.514 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.514 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:23:45.514 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:23:45.514 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:23:45.771 [2024-07-21 12:05:44.487090] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # 
local raid_level=concat 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.771 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:46.029 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:46.029 "name": "Existed_Raid", 00:23:46.029 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:46.029 "strip_size_kb": 64, 00:23:46.029 "state": "configuring", 00:23:46.029 "raid_level": "concat", 00:23:46.029 "superblock": true, 00:23:46.029 "num_base_bdevs": 4, 00:23:46.029 "num_base_bdevs_discovered": 3, 00:23:46.029 "num_base_bdevs_operational": 4, 00:23:46.029 "base_bdevs_list": [ 00:23:46.029 { 00:23:46.029 "name": null, 00:23:46.029 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:46.029 "is_configured": false, 00:23:46.029 "data_offset": 2048, 00:23:46.029 "data_size": 63488 00:23:46.029 }, 00:23:46.029 { 00:23:46.029 "name": "BaseBdev2", 00:23:46.029 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:46.029 "is_configured": true, 00:23:46.029 "data_offset": 2048, 00:23:46.029 "data_size": 63488 00:23:46.029 }, 00:23:46.029 { 00:23:46.029 "name": "BaseBdev3", 00:23:46.029 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:46.029 "is_configured": true, 00:23:46.029 "data_offset": 2048, 00:23:46.029 "data_size": 63488 00:23:46.029 }, 00:23:46.029 { 00:23:46.029 "name": "BaseBdev4", 00:23:46.029 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:46.029 "is_configured": true, 00:23:46.029 "data_offset": 2048, 00:23:46.029 "data_size": 63488 00:23:46.029 } 00:23:46.029 ] 00:23:46.029 }' 00:23:46.029 12:05:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:46.029 12:05:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:46.593 12:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:46.593 12:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:23:46.850 12:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:23:46.850 12:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:23:46.850 12:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.107 12:05:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 754d5441-0df4-4751-981b-761d8e373737 00:23:47.364 [2024-07-21 12:05:46.116325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:23:47.364 [2024-07-21 12:05:46.116870] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:23:47.364 [2024-07-21 12:05:46.117011] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:47.364 [2024-07-21 12:05:46.117139] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:23:47.364 [2024-07-21 12:05:46.117535] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:23:47.364 [2024-07-21 12:05:46.117672] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009080 00:23:47.364 NewBaseBdev 00:23:47.364 [2024-07-21 12:05:46.117913] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:47.364 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:23:47.364 12:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:23:47.364 12:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:23:47.364 12:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:23:47.364 12:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:23:47.364 12:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:23:47.364 12:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:47.626 12:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:23:47.884 [ 00:23:47.884 { 00:23:47.884 "name": "NewBaseBdev", 00:23:47.884 "aliases": [ 00:23:47.884 "754d5441-0df4-4751-981b-761d8e373737" 00:23:47.884 ], 00:23:47.884 "product_name": "Malloc disk", 00:23:47.884 "block_size": 512, 00:23:47.884 "num_blocks": 65536, 00:23:47.884 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:47.884 "assigned_rate_limits": { 00:23:47.884 "rw_ios_per_sec": 0, 00:23:47.884 "rw_mbytes_per_sec": 0, 00:23:47.884 "r_mbytes_per_sec": 0, 00:23:47.884 "w_mbytes_per_sec": 0 00:23:47.884 }, 00:23:47.884 "claimed": true, 00:23:47.884 "claim_type": "exclusive_write", 00:23:47.884 "zoned": false, 00:23:47.884 "supported_io_types": { 00:23:47.884 "read": true, 00:23:47.884 "write": true, 00:23:47.884 "unmap": true, 00:23:47.884 "write_zeroes": true, 00:23:47.884 "flush": true, 00:23:47.884 "reset": true, 00:23:47.884 "compare": false, 00:23:47.884 "compare_and_write": false, 00:23:47.884 "abort": true, 00:23:47.884 "nvme_admin": false, 00:23:47.884 "nvme_io": false 00:23:47.884 }, 00:23:47.884 "memory_domains": [ 00:23:47.884 { 00:23:47.884 "dma_device_id": "system", 00:23:47.884 "dma_device_type": 1 00:23:47.884 }, 00:23:47.884 { 00:23:47.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:47.884 "dma_device_type": 2 00:23:47.884 } 00:23:47.884 ], 00:23:47.884 "driver_specific": {} 00:23:47.884 } 00:23:47.884 ] 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # return 0 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.884 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:48.142 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:48.142 "name": "Existed_Raid", 00:23:48.142 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:48.142 "strip_size_kb": 64, 00:23:48.142 "state": "online", 00:23:48.142 "raid_level": "concat", 00:23:48.142 "superblock": true, 00:23:48.142 "num_base_bdevs": 4, 00:23:48.142 "num_base_bdevs_discovered": 4, 00:23:48.142 "num_base_bdevs_operational": 4, 00:23:48.142 "base_bdevs_list": [ 00:23:48.142 { 00:23:48.142 "name": "NewBaseBdev", 00:23:48.142 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:48.142 "is_configured": true, 00:23:48.142 "data_offset": 2048, 00:23:48.142 "data_size": 63488 00:23:48.142 }, 00:23:48.142 { 00:23:48.142 "name": "BaseBdev2", 00:23:48.142 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:48.142 "is_configured": true, 00:23:48.142 "data_offset": 2048, 00:23:48.142 "data_size": 63488 00:23:48.142 }, 00:23:48.142 { 00:23:48.142 "name": "BaseBdev3", 00:23:48.142 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:48.142 "is_configured": true, 00:23:48.142 "data_offset": 2048, 00:23:48.142 "data_size": 63488 00:23:48.142 }, 00:23:48.142 { 00:23:48.142 "name": "BaseBdev4", 00:23:48.142 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:48.142 "is_configured": true, 00:23:48.142 "data_offset": 2048, 00:23:48.142 "data_size": 63488 00:23:48.142 } 00:23:48.142 ] 00:23:48.142 }' 00:23:48.142 12:05:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:48.142 12:05:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:48.707 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:23:48.707 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:23:48.707 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:23:48.707 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:48.707 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:48.707 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:23:48.707 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:23:48.707 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:48.964 [2024-07-21 12:05:47.781081] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:48.964 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:48.964 "name": "Existed_Raid", 00:23:48.964 "aliases": [ 00:23:48.964 "f18402ad-0ffd-488b-af8b-448fe91f21b7" 00:23:48.964 ], 00:23:48.964 "product_name": "Raid Volume", 00:23:48.964 "block_size": 512, 00:23:48.964 "num_blocks": 253952, 00:23:48.964 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:48.964 "assigned_rate_limits": { 00:23:48.964 "rw_ios_per_sec": 0, 00:23:48.964 "rw_mbytes_per_sec": 0, 00:23:48.964 "r_mbytes_per_sec": 0, 00:23:48.964 "w_mbytes_per_sec": 0 00:23:48.964 }, 00:23:48.964 "claimed": false, 00:23:48.964 "zoned": false, 00:23:48.964 "supported_io_types": { 00:23:48.964 "read": true, 00:23:48.964 "write": true, 00:23:48.964 "unmap": true, 00:23:48.964 "write_zeroes": true, 00:23:48.964 "flush": true, 00:23:48.964 "reset": true, 00:23:48.964 "compare": false, 00:23:48.964 "compare_and_write": false, 00:23:48.964 "abort": false, 00:23:48.964 "nvme_admin": false, 00:23:48.964 "nvme_io": false 00:23:48.964 }, 00:23:48.964 "memory_domains": [ 00:23:48.964 { 00:23:48.964 "dma_device_id": "system", 00:23:48.964 "dma_device_type": 1 00:23:48.964 }, 00:23:48.964 { 00:23:48.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.964 "dma_device_type": 2 00:23:48.964 }, 00:23:48.964 { 00:23:48.964 "dma_device_id": "system", 00:23:48.964 "dma_device_type": 1 00:23:48.964 }, 00:23:48.964 { 00:23:48.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.964 "dma_device_type": 2 00:23:48.964 }, 00:23:48.964 { 00:23:48.964 "dma_device_id": "system", 00:23:48.964 "dma_device_type": 1 00:23:48.964 }, 00:23:48.964 { 00:23:48.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.964 "dma_device_type": 2 00:23:48.964 }, 00:23:48.964 { 00:23:48.964 "dma_device_id": "system", 00:23:48.964 "dma_device_type": 1 00:23:48.964 }, 00:23:48.964 { 00:23:48.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:48.964 "dma_device_type": 2 00:23:48.964 } 00:23:48.964 ], 00:23:48.964 "driver_specific": { 00:23:48.964 "raid": { 00:23:48.964 "uuid": "f18402ad-0ffd-488b-af8b-448fe91f21b7", 00:23:48.964 "strip_size_kb": 64, 00:23:48.964 "state": "online", 00:23:48.964 "raid_level": "concat", 00:23:48.964 "superblock": true, 00:23:48.964 "num_base_bdevs": 4, 00:23:48.964 "num_base_bdevs_discovered": 4, 00:23:48.964 "num_base_bdevs_operational": 4, 00:23:48.964 "base_bdevs_list": [ 00:23:48.964 { 00:23:48.964 "name": "NewBaseBdev", 00:23:48.964 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:48.964 "is_configured": true, 00:23:48.964 "data_offset": 2048, 00:23:48.964 "data_size": 63488 00:23:48.964 }, 00:23:48.964 { 00:23:48.964 "name": "BaseBdev2", 00:23:48.964 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:48.964 "is_configured": true, 
00:23:48.965 "data_offset": 2048, 00:23:48.965 "data_size": 63488 00:23:48.965 }, 00:23:48.965 { 00:23:48.965 "name": "BaseBdev3", 00:23:48.965 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:48.965 "is_configured": true, 00:23:48.965 "data_offset": 2048, 00:23:48.965 "data_size": 63488 00:23:48.965 }, 00:23:48.965 { 00:23:48.965 "name": "BaseBdev4", 00:23:48.965 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:48.965 "is_configured": true, 00:23:48.965 "data_offset": 2048, 00:23:48.965 "data_size": 63488 00:23:48.965 } 00:23:48.965 ] 00:23:48.965 } 00:23:48.965 } 00:23:48.965 }' 00:23:48.965 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:49.222 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:23:49.222 BaseBdev2 00:23:49.222 BaseBdev3 00:23:49.222 BaseBdev4' 00:23:49.222 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:49.222 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:23:49.222 12:05:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:49.479 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:49.479 "name": "NewBaseBdev", 00:23:49.479 "aliases": [ 00:23:49.479 "754d5441-0df4-4751-981b-761d8e373737" 00:23:49.479 ], 00:23:49.479 "product_name": "Malloc disk", 00:23:49.479 "block_size": 512, 00:23:49.479 "num_blocks": 65536, 00:23:49.479 "uuid": "754d5441-0df4-4751-981b-761d8e373737", 00:23:49.479 "assigned_rate_limits": { 00:23:49.479 "rw_ios_per_sec": 0, 00:23:49.479 "rw_mbytes_per_sec": 0, 00:23:49.479 "r_mbytes_per_sec": 0, 00:23:49.479 "w_mbytes_per_sec": 0 00:23:49.479 }, 00:23:49.479 "claimed": true, 00:23:49.479 "claim_type": "exclusive_write", 00:23:49.479 "zoned": false, 00:23:49.479 "supported_io_types": { 00:23:49.479 "read": true, 00:23:49.479 "write": true, 00:23:49.479 "unmap": true, 00:23:49.479 "write_zeroes": true, 00:23:49.479 "flush": true, 00:23:49.479 "reset": true, 00:23:49.479 "compare": false, 00:23:49.479 "compare_and_write": false, 00:23:49.479 "abort": true, 00:23:49.479 "nvme_admin": false, 00:23:49.479 "nvme_io": false 00:23:49.479 }, 00:23:49.479 "memory_domains": [ 00:23:49.479 { 00:23:49.479 "dma_device_id": "system", 00:23:49.479 "dma_device_type": 1 00:23:49.479 }, 00:23:49.479 { 00:23:49.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:49.479 "dma_device_type": 2 00:23:49.479 } 00:23:49.479 ], 00:23:49.479 "driver_specific": {} 00:23:49.479 }' 00:23:49.479 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:49.479 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:49.479 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:49.479 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:49.479 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:49.479 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:49.479 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:49.479 
12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:49.737 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:49.737 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:49.737 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:49.737 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:49.737 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:49.737 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:23:49.737 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:50.015 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:50.015 "name": "BaseBdev2", 00:23:50.015 "aliases": [ 00:23:50.015 "287f4a8a-a247-4fe8-a178-408f50e8d721" 00:23:50.015 ], 00:23:50.015 "product_name": "Malloc disk", 00:23:50.015 "block_size": 512, 00:23:50.015 "num_blocks": 65536, 00:23:50.015 "uuid": "287f4a8a-a247-4fe8-a178-408f50e8d721", 00:23:50.015 "assigned_rate_limits": { 00:23:50.015 "rw_ios_per_sec": 0, 00:23:50.015 "rw_mbytes_per_sec": 0, 00:23:50.015 "r_mbytes_per_sec": 0, 00:23:50.015 "w_mbytes_per_sec": 0 00:23:50.015 }, 00:23:50.015 "claimed": true, 00:23:50.015 "claim_type": "exclusive_write", 00:23:50.015 "zoned": false, 00:23:50.015 "supported_io_types": { 00:23:50.015 "read": true, 00:23:50.015 "write": true, 00:23:50.015 "unmap": true, 00:23:50.015 "write_zeroes": true, 00:23:50.015 "flush": true, 00:23:50.015 "reset": true, 00:23:50.015 "compare": false, 00:23:50.015 "compare_and_write": false, 00:23:50.015 "abort": true, 00:23:50.015 "nvme_admin": false, 00:23:50.015 "nvme_io": false 00:23:50.015 }, 00:23:50.015 "memory_domains": [ 00:23:50.015 { 00:23:50.015 "dma_device_id": "system", 00:23:50.015 "dma_device_type": 1 00:23:50.015 }, 00:23:50.015 { 00:23:50.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.015 "dma_device_type": 2 00:23:50.015 } 00:23:50.015 ], 00:23:50.015 "driver_specific": {} 00:23:50.015 }' 00:23:50.015 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:50.015 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:50.015 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:50.015 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:50.015 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:50.307 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:50.307 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:50.307 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:50.307 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:50.307 12:05:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:50.307 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:50.307 12:05:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:50.307 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:50.307 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:23:50.307 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:50.565 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:50.565 "name": "BaseBdev3", 00:23:50.565 "aliases": [ 00:23:50.565 "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049" 00:23:50.565 ], 00:23:50.565 "product_name": "Malloc disk", 00:23:50.565 "block_size": 512, 00:23:50.565 "num_blocks": 65536, 00:23:50.565 "uuid": "5437ac1d-1c7f-4e77-9e04-e9b13a6d9049", 00:23:50.565 "assigned_rate_limits": { 00:23:50.565 "rw_ios_per_sec": 0, 00:23:50.565 "rw_mbytes_per_sec": 0, 00:23:50.565 "r_mbytes_per_sec": 0, 00:23:50.565 "w_mbytes_per_sec": 0 00:23:50.565 }, 00:23:50.565 "claimed": true, 00:23:50.565 "claim_type": "exclusive_write", 00:23:50.565 "zoned": false, 00:23:50.565 "supported_io_types": { 00:23:50.565 "read": true, 00:23:50.565 "write": true, 00:23:50.565 "unmap": true, 00:23:50.565 "write_zeroes": true, 00:23:50.565 "flush": true, 00:23:50.565 "reset": true, 00:23:50.565 "compare": false, 00:23:50.565 "compare_and_write": false, 00:23:50.565 "abort": true, 00:23:50.565 "nvme_admin": false, 00:23:50.565 "nvme_io": false 00:23:50.565 }, 00:23:50.565 "memory_domains": [ 00:23:50.565 { 00:23:50.565 "dma_device_id": "system", 00:23:50.565 "dma_device_type": 1 00:23:50.565 }, 00:23:50.565 { 00:23:50.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:50.565 "dma_device_type": 2 00:23:50.565 } 00:23:50.565 ], 00:23:50.565 "driver_specific": {} 00:23:50.565 }' 00:23:50.565 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:50.565 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:50.822 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:50.822 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:50.822 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:50.822 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:50.822 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:50.822 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:50.822 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:50.822 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:51.079 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:51.079 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:51.079 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:51.079 12:05:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:23:51.079 12:05:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:51.338 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:51.338 "name": "BaseBdev4", 00:23:51.338 "aliases": [ 00:23:51.338 "0743f83f-f955-47a8-ad56-61a3da5afaf1" 00:23:51.338 ], 00:23:51.338 "product_name": "Malloc disk", 00:23:51.338 "block_size": 512, 00:23:51.338 "num_blocks": 65536, 00:23:51.338 "uuid": "0743f83f-f955-47a8-ad56-61a3da5afaf1", 00:23:51.338 "assigned_rate_limits": { 00:23:51.338 "rw_ios_per_sec": 0, 00:23:51.338 "rw_mbytes_per_sec": 0, 00:23:51.338 "r_mbytes_per_sec": 0, 00:23:51.338 "w_mbytes_per_sec": 0 00:23:51.338 }, 00:23:51.338 "claimed": true, 00:23:51.338 "claim_type": "exclusive_write", 00:23:51.338 "zoned": false, 00:23:51.338 "supported_io_types": { 00:23:51.338 "read": true, 00:23:51.338 "write": true, 00:23:51.338 "unmap": true, 00:23:51.338 "write_zeroes": true, 00:23:51.338 "flush": true, 00:23:51.338 "reset": true, 00:23:51.338 "compare": false, 00:23:51.338 "compare_and_write": false, 00:23:51.338 "abort": true, 00:23:51.338 "nvme_admin": false, 00:23:51.338 "nvme_io": false 00:23:51.338 }, 00:23:51.338 "memory_domains": [ 00:23:51.338 { 00:23:51.338 "dma_device_id": "system", 00:23:51.338 "dma_device_type": 1 00:23:51.338 }, 00:23:51.338 { 00:23:51.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:51.338 "dma_device_type": 2 00:23:51.338 } 00:23:51.338 ], 00:23:51.338 "driver_specific": {} 00:23:51.338 }' 00:23:51.338 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:51.338 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:51.338 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:51.338 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:51.338 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:51.596 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:51.596 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:51.596 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:51.596 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:51.596 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:51.596 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:51.596 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:51.596 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:51.854 [2024-07-21 12:05:50.639217] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:51.854 [2024-07-21 12:05:50.639540] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:51.854 [2024-07-21 12:05:50.639752] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:51.854 [2024-07-21 12:05:50.639945] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:51.854 [2024-07-21 12:05:50.640056] bdev_raid.c: 
366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name Existed_Raid, state offline 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 148906 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 148906 ']' 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 148906 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 148906 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 148906' 00:23:51.854 killing process with pid 148906 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 148906 00:23:51.854 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 148906 00:23:51.854 [2024-07-21 12:05:50.680863] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:52.113 [2024-07-21 12:05:50.724573] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:52.113 ************************************ 00:23:52.113 END TEST raid_state_function_test_sb 00:23:52.113 ************************************ 00:23:52.113 12:05:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:23:52.113 00:23:52.113 real 0m34.238s 00:23:52.113 user 1m5.241s 00:23:52.113 sys 0m3.963s 00:23:52.113 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:52.113 12:05:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:23:52.371 12:05:51 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:23:52.371 12:05:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:52.371 12:05:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:52.371 12:05:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:23:52.371 ************************************ 00:23:52.371 START TEST raid_superblock_test 00:23:52.371 ************************************ 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test concat 4 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- 
# base_bdevs_pt_uuid=() 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=150015 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 150015 /var/tmp/spdk-raid.sock 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 150015 ']' 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:52.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:52.371 12:05:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:52.371 [2024-07-21 12:05:51.087991] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
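The raid_superblock_test above runs against a dedicated bdev_svc app driven over a private RPC socket (/var/tmp/spdk-raid.sock). A minimal shell sketch of that setup, assuming the SPDK tree at /home/vagrant/spdk_repo/spdk and substituting a simple polling loop for the harness's waitforlisten helper:

# start a bare bdev service with RAID debug logging on a private RPC socket
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
# wait until the app answers RPCs on that socket (illustrative stand-in for waitforlisten)
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs >/dev/null 2>&1; do
    sleep 0.1
done
# ... issue the RPC-driven test steps, then tear the app down
kill "$raid_pid"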
00:23:52.371 [2024-07-21 12:05:51.088715] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150015 ] 00:23:52.630 [2024-07-21 12:05:51.249660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.630 [2024-07-21 12:05:51.345199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.630 [2024-07-21 12:05:51.404380] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:53.196 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:53.454 malloc1 00:23:53.454 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:53.712 [2024-07-21 12:05:52.524669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:53.712 [2024-07-21 12:05:52.525101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:53.712 [2024-07-21 12:05:52.525338] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:23:53.712 [2024-07-21 12:05:52.525522] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:53.712 [2024-07-21 12:05:52.528587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:53.712 [2024-07-21 12:05:52.528805] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:53.712 pt1 00:23:53.713 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:53.713 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:53.713 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:23:53.713 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:23:53.713 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:53.713 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:23:53.713 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:53.713 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:53.713 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:53.971 malloc2 00:23:53.971 12:05:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:54.230 [2024-07-21 12:05:53.072734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:54.230 [2024-07-21 12:05:53.073127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.230 [2024-07-21 12:05:53.073368] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:23:54.230 [2024-07-21 12:05:53.073546] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.230 [2024-07-21 12:05:53.076328] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.230 [2024-07-21 12:05:53.076509] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:54.230 pt2 00:23:54.230 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:54.230 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:54.230 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:23:54.230 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:23:54.230 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:54.230 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:54.230 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:54.230 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:54.230 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:54.488 malloc3 00:23:54.488 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:54.746 [2024-07-21 12:05:53.573246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:54.746 [2024-07-21 12:05:53.573544] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:54.746 [2024-07-21 12:05:53.573731] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:23:54.746 [2024-07-21 12:05:53.573941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:54.746 [2024-07-21 12:05:53.576719] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:54.746 [2024-07-21 12:05:53.576925] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:54.746 pt3 00:23:54.746 12:05:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:54.746 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:54.747 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:23:54.747 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:23:54.747 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:23:54.747 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:54.747 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:23:54.747 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:54.747 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:23:55.004 malloc4 00:23:55.004 12:05:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:55.262 [2024-07-21 12:05:54.084652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:55.262 [2024-07-21 12:05:54.085009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:55.262 [2024-07-21 12:05:54.085215] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:23:55.262 [2024-07-21 12:05:54.085399] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:55.262 [2024-07-21 12:05:54.088174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:55.262 [2024-07-21 12:05:54.088382] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:55.262 pt4 00:23:55.262 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:23:55.262 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:23:55.262 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:23:55.520 [2024-07-21 12:05:54.316849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:55.520 [2024-07-21 12:05:54.319381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:55.520 [2024-07-21 12:05:54.319613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:55.520 [2024-07-21 12:05:54.319808] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:55.520 [2024-07-21 12:05:54.320192] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:23:55.520 [2024-07-21 12:05:54.320350] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:23:55.520 [2024-07-21 12:05:54.320595] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:23:55.520 [2024-07-21 12:05:54.321180] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:23:55.520 [2024-07-21 12:05:54.321322] bdev_raid.c:1725:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:23:55.520 [2024-07-21 12:05:54.321688] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:55.520 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:23:55.520 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:23:55.520 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:23:55.520 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:23:55.520 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:23:55.520 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:23:55.520 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:23:55.520 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:23:55.520 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:23:55.521 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:23:55.521 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.521 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.778 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:23:55.778 "name": "raid_bdev1", 00:23:55.778 "uuid": "89bdb965-8934-4f9e-a5e0-1b35160ea351", 00:23:55.778 "strip_size_kb": 64, 00:23:55.778 "state": "online", 00:23:55.778 "raid_level": "concat", 00:23:55.778 "superblock": true, 00:23:55.778 "num_base_bdevs": 4, 00:23:55.778 "num_base_bdevs_discovered": 4, 00:23:55.778 "num_base_bdevs_operational": 4, 00:23:55.778 "base_bdevs_list": [ 00:23:55.778 { 00:23:55.778 "name": "pt1", 00:23:55.778 "uuid": "f36bbef8-0e40-55f2-a020-690f13fa10a0", 00:23:55.778 "is_configured": true, 00:23:55.778 "data_offset": 2048, 00:23:55.778 "data_size": 63488 00:23:55.778 }, 00:23:55.778 { 00:23:55.778 "name": "pt2", 00:23:55.778 "uuid": "981ea11b-3371-512c-82d3-5b581f6b75d3", 00:23:55.778 "is_configured": true, 00:23:55.778 "data_offset": 2048, 00:23:55.778 "data_size": 63488 00:23:55.778 }, 00:23:55.778 { 00:23:55.778 "name": "pt3", 00:23:55.778 "uuid": "344edba2-554d-5902-82b4-9519d7be7201", 00:23:55.778 "is_configured": true, 00:23:55.778 "data_offset": 2048, 00:23:55.778 "data_size": 63488 00:23:55.778 }, 00:23:55.778 { 00:23:55.778 "name": "pt4", 00:23:55.778 "uuid": "7e6af9c3-9511-5652-af43-55b1cac7ba87", 00:23:55.778 "is_configured": true, 00:23:55.778 "data_offset": 2048, 00:23:55.778 "data_size": 63488 00:23:55.778 } 00:23:55.778 ] 00:23:55.778 }' 00:23:55.778 12:05:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:23:55.778 12:05:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:23:56.710 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:23:56.710 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:23:56.710 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
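The online state just verified is built from nothing but RPC calls: four 32 MiB malloc bdevs (512-byte blocks, 65536 blocks each) fronted by passthru bdevs, assembled into a concat array with a 64 KiB strip size and an on-disk superblock, then read back with bdev_raid_get_bdevs. A sketch of the same sequence, reusing this run's socket path and fixed passthru UUIDs for illustration only:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
# base devices: malloc bdevs wrapped by passthru bdevs with fixed UUIDs
for i in 1 2 3 4; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" -u "00000000-0000-0000-0000-00000000000$i"
done
# concat RAID over the four passthru bdevs, 64 KiB strips, superblock enabled (-s)
rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
# expect "online" here, with num_base_bdevs_discovered == 4 in the full record
rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'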
00:23:56.710 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:23:56.710 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:23:56.710 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:23:56.710 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:56.710 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:23:56.710 [2024-07-21 12:05:55.430164] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:56.710 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:23:56.710 "name": "raid_bdev1", 00:23:56.710 "aliases": [ 00:23:56.710 "89bdb965-8934-4f9e-a5e0-1b35160ea351" 00:23:56.710 ], 00:23:56.710 "product_name": "Raid Volume", 00:23:56.710 "block_size": 512, 00:23:56.710 "num_blocks": 253952, 00:23:56.710 "uuid": "89bdb965-8934-4f9e-a5e0-1b35160ea351", 00:23:56.710 "assigned_rate_limits": { 00:23:56.710 "rw_ios_per_sec": 0, 00:23:56.710 "rw_mbytes_per_sec": 0, 00:23:56.710 "r_mbytes_per_sec": 0, 00:23:56.710 "w_mbytes_per_sec": 0 00:23:56.710 }, 00:23:56.710 "claimed": false, 00:23:56.710 "zoned": false, 00:23:56.710 "supported_io_types": { 00:23:56.710 "read": true, 00:23:56.710 "write": true, 00:23:56.710 "unmap": true, 00:23:56.710 "write_zeroes": true, 00:23:56.710 "flush": true, 00:23:56.710 "reset": true, 00:23:56.710 "compare": false, 00:23:56.710 "compare_and_write": false, 00:23:56.710 "abort": false, 00:23:56.710 "nvme_admin": false, 00:23:56.710 "nvme_io": false 00:23:56.710 }, 00:23:56.710 "memory_domains": [ 00:23:56.710 { 00:23:56.710 "dma_device_id": "system", 00:23:56.710 "dma_device_type": 1 00:23:56.710 }, 00:23:56.710 { 00:23:56.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.710 "dma_device_type": 2 00:23:56.710 }, 00:23:56.710 { 00:23:56.710 "dma_device_id": "system", 00:23:56.710 "dma_device_type": 1 00:23:56.710 }, 00:23:56.710 { 00:23:56.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.710 "dma_device_type": 2 00:23:56.710 }, 00:23:56.710 { 00:23:56.710 "dma_device_id": "system", 00:23:56.710 "dma_device_type": 1 00:23:56.710 }, 00:23:56.710 { 00:23:56.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.710 "dma_device_type": 2 00:23:56.710 }, 00:23:56.710 { 00:23:56.710 "dma_device_id": "system", 00:23:56.710 "dma_device_type": 1 00:23:56.710 }, 00:23:56.710 { 00:23:56.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.710 "dma_device_type": 2 00:23:56.710 } 00:23:56.710 ], 00:23:56.710 "driver_specific": { 00:23:56.710 "raid": { 00:23:56.710 "uuid": "89bdb965-8934-4f9e-a5e0-1b35160ea351", 00:23:56.710 "strip_size_kb": 64, 00:23:56.710 "state": "online", 00:23:56.710 "raid_level": "concat", 00:23:56.710 "superblock": true, 00:23:56.710 "num_base_bdevs": 4, 00:23:56.710 "num_base_bdevs_discovered": 4, 00:23:56.710 "num_base_bdevs_operational": 4, 00:23:56.710 "base_bdevs_list": [ 00:23:56.710 { 00:23:56.710 "name": "pt1", 00:23:56.710 "uuid": "f36bbef8-0e40-55f2-a020-690f13fa10a0", 00:23:56.710 "is_configured": true, 00:23:56.710 "data_offset": 2048, 00:23:56.710 "data_size": 63488 00:23:56.710 }, 00:23:56.710 { 00:23:56.710 "name": "pt2", 00:23:56.710 "uuid": "981ea11b-3371-512c-82d3-5b581f6b75d3", 00:23:56.710 "is_configured": true, 00:23:56.710 "data_offset": 2048, 00:23:56.711 "data_size": 63488 00:23:56.711 }, 
00:23:56.711 { 00:23:56.711 "name": "pt3", 00:23:56.711 "uuid": "344edba2-554d-5902-82b4-9519d7be7201", 00:23:56.711 "is_configured": true, 00:23:56.711 "data_offset": 2048, 00:23:56.711 "data_size": 63488 00:23:56.711 }, 00:23:56.711 { 00:23:56.711 "name": "pt4", 00:23:56.711 "uuid": "7e6af9c3-9511-5652-af43-55b1cac7ba87", 00:23:56.711 "is_configured": true, 00:23:56.711 "data_offset": 2048, 00:23:56.711 "data_size": 63488 00:23:56.711 } 00:23:56.711 ] 00:23:56.711 } 00:23:56.711 } 00:23:56.711 }' 00:23:56.711 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:23:56.711 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:23:56.711 pt2 00:23:56.711 pt3 00:23:56.711 pt4' 00:23:56.711 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:56.711 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:23:56.711 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:56.968 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:56.968 "name": "pt1", 00:23:56.968 "aliases": [ 00:23:56.968 "f36bbef8-0e40-55f2-a020-690f13fa10a0" 00:23:56.968 ], 00:23:56.968 "product_name": "passthru", 00:23:56.968 "block_size": 512, 00:23:56.968 "num_blocks": 65536, 00:23:56.968 "uuid": "f36bbef8-0e40-55f2-a020-690f13fa10a0", 00:23:56.968 "assigned_rate_limits": { 00:23:56.968 "rw_ios_per_sec": 0, 00:23:56.968 "rw_mbytes_per_sec": 0, 00:23:56.968 "r_mbytes_per_sec": 0, 00:23:56.969 "w_mbytes_per_sec": 0 00:23:56.969 }, 00:23:56.969 "claimed": true, 00:23:56.969 "claim_type": "exclusive_write", 00:23:56.969 "zoned": false, 00:23:56.969 "supported_io_types": { 00:23:56.969 "read": true, 00:23:56.969 "write": true, 00:23:56.969 "unmap": true, 00:23:56.969 "write_zeroes": true, 00:23:56.969 "flush": true, 00:23:56.969 "reset": true, 00:23:56.969 "compare": false, 00:23:56.969 "compare_and_write": false, 00:23:56.969 "abort": true, 00:23:56.969 "nvme_admin": false, 00:23:56.969 "nvme_io": false 00:23:56.969 }, 00:23:56.969 "memory_domains": [ 00:23:56.969 { 00:23:56.969 "dma_device_id": "system", 00:23:56.969 "dma_device_type": 1 00:23:56.969 }, 00:23:56.969 { 00:23:56.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:56.969 "dma_device_type": 2 00:23:56.969 } 00:23:56.969 ], 00:23:56.969 "driver_specific": { 00:23:56.969 "passthru": { 00:23:56.969 "name": "pt1", 00:23:56.969 "base_bdev_name": "malloc1" 00:23:56.969 } 00:23:56.969 } 00:23:56.969 }' 00:23:56.969 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:56.969 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:56.969 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:56.969 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.227 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.227 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.227 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.227 12:05:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.227 12:05:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.227 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.227 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.484 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.484 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:57.484 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:23:57.484 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.484 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:57.484 "name": "pt2", 00:23:57.484 "aliases": [ 00:23:57.484 "981ea11b-3371-512c-82d3-5b581f6b75d3" 00:23:57.484 ], 00:23:57.484 "product_name": "passthru", 00:23:57.484 "block_size": 512, 00:23:57.484 "num_blocks": 65536, 00:23:57.484 "uuid": "981ea11b-3371-512c-82d3-5b581f6b75d3", 00:23:57.484 "assigned_rate_limits": { 00:23:57.484 "rw_ios_per_sec": 0, 00:23:57.484 "rw_mbytes_per_sec": 0, 00:23:57.484 "r_mbytes_per_sec": 0, 00:23:57.484 "w_mbytes_per_sec": 0 00:23:57.484 }, 00:23:57.484 "claimed": true, 00:23:57.484 "claim_type": "exclusive_write", 00:23:57.484 "zoned": false, 00:23:57.484 "supported_io_types": { 00:23:57.484 "read": true, 00:23:57.484 "write": true, 00:23:57.484 "unmap": true, 00:23:57.484 "write_zeroes": true, 00:23:57.484 "flush": true, 00:23:57.484 "reset": true, 00:23:57.484 "compare": false, 00:23:57.484 "compare_and_write": false, 00:23:57.484 "abort": true, 00:23:57.484 "nvme_admin": false, 00:23:57.484 "nvme_io": false 00:23:57.484 }, 00:23:57.484 "memory_domains": [ 00:23:57.484 { 00:23:57.484 "dma_device_id": "system", 00:23:57.484 "dma_device_type": 1 00:23:57.484 }, 00:23:57.484 { 00:23:57.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:57.484 "dma_device_type": 2 00:23:57.484 } 00:23:57.484 ], 00:23:57.484 "driver_specific": { 00:23:57.484 "passthru": { 00:23:57.484 "name": "pt2", 00:23:57.484 "base_bdev_name": "malloc2" 00:23:57.484 } 00:23:57.484 } 00:23:57.484 }' 00:23:57.484 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.742 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:57.742 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:57.742 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.742 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:57.742 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:57.742 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.742 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:57.999 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:57.999 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.999 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:57.999 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:57.999 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:23:57.999 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:57.999 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:23:58.257 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:58.257 "name": "pt3", 00:23:58.257 "aliases": [ 00:23:58.257 "344edba2-554d-5902-82b4-9519d7be7201" 00:23:58.257 ], 00:23:58.257 "product_name": "passthru", 00:23:58.257 "block_size": 512, 00:23:58.257 "num_blocks": 65536, 00:23:58.257 "uuid": "344edba2-554d-5902-82b4-9519d7be7201", 00:23:58.257 "assigned_rate_limits": { 00:23:58.257 "rw_ios_per_sec": 0, 00:23:58.257 "rw_mbytes_per_sec": 0, 00:23:58.257 "r_mbytes_per_sec": 0, 00:23:58.257 "w_mbytes_per_sec": 0 00:23:58.257 }, 00:23:58.257 "claimed": true, 00:23:58.257 "claim_type": "exclusive_write", 00:23:58.257 "zoned": false, 00:23:58.257 "supported_io_types": { 00:23:58.257 "read": true, 00:23:58.257 "write": true, 00:23:58.257 "unmap": true, 00:23:58.257 "write_zeroes": true, 00:23:58.257 "flush": true, 00:23:58.257 "reset": true, 00:23:58.257 "compare": false, 00:23:58.257 "compare_and_write": false, 00:23:58.257 "abort": true, 00:23:58.257 "nvme_admin": false, 00:23:58.257 "nvme_io": false 00:23:58.257 }, 00:23:58.257 "memory_domains": [ 00:23:58.257 { 00:23:58.257 "dma_device_id": "system", 00:23:58.257 "dma_device_type": 1 00:23:58.257 }, 00:23:58.257 { 00:23:58.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.257 "dma_device_type": 2 00:23:58.257 } 00:23:58.257 ], 00:23:58.257 "driver_specific": { 00:23:58.257 "passthru": { 00:23:58.257 "name": "pt3", 00:23:58.257 "base_bdev_name": "malloc3" 00:23:58.257 } 00:23:58.257 } 00:23:58.257 }' 00:23:58.257 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.257 12:05:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.257 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:58.257 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.257 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:23:58.514 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:23:58.771 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:23:58.771 "name": "pt4", 00:23:58.771 "aliases": [ 
00:23:58.771 "7e6af9c3-9511-5652-af43-55b1cac7ba87" 00:23:58.771 ], 00:23:58.771 "product_name": "passthru", 00:23:58.771 "block_size": 512, 00:23:58.771 "num_blocks": 65536, 00:23:58.771 "uuid": "7e6af9c3-9511-5652-af43-55b1cac7ba87", 00:23:58.771 "assigned_rate_limits": { 00:23:58.771 "rw_ios_per_sec": 0, 00:23:58.771 "rw_mbytes_per_sec": 0, 00:23:58.771 "r_mbytes_per_sec": 0, 00:23:58.771 "w_mbytes_per_sec": 0 00:23:58.771 }, 00:23:58.771 "claimed": true, 00:23:58.771 "claim_type": "exclusive_write", 00:23:58.771 "zoned": false, 00:23:58.771 "supported_io_types": { 00:23:58.771 "read": true, 00:23:58.771 "write": true, 00:23:58.771 "unmap": true, 00:23:58.771 "write_zeroes": true, 00:23:58.771 "flush": true, 00:23:58.771 "reset": true, 00:23:58.771 "compare": false, 00:23:58.771 "compare_and_write": false, 00:23:58.771 "abort": true, 00:23:58.771 "nvme_admin": false, 00:23:58.771 "nvme_io": false 00:23:58.771 }, 00:23:58.771 "memory_domains": [ 00:23:58.771 { 00:23:58.771 "dma_device_id": "system", 00:23:58.771 "dma_device_type": 1 00:23:58.771 }, 00:23:58.771 { 00:23:58.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:58.771 "dma_device_type": 2 00:23:58.771 } 00:23:58.771 ], 00:23:58.771 "driver_specific": { 00:23:58.771 "passthru": { 00:23:58.771 "name": "pt4", 00:23:58.771 "base_bdev_name": "malloc4" 00:23:58.771 } 00:23:58.771 } 00:23:58.771 }' 00:23:58.771 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.771 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:23:58.771 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:23:58.771 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:59.028 12:05:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:23:59.286 [2024-07-21 12:05:58.138728] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:59.543 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=89bdb965-8934-4f9e-a5e0-1b35160ea351 00:23:59.543 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 89bdb965-8934-4f9e-a5e0-1b35160ea351 ']' 00:23:59.543 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:59.801 [2024-07-21 12:05:58.410520] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:59.801 
[2024-07-21 12:05:58.410746] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:59.801 [2024-07-21 12:05:58.410977] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:59.801 [2024-07-21 12:05:58.411182] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:59.801 [2024-07-21 12:05:58.411311] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:23:59.801 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:59.801 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:23:59.801 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:23:59.801 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:23:59.801 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:23:59.801 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:00.059 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:00.059 12:05:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:00.317 12:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:00.317 12:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:00.575 12:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:24:00.575 12:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:00.833 12:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:00.833 12:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- 
# type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:01.092 12:05:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:01.350 [2024-07-21 12:06:00.066821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:01.350 [2024-07-21 12:06:00.069362] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:01.350 [2024-07-21 12:06:00.069582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:01.350 [2024-07-21 12:06:00.069682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:01.350 [2024-07-21 12:06:00.069838] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:24:01.350 [2024-07-21 12:06:00.070049] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:24:01.350 [2024-07-21 12:06:00.070233] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:24:01.350 [2024-07-21 12:06:00.070421] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:24:01.350 [2024-07-21 12:06:00.070598] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:01.350 [2024-07-21 12:06:00.070725] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:24:01.350 request: 00:24:01.350 { 00:24:01.350 "name": "raid_bdev1", 00:24:01.350 "raid_level": "concat", 00:24:01.350 "base_bdevs": [ 00:24:01.350 "malloc1", 00:24:01.350 "malloc2", 00:24:01.350 "malloc3", 00:24:01.350 "malloc4" 00:24:01.350 ], 00:24:01.350 "superblock": false, 00:24:01.350 "strip_size_kb": 64, 00:24:01.350 "method": "bdev_raid_create", 00:24:01.350 "req_id": 1 00:24:01.351 } 00:24:01.351 Got JSON-RPC error response 00:24:01.351 response: 00:24:01.351 { 00:24:01.351 "code": -17, 00:24:01.351 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:01.351 } 00:24:01.351 12:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:24:01.351 12:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:01.351 12:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:01.351 12:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:01.351 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.351 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:24:01.609 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:24:01.609 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:24:01.609 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:01.867 [2024-07-21 12:06:00.555185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:01.867 [2024-07-21 12:06:00.555601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.867 [2024-07-21 12:06:00.555770] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:24:01.867 [2024-07-21 12:06:00.555906] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.867 [2024-07-21 12:06:00.558452] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.867 [2024-07-21 12:06:00.558698] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:01.867 [2024-07-21 12:06:00.558942] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:24:01.867 [2024-07-21 12:06:00.559135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:01.867 pt1 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.867 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.125 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:02.125 "name": "raid_bdev1", 00:24:02.125 "uuid": "89bdb965-8934-4f9e-a5e0-1b35160ea351", 00:24:02.125 "strip_size_kb": 64, 00:24:02.125 "state": "configuring", 00:24:02.125 "raid_level": "concat", 00:24:02.125 "superblock": true, 00:24:02.125 "num_base_bdevs": 4, 00:24:02.125 "num_base_bdevs_discovered": 1, 00:24:02.125 "num_base_bdevs_operational": 4, 00:24:02.125 "base_bdevs_list": [ 00:24:02.125 { 00:24:02.125 "name": "pt1", 00:24:02.125 "uuid": 
"f36bbef8-0e40-55f2-a020-690f13fa10a0", 00:24:02.125 "is_configured": true, 00:24:02.125 "data_offset": 2048, 00:24:02.125 "data_size": 63488 00:24:02.125 }, 00:24:02.125 { 00:24:02.125 "name": null, 00:24:02.125 "uuid": "981ea11b-3371-512c-82d3-5b581f6b75d3", 00:24:02.125 "is_configured": false, 00:24:02.125 "data_offset": 2048, 00:24:02.125 "data_size": 63488 00:24:02.125 }, 00:24:02.125 { 00:24:02.125 "name": null, 00:24:02.125 "uuid": "344edba2-554d-5902-82b4-9519d7be7201", 00:24:02.125 "is_configured": false, 00:24:02.125 "data_offset": 2048, 00:24:02.125 "data_size": 63488 00:24:02.125 }, 00:24:02.125 { 00:24:02.125 "name": null, 00:24:02.125 "uuid": "7e6af9c3-9511-5652-af43-55b1cac7ba87", 00:24:02.125 "is_configured": false, 00:24:02.125 "data_offset": 2048, 00:24:02.125 "data_size": 63488 00:24:02.125 } 00:24:02.125 ] 00:24:02.125 }' 00:24:02.125 12:06:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:02.125 12:06:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:02.689 12:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:24:02.689 12:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:02.945 [2024-07-21 12:06:01.771881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:02.945 [2024-07-21 12:06:01.772227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:02.945 [2024-07-21 12:06:01.772431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:24:02.945 [2024-07-21 12:06:01.772584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:02.945 [2024-07-21 12:06:01.773195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:02.945 [2024-07-21 12:06:01.773378] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:02.946 [2024-07-21 12:06:01.773670] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:02.946 [2024-07-21 12:06:01.773817] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:02.946 pt2 00:24:02.946 12:06:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:03.203 [2024-07-21 12:06:02.004004] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.203 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.461 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:03.461 "name": "raid_bdev1", 00:24:03.461 "uuid": "89bdb965-8934-4f9e-a5e0-1b35160ea351", 00:24:03.461 "strip_size_kb": 64, 00:24:03.461 "state": "configuring", 00:24:03.461 "raid_level": "concat", 00:24:03.461 "superblock": true, 00:24:03.461 "num_base_bdevs": 4, 00:24:03.461 "num_base_bdevs_discovered": 1, 00:24:03.461 "num_base_bdevs_operational": 4, 00:24:03.461 "base_bdevs_list": [ 00:24:03.461 { 00:24:03.461 "name": "pt1", 00:24:03.461 "uuid": "f36bbef8-0e40-55f2-a020-690f13fa10a0", 00:24:03.461 "is_configured": true, 00:24:03.461 "data_offset": 2048, 00:24:03.461 "data_size": 63488 00:24:03.461 }, 00:24:03.461 { 00:24:03.461 "name": null, 00:24:03.461 "uuid": "981ea11b-3371-512c-82d3-5b581f6b75d3", 00:24:03.461 "is_configured": false, 00:24:03.461 "data_offset": 2048, 00:24:03.461 "data_size": 63488 00:24:03.461 }, 00:24:03.461 { 00:24:03.461 "name": null, 00:24:03.461 "uuid": "344edba2-554d-5902-82b4-9519d7be7201", 00:24:03.461 "is_configured": false, 00:24:03.461 "data_offset": 2048, 00:24:03.461 "data_size": 63488 00:24:03.461 }, 00:24:03.461 { 00:24:03.461 "name": null, 00:24:03.461 "uuid": "7e6af9c3-9511-5652-af43-55b1cac7ba87", 00:24:03.461 "is_configured": false, 00:24:03.461 "data_offset": 2048, 00:24:03.461 "data_size": 63488 00:24:03.461 } 00:24:03.461 ] 00:24:03.461 }' 00:24:03.461 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:03.461 12:06:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:04.395 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:24:04.395 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:04.395 12:06:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:04.395 [2024-07-21 12:06:03.140181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:04.395 [2024-07-21 12:06:03.140573] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.395 [2024-07-21 12:06:03.140663] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:04.395 [2024-07-21 12:06:03.140919] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.395 [2024-07-21 12:06:03.141533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.395 [2024-07-21 12:06:03.141726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:04.395 [2024-07-21 12:06:03.141950] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:24:04.395 [2024-07-21 12:06:03.142106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:04.395 pt2 
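For orientation, the verify_raid_bdev_state checks running above reduce to one RPC query against the test's dedicated socket plus a jq filter. A minimal sketch of that check, using only calls visible in this log (the expected values are parameterized in the real helper, so treat the literal comparisons as illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # pull the entry for raid_bdev1 out of the full raid bdev list
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # the helper then compares .state, .raid_level, .strip_size_kb and the number of
  # configured entries in .base_bdevs_list against the expected
  # "configuring" / "concat" / 64 / 4 values seen in the JSON above
  echo "$info" | jq -r '.state'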
00:24:04.395 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:04.395 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:04.395 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:04.659 [2024-07-21 12:06:03.376246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:04.659 [2024-07-21 12:06:03.376536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.659 [2024-07-21 12:06:03.376698] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:04.659 [2024-07-21 12:06:03.376834] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.659 [2024-07-21 12:06:03.377396] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.659 [2024-07-21 12:06:03.377582] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:04.659 [2024-07-21 12:06:03.377800] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:24:04.659 [2024-07-21 12:06:03.377952] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:04.659 pt3 00:24:04.659 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:04.659 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:04.659 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:04.928 [2024-07-21 12:06:03.604348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:04.928 [2024-07-21 12:06:03.604633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:04.928 [2024-07-21 12:06:03.604723] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:04.928 [2024-07-21 12:06:03.604973] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:04.928 [2024-07-21 12:06:03.605590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:04.928 [2024-07-21 12:06:03.605782] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:04.928 [2024-07-21 12:06:03.606016] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:24:04.928 [2024-07-21 12:06:03.606168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:04.928 [2024-07-21 12:06:03.606433] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:24:04.928 [2024-07-21 12:06:03.606600] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:04.928 [2024-07-21 12:06:03.606797] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:24:04.928 [2024-07-21 12:06:03.607293] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:24:04.928 [2024-07-21 12:06:03.607421] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:24:04.928 [2024-07-21 12:06:03.607639] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:04.928 pt4 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.928 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:05.186 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:05.186 "name": "raid_bdev1", 00:24:05.186 "uuid": "89bdb965-8934-4f9e-a5e0-1b35160ea351", 00:24:05.186 "strip_size_kb": 64, 00:24:05.186 "state": "online", 00:24:05.186 "raid_level": "concat", 00:24:05.186 "superblock": true, 00:24:05.186 "num_base_bdevs": 4, 00:24:05.186 "num_base_bdevs_discovered": 4, 00:24:05.186 "num_base_bdevs_operational": 4, 00:24:05.186 "base_bdevs_list": [ 00:24:05.186 { 00:24:05.186 "name": "pt1", 00:24:05.186 "uuid": "f36bbef8-0e40-55f2-a020-690f13fa10a0", 00:24:05.186 "is_configured": true, 00:24:05.186 "data_offset": 2048, 00:24:05.186 "data_size": 63488 00:24:05.186 }, 00:24:05.186 { 00:24:05.186 "name": "pt2", 00:24:05.186 "uuid": "981ea11b-3371-512c-82d3-5b581f6b75d3", 00:24:05.186 "is_configured": true, 00:24:05.186 "data_offset": 2048, 00:24:05.186 "data_size": 63488 00:24:05.186 }, 00:24:05.186 { 00:24:05.186 "name": "pt3", 00:24:05.186 "uuid": "344edba2-554d-5902-82b4-9519d7be7201", 00:24:05.186 "is_configured": true, 00:24:05.186 "data_offset": 2048, 00:24:05.186 "data_size": 63488 00:24:05.186 }, 00:24:05.186 { 00:24:05.186 "name": "pt4", 00:24:05.186 "uuid": "7e6af9c3-9511-5652-af43-55b1cac7ba87", 00:24:05.186 "is_configured": true, 00:24:05.186 "data_offset": 2048, 00:24:05.186 "data_size": 63488 00:24:05.186 } 00:24:05.186 ] 00:24:05.186 }' 00:24:05.186 12:06:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:05.186 12:06:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:05.750 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:24:05.750 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:24:05.750 12:06:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:05.750 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:05.750 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:05.750 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:05.750 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:05.750 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:06.007 [2024-07-21 12:06:04.728868] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:06.007 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:06.007 "name": "raid_bdev1", 00:24:06.007 "aliases": [ 00:24:06.007 "89bdb965-8934-4f9e-a5e0-1b35160ea351" 00:24:06.007 ], 00:24:06.007 "product_name": "Raid Volume", 00:24:06.007 "block_size": 512, 00:24:06.007 "num_blocks": 253952, 00:24:06.007 "uuid": "89bdb965-8934-4f9e-a5e0-1b35160ea351", 00:24:06.007 "assigned_rate_limits": { 00:24:06.007 "rw_ios_per_sec": 0, 00:24:06.007 "rw_mbytes_per_sec": 0, 00:24:06.007 "r_mbytes_per_sec": 0, 00:24:06.007 "w_mbytes_per_sec": 0 00:24:06.007 }, 00:24:06.007 "claimed": false, 00:24:06.007 "zoned": false, 00:24:06.007 "supported_io_types": { 00:24:06.007 "read": true, 00:24:06.007 "write": true, 00:24:06.007 "unmap": true, 00:24:06.007 "write_zeroes": true, 00:24:06.007 "flush": true, 00:24:06.007 "reset": true, 00:24:06.007 "compare": false, 00:24:06.007 "compare_and_write": false, 00:24:06.007 "abort": false, 00:24:06.007 "nvme_admin": false, 00:24:06.007 "nvme_io": false 00:24:06.007 }, 00:24:06.007 "memory_domains": [ 00:24:06.007 { 00:24:06.007 "dma_device_id": "system", 00:24:06.007 "dma_device_type": 1 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.007 "dma_device_type": 2 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "dma_device_id": "system", 00:24:06.007 "dma_device_type": 1 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.007 "dma_device_type": 2 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "dma_device_id": "system", 00:24:06.007 "dma_device_type": 1 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.007 "dma_device_type": 2 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "dma_device_id": "system", 00:24:06.007 "dma_device_type": 1 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.007 "dma_device_type": 2 00:24:06.007 } 00:24:06.007 ], 00:24:06.007 "driver_specific": { 00:24:06.007 "raid": { 00:24:06.007 "uuid": "89bdb965-8934-4f9e-a5e0-1b35160ea351", 00:24:06.007 "strip_size_kb": 64, 00:24:06.007 "state": "online", 00:24:06.007 "raid_level": "concat", 00:24:06.007 "superblock": true, 00:24:06.007 "num_base_bdevs": 4, 00:24:06.007 "num_base_bdevs_discovered": 4, 00:24:06.007 "num_base_bdevs_operational": 4, 00:24:06.007 "base_bdevs_list": [ 00:24:06.007 { 00:24:06.007 "name": "pt1", 00:24:06.007 "uuid": "f36bbef8-0e40-55f2-a020-690f13fa10a0", 00:24:06.007 "is_configured": true, 00:24:06.007 "data_offset": 2048, 00:24:06.007 "data_size": 63488 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "name": "pt2", 00:24:06.007 "uuid": "981ea11b-3371-512c-82d3-5b581f6b75d3", 00:24:06.007 "is_configured": true, 
00:24:06.007 "data_offset": 2048, 00:24:06.007 "data_size": 63488 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "name": "pt3", 00:24:06.007 "uuid": "344edba2-554d-5902-82b4-9519d7be7201", 00:24:06.007 "is_configured": true, 00:24:06.007 "data_offset": 2048, 00:24:06.007 "data_size": 63488 00:24:06.007 }, 00:24:06.007 { 00:24:06.007 "name": "pt4", 00:24:06.007 "uuid": "7e6af9c3-9511-5652-af43-55b1cac7ba87", 00:24:06.007 "is_configured": true, 00:24:06.007 "data_offset": 2048, 00:24:06.007 "data_size": 63488 00:24:06.007 } 00:24:06.007 ] 00:24:06.007 } 00:24:06.007 } 00:24:06.007 }' 00:24:06.007 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:06.007 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:24:06.007 pt2 00:24:06.007 pt3 00:24:06.007 pt4' 00:24:06.007 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:06.007 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:24:06.007 12:06:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:06.263 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:06.263 "name": "pt1", 00:24:06.263 "aliases": [ 00:24:06.263 "f36bbef8-0e40-55f2-a020-690f13fa10a0" 00:24:06.263 ], 00:24:06.263 "product_name": "passthru", 00:24:06.263 "block_size": 512, 00:24:06.263 "num_blocks": 65536, 00:24:06.263 "uuid": "f36bbef8-0e40-55f2-a020-690f13fa10a0", 00:24:06.263 "assigned_rate_limits": { 00:24:06.263 "rw_ios_per_sec": 0, 00:24:06.263 "rw_mbytes_per_sec": 0, 00:24:06.263 "r_mbytes_per_sec": 0, 00:24:06.263 "w_mbytes_per_sec": 0 00:24:06.263 }, 00:24:06.263 "claimed": true, 00:24:06.263 "claim_type": "exclusive_write", 00:24:06.263 "zoned": false, 00:24:06.263 "supported_io_types": { 00:24:06.263 "read": true, 00:24:06.263 "write": true, 00:24:06.263 "unmap": true, 00:24:06.263 "write_zeroes": true, 00:24:06.263 "flush": true, 00:24:06.263 "reset": true, 00:24:06.263 "compare": false, 00:24:06.263 "compare_and_write": false, 00:24:06.263 "abort": true, 00:24:06.263 "nvme_admin": false, 00:24:06.263 "nvme_io": false 00:24:06.263 }, 00:24:06.263 "memory_domains": [ 00:24:06.263 { 00:24:06.263 "dma_device_id": "system", 00:24:06.263 "dma_device_type": 1 00:24:06.263 }, 00:24:06.263 { 00:24:06.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:06.263 "dma_device_type": 2 00:24:06.263 } 00:24:06.263 ], 00:24:06.263 "driver_specific": { 00:24:06.263 "passthru": { 00:24:06.263 "name": "pt1", 00:24:06.263 "base_bdev_name": "malloc1" 00:24:06.263 } 00:24:06.263 } 00:24:06.263 }' 00:24:06.263 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:06.263 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:06.520 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:06.520 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:06.520 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:06.520 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:06.520 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:06.520 12:06:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:06.520 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:06.520 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.777 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:06.777 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:06.777 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:06.777 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:24:06.777 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:07.034 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:07.034 "name": "pt2", 00:24:07.034 "aliases": [ 00:24:07.034 "981ea11b-3371-512c-82d3-5b581f6b75d3" 00:24:07.034 ], 00:24:07.034 "product_name": "passthru", 00:24:07.034 "block_size": 512, 00:24:07.034 "num_blocks": 65536, 00:24:07.034 "uuid": "981ea11b-3371-512c-82d3-5b581f6b75d3", 00:24:07.034 "assigned_rate_limits": { 00:24:07.034 "rw_ios_per_sec": 0, 00:24:07.034 "rw_mbytes_per_sec": 0, 00:24:07.034 "r_mbytes_per_sec": 0, 00:24:07.034 "w_mbytes_per_sec": 0 00:24:07.034 }, 00:24:07.034 "claimed": true, 00:24:07.034 "claim_type": "exclusive_write", 00:24:07.034 "zoned": false, 00:24:07.034 "supported_io_types": { 00:24:07.034 "read": true, 00:24:07.034 "write": true, 00:24:07.034 "unmap": true, 00:24:07.034 "write_zeroes": true, 00:24:07.034 "flush": true, 00:24:07.034 "reset": true, 00:24:07.034 "compare": false, 00:24:07.034 "compare_and_write": false, 00:24:07.034 "abort": true, 00:24:07.034 "nvme_admin": false, 00:24:07.034 "nvme_io": false 00:24:07.034 }, 00:24:07.034 "memory_domains": [ 00:24:07.034 { 00:24:07.034 "dma_device_id": "system", 00:24:07.034 "dma_device_type": 1 00:24:07.034 }, 00:24:07.034 { 00:24:07.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.034 "dma_device_type": 2 00:24:07.034 } 00:24:07.034 ], 00:24:07.034 "driver_specific": { 00:24:07.034 "passthru": { 00:24:07.034 "name": "pt2", 00:24:07.034 "base_bdev_name": "malloc2" 00:24:07.034 } 00:24:07.034 } 00:24:07.034 }' 00:24:07.034 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.034 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.034 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:07.034 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.034 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.291 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:07.291 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.291 12:06:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.291 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:07.291 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.291 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:07.291 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # 
[[ null == null ]] 00:24:07.291 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:07.291 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:24:07.291 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:07.548 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:07.548 "name": "pt3", 00:24:07.548 "aliases": [ 00:24:07.548 "344edba2-554d-5902-82b4-9519d7be7201" 00:24:07.548 ], 00:24:07.548 "product_name": "passthru", 00:24:07.548 "block_size": 512, 00:24:07.548 "num_blocks": 65536, 00:24:07.548 "uuid": "344edba2-554d-5902-82b4-9519d7be7201", 00:24:07.548 "assigned_rate_limits": { 00:24:07.548 "rw_ios_per_sec": 0, 00:24:07.548 "rw_mbytes_per_sec": 0, 00:24:07.548 "r_mbytes_per_sec": 0, 00:24:07.548 "w_mbytes_per_sec": 0 00:24:07.548 }, 00:24:07.548 "claimed": true, 00:24:07.548 "claim_type": "exclusive_write", 00:24:07.548 "zoned": false, 00:24:07.548 "supported_io_types": { 00:24:07.548 "read": true, 00:24:07.548 "write": true, 00:24:07.548 "unmap": true, 00:24:07.548 "write_zeroes": true, 00:24:07.548 "flush": true, 00:24:07.548 "reset": true, 00:24:07.548 "compare": false, 00:24:07.548 "compare_and_write": false, 00:24:07.548 "abort": true, 00:24:07.548 "nvme_admin": false, 00:24:07.548 "nvme_io": false 00:24:07.548 }, 00:24:07.548 "memory_domains": [ 00:24:07.548 { 00:24:07.548 "dma_device_id": "system", 00:24:07.548 "dma_device_type": 1 00:24:07.548 }, 00:24:07.548 { 00:24:07.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:07.548 "dma_device_type": 2 00:24:07.548 } 00:24:07.548 ], 00:24:07.548 "driver_specific": { 00:24:07.548 "passthru": { 00:24:07.548 "name": "pt3", 00:24:07.548 "base_bdev_name": "malloc3" 00:24:07.548 } 00:24:07.548 } 00:24:07.548 }' 00:24:07.548 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.548 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:07.805 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:07.805 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.805 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:07.805 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:07.805 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.805 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:07.805 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:07.805 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.062 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.062 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:08.062 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:08.062 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:24:08.062 12:06:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:08.318 12:06:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:08.318 "name": "pt4", 00:24:08.318 "aliases": [ 00:24:08.318 "7e6af9c3-9511-5652-af43-55b1cac7ba87" 00:24:08.318 ], 00:24:08.318 "product_name": "passthru", 00:24:08.318 "block_size": 512, 00:24:08.318 "num_blocks": 65536, 00:24:08.318 "uuid": "7e6af9c3-9511-5652-af43-55b1cac7ba87", 00:24:08.318 "assigned_rate_limits": { 00:24:08.318 "rw_ios_per_sec": 0, 00:24:08.318 "rw_mbytes_per_sec": 0, 00:24:08.318 "r_mbytes_per_sec": 0, 00:24:08.318 "w_mbytes_per_sec": 0 00:24:08.318 }, 00:24:08.318 "claimed": true, 00:24:08.318 "claim_type": "exclusive_write", 00:24:08.318 "zoned": false, 00:24:08.318 "supported_io_types": { 00:24:08.318 "read": true, 00:24:08.318 "write": true, 00:24:08.318 "unmap": true, 00:24:08.318 "write_zeroes": true, 00:24:08.318 "flush": true, 00:24:08.318 "reset": true, 00:24:08.318 "compare": false, 00:24:08.318 "compare_and_write": false, 00:24:08.318 "abort": true, 00:24:08.318 "nvme_admin": false, 00:24:08.318 "nvme_io": false 00:24:08.318 }, 00:24:08.318 "memory_domains": [ 00:24:08.318 { 00:24:08.318 "dma_device_id": "system", 00:24:08.318 "dma_device_type": 1 00:24:08.318 }, 00:24:08.318 { 00:24:08.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:08.318 "dma_device_type": 2 00:24:08.318 } 00:24:08.318 ], 00:24:08.318 "driver_specific": { 00:24:08.318 "passthru": { 00:24:08.318 "name": "pt4", 00:24:08.318 "base_bdev_name": "malloc4" 00:24:08.318 } 00:24:08.318 } 00:24:08.318 }' 00:24:08.318 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.318 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:08.318 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:08.318 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.318 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:08.575 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:08.575 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.575 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:08.575 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:08.575 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.575 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:08.575 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:08.575 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:08.575 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:24:08.833 [2024-07-21 12:06:07.675592] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:08.833 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 89bdb965-8934-4f9e-a5e0-1b35160ea351 '!=' 89bdb965-8934-4f9e-a5e0-1b35160ea351 ']' 00:24:08.833 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:09.091 12:06:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 150015 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 150015 ']' 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 150015 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 150015 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 150015' 00:24:09.091 killing process with pid 150015 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 150015 00:24:09.091 12:06:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 150015 00:24:09.091 [2024-07-21 12:06:07.726198] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:09.091 [2024-07-21 12:06:07.726295] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:09.091 [2024-07-21 12:06:07.726375] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:09.091 [2024-07-21 12:06:07.726611] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:24:09.091 [2024-07-21 12:06:07.774897] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:09.349 12:06:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:24:09.349 00:24:09.349 real 0m16.991s 00:24:09.349 user 0m31.806s 00:24:09.349 sys 0m1.970s 00:24:09.349 12:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:09.349 12:06:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.349 ************************************ 00:24:09.349 END TEST raid_superblock_test 00:24:09.349 ************************************ 00:24:09.349 12:06:08 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:24:09.349 12:06:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:09.349 12:06:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:09.349 12:06:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:09.349 ************************************ 00:24:09.349 START TEST raid_read_error_test 00:24:09.349 ************************************ 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 4 read 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.I0bTsDfE9b 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=150559 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 150559 /var/tmp/spdk-raid.sock 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 150559 ']' 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:24:09.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:09.349 12:06:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:09.349 [2024-07-21 12:06:08.150585] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:09.349 [2024-07-21 12:06:08.150988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150559 ] 00:24:09.607 [2024-07-21 12:06:08.305294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.607 [2024-07-21 12:06:08.400521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.607 [2024-07-21 12:06:08.456909] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:10.539 12:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:10.539 12:06:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:24:10.539 12:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:10.539 12:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:10.539 BaseBdev1_malloc 00:24:10.796 12:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:11.053 true 00:24:11.053 12:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:11.053 [2024-07-21 12:06:09.892408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:11.053 [2024-07-21 12:06:09.892725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.053 [2024-07-21 12:06:09.892905] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:11.053 [2024-07-21 12:06:09.893106] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.053 [2024-07-21 12:06:09.896060] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.053 [2024-07-21 12:06:09.896243] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:11.053 BaseBdev1 00:24:11.053 12:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:11.053 12:06:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:11.310 BaseBdev2_malloc 00:24:11.310 12:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:11.567 true 00:24:11.567 12:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:11.824 
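Each base device in this error test is a three-layer stack, so that I/O failures can later be injected beneath the raid. A rough sketch of the per-device setup being performed here for BaseBdev1 (the same pattern repeats for BaseBdev2 through BaseBdev4), taken from the RPCs shown in the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # 32 MiB malloc bdev with 512-byte blocks as the backing device
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
  # error bdev wraps the malloc and exposes it as EE_BaseBdev1_malloc
  "$rpc" -s "$sock" bdev_error_create BaseBdev1_malloc
  # passthru bdev on top gives the raid a stable name, BaseBdev1
  "$rpc" -s "$sock" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1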
[2024-07-21 12:06:10.592046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:11.824 [2024-07-21 12:06:10.592465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:11.824 [2024-07-21 12:06:10.592668] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:24:11.824 [2024-07-21 12:06:10.592825] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:11.824 [2024-07-21 12:06:10.595568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:11.824 [2024-07-21 12:06:10.595757] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:11.824 BaseBdev2 00:24:11.824 12:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:11.824 12:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:12.082 BaseBdev3_malloc 00:24:12.082 12:06:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:12.339 true 00:24:12.339 12:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:12.597 [2024-07-21 12:06:11.346646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:12.597 [2024-07-21 12:06:11.347023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:12.597 [2024-07-21 12:06:11.347118] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:12.597 [2024-07-21 12:06:11.347413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:12.597 [2024-07-21 12:06:11.350190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:12.597 [2024-07-21 12:06:11.350379] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:12.597 BaseBdev3 00:24:12.597 12:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:12.597 12:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:12.854 BaseBdev4_malloc 00:24:12.854 12:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:13.112 true 00:24:13.112 12:06:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:13.369 [2024-07-21 12:06:12.058043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:13.369 [2024-07-21 12:06:12.058501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:13.369 [2024-07-21 12:06:12.058698] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:13.369 [2024-07-21 12:06:12.058889] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:13.369 
[2024-07-21 12:06:12.061662] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:13.369 [2024-07-21 12:06:12.061851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:13.369 BaseBdev4 00:24:13.369 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:13.626 [2024-07-21 12:06:12.294383] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.626 [2024-07-21 12:06:12.296931] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:13.626 [2024-07-21 12:06:12.297202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:13.626 [2024-07-21 12:06:12.297458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:13.626 [2024-07-21 12:06:12.297932] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:24:13.626 [2024-07-21 12:06:12.298101] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:13.626 [2024-07-21 12:06:12.298326] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:24:13.626 [2024-07-21 12:06:12.298868] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:24:13.626 [2024-07-21 12:06:12.299006] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:24:13.626 [2024-07-21 12:06:12.299356] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.626 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.884 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:13.884 "name": "raid_bdev1", 00:24:13.884 "uuid": "3d5d5c33-97eb-4dbb-b5d8-21989b54d79c", 00:24:13.884 "strip_size_kb": 64, 00:24:13.884 "state": "online", 00:24:13.884 "raid_level": "concat", 00:24:13.884 "superblock": true, 00:24:13.884 
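With the four passthru stacks in place, the concat volume under test is assembled with a superblock (-s), and the same get_bdevs/jq check as before confirms it reaches the "online" state with all four base bdevs discovered. A condensed sketch of those two steps, assuming the per-device stacks above already exist:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # 64 KiB strip size, concat level, superblock written to the base bdevs
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
  # expect state "online" with num_base_bdevs_discovered == 4
  "$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'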
"num_base_bdevs": 4, 00:24:13.884 "num_base_bdevs_discovered": 4, 00:24:13.884 "num_base_bdevs_operational": 4, 00:24:13.884 "base_bdevs_list": [ 00:24:13.884 { 00:24:13.884 "name": "BaseBdev1", 00:24:13.884 "uuid": "cce6bf14-40e3-5176-8219-329a3e3b74b0", 00:24:13.884 "is_configured": true, 00:24:13.884 "data_offset": 2048, 00:24:13.884 "data_size": 63488 00:24:13.884 }, 00:24:13.884 { 00:24:13.884 "name": "BaseBdev2", 00:24:13.884 "uuid": "6e9bf847-f446-55fd-b7a1-2714b9deec79", 00:24:13.884 "is_configured": true, 00:24:13.884 "data_offset": 2048, 00:24:13.884 "data_size": 63488 00:24:13.884 }, 00:24:13.884 { 00:24:13.884 "name": "BaseBdev3", 00:24:13.884 "uuid": "d70c6839-9a25-5dfd-899b-9f3424f28394", 00:24:13.884 "is_configured": true, 00:24:13.884 "data_offset": 2048, 00:24:13.884 "data_size": 63488 00:24:13.884 }, 00:24:13.884 { 00:24:13.884 "name": "BaseBdev4", 00:24:13.884 "uuid": "2e5a3756-fa81-5567-b6bd-692a8b8dde3f", 00:24:13.884 "is_configured": true, 00:24:13.884 "data_offset": 2048, 00:24:13.884 "data_size": 63488 00:24:13.884 } 00:24:13.884 ] 00:24:13.884 }' 00:24:13.884 12:06:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:13.884 12:06:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:14.448 12:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:14.448 12:06:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:14.448 [2024-07-21 12:06:13.235965] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:15.393 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:15.651 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.651 12:06:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.909 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:15.909 "name": "raid_bdev1", 00:24:15.909 "uuid": "3d5d5c33-97eb-4dbb-b5d8-21989b54d79c", 00:24:15.909 "strip_size_kb": 64, 00:24:15.909 "state": "online", 00:24:15.909 "raid_level": "concat", 00:24:15.909 "superblock": true, 00:24:15.909 "num_base_bdevs": 4, 00:24:15.909 "num_base_bdevs_discovered": 4, 00:24:15.909 "num_base_bdevs_operational": 4, 00:24:15.910 "base_bdevs_list": [ 00:24:15.910 { 00:24:15.910 "name": "BaseBdev1", 00:24:15.910 "uuid": "cce6bf14-40e3-5176-8219-329a3e3b74b0", 00:24:15.910 "is_configured": true, 00:24:15.910 "data_offset": 2048, 00:24:15.910 "data_size": 63488 00:24:15.910 }, 00:24:15.910 { 00:24:15.910 "name": "BaseBdev2", 00:24:15.910 "uuid": "6e9bf847-f446-55fd-b7a1-2714b9deec79", 00:24:15.910 "is_configured": true, 00:24:15.910 "data_offset": 2048, 00:24:15.910 "data_size": 63488 00:24:15.910 }, 00:24:15.910 { 00:24:15.910 "name": "BaseBdev3", 00:24:15.910 "uuid": "d70c6839-9a25-5dfd-899b-9f3424f28394", 00:24:15.910 "is_configured": true, 00:24:15.910 "data_offset": 2048, 00:24:15.910 "data_size": 63488 00:24:15.910 }, 00:24:15.910 { 00:24:15.910 "name": "BaseBdev4", 00:24:15.910 "uuid": "2e5a3756-fa81-5567-b6bd-692a8b8dde3f", 00:24:15.910 "is_configured": true, 00:24:15.910 "data_offset": 2048, 00:24:15.910 "data_size": 63488 00:24:15.910 } 00:24:15.910 ] 00:24:15.910 }' 00:24:15.910 12:06:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:15.910 12:06:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:16.844 [2024-07-21 12:06:15.571666] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:16.844 [2024-07-21 12:06:15.572002] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:16.844 [2024-07-21 12:06:15.575052] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:16.844 [2024-07-21 12:06:15.575255] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:16.844 [2024-07-21 12:06:15.575349] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:16.844 [2024-07-21 12:06:15.575520] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:24:16.844 0 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 150559 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 150559 ']' 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 150559 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 150559 00:24:16.844 killing process with pid 150559 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:16.844 12:06:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 150559' 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 150559 00:24:16.844 [2024-07-21 12:06:15.609004] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:16.844 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 150559 00:24:16.844 [2024-07-21 12:06:15.646595] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.I0bTsDfE9b 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:24:17.101 00:24:17.101 real 0m7.829s 00:24:17.101 user 0m12.844s 00:24:17.101 sys 0m1.045s 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:17.101 12:06:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.101 ************************************ 00:24:17.101 END TEST raid_read_error_test 00:24:17.101 ************************************ 00:24:17.101 12:06:15 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:24:17.101 12:06:15 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:17.101 12:06:15 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:17.101 12:06:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:17.359 ************************************ 00:24:17.359 START TEST raid_write_error_test 00:24:17.359 ************************************ 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test concat 4 write 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.qJZEKnPCqK 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=150757 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 150757 /var/tmp/spdk-raid.sock 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 150757 ']' 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:17.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:17.359 12:06:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:17.359 [2024-07-21 12:06:16.046161] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
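The write-error run that follows assembles the same four-layer fixture the read-error test used: each base device is a malloc bdev (32 MB, 512-byte blocks) wrapped by an error bdev and exposed through a passthru bdev, and the concat volume is then created on top of the four passthru devices. A condensed sketch of that RPC sequence, assuming the same scripts/rpc.py helper and /var/tmp/spdk-raid.sock socket used throughout this run (only the BaseBdev1 stack is shown; the trace repeats it for BaseBdev2 through BaseBdev4):

    # one stack per base device: malloc -> error -> passthru
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # concat raid with a 64k strip (-z 64) and on-disk superblock (-s) over the four passthru bdevs
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

Once bdevperf is running against the volume, a write failure is injected on the first error bdev with bdev_error_inject_error EE_BaseBdev1_malloc write failure, the fail-per-second figure is scraped from the bdevperf log (0.43 here, required to differ from 0.00 since concat has no redundancy), and the volume is torn down with bdev_raid_delete raid_bdev1.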
00:24:17.359 [2024-07-21 12:06:16.046697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150757 ] 00:24:17.359 [2024-07-21 12:06:16.214579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.617 [2024-07-21 12:06:16.307205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:17.617 [2024-07-21 12:06:16.362093] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:18.183 12:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:18.183 12:06:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:24:18.183 12:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:18.183 12:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:24:18.440 BaseBdev1_malloc 00:24:18.440 12:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:24:18.698 true 00:24:18.698 12:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:24:18.956 [2024-07-21 12:06:17.737891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:24:18.956 [2024-07-21 12:06:17.738271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:18.956 [2024-07-21 12:06:17.738474] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:18.956 [2024-07-21 12:06:17.738670] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:18.956 [2024-07-21 12:06:17.741618] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:18.956 [2024-07-21 12:06:17.741816] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:18.956 BaseBdev1 00:24:18.956 12:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:18.956 12:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:19.214 BaseBdev2_malloc 00:24:19.214 12:06:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:24:19.472 true 00:24:19.472 12:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:24:19.730 [2024-07-21 12:06:18.433324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:24:19.730 [2024-07-21 12:06:18.433722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:19.730 [2024-07-21 12:06:18.433914] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:24:19.730 [2024-07-21 12:06:18.434078] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:19.730 [2024-07-21 12:06:18.436833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:19.730 [2024-07-21 12:06:18.437020] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:19.730 BaseBdev2 00:24:19.730 12:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:19.730 12:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:19.987 BaseBdev3_malloc 00:24:19.987 12:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:24:20.244 true 00:24:20.244 12:06:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:24:20.511 [2024-07-21 12:06:19.162489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:24:20.511 [2024-07-21 12:06:19.162922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.511 [2024-07-21 12:06:19.163102] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:24:20.511 [2024-07-21 12:06:19.163293] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.511 [2024-07-21 12:06:19.166075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.511 [2024-07-21 12:06:19.166264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:20.511 BaseBdev3 00:24:20.511 12:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:24:20.511 12:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:20.792 BaseBdev4_malloc 00:24:20.792 12:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:24:20.792 true 00:24:20.792 12:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:24:21.061 [2024-07-21 12:06:19.857650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:24:21.061 [2024-07-21 12:06:19.857947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.061 [2024-07-21 12:06:19.858142] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:24:21.061 [2024-07-21 12:06:19.858317] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.061 [2024-07-21 12:06:19.861129] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.061 [2024-07-21 12:06:19.861317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:21.061 BaseBdev4 00:24:21.061 12:06:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:24:21.318 [2024-07-21 12:06:20.105899] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:21.318 [2024-07-21 12:06:20.108502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:21.318 [2024-07-21 12:06:20.108737] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:21.318 [2024-07-21 12:06:20.108939] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:21.318 [2024-07-21 12:06:20.109355] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:24:21.318 [2024-07-21 12:06:20.109490] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:24:21.318 [2024-07-21 12:06:20.109745] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:24:21.318 [2024-07-21 12:06:20.110349] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:24:21.318 [2024-07-21 12:06:20.110495] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:24:21.318 [2024-07-21 12:06:20.110859] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.318 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:21.575 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:21.575 "name": "raid_bdev1", 00:24:21.575 "uuid": "05c068a9-660e-4d3c-9bf3-fa8e630e9290", 00:24:21.575 "strip_size_kb": 64, 00:24:21.575 "state": "online", 00:24:21.575 "raid_level": "concat", 00:24:21.575 "superblock": true, 00:24:21.575 "num_base_bdevs": 4, 00:24:21.575 "num_base_bdevs_discovered": 4, 00:24:21.575 "num_base_bdevs_operational": 4, 00:24:21.575 "base_bdevs_list": [ 00:24:21.575 { 00:24:21.575 "name": "BaseBdev1", 00:24:21.575 "uuid": "66fc3e73-7425-509a-9bfb-e864e0235260", 00:24:21.575 "is_configured": true, 00:24:21.575 "data_offset": 2048, 00:24:21.575 "data_size": 63488 00:24:21.575 }, 00:24:21.575 { 
00:24:21.575 "name": "BaseBdev2", 00:24:21.575 "uuid": "20980b54-7a8f-51e1-bc16-bac81d85486e", 00:24:21.575 "is_configured": true, 00:24:21.575 "data_offset": 2048, 00:24:21.575 "data_size": 63488 00:24:21.575 }, 00:24:21.575 { 00:24:21.575 "name": "BaseBdev3", 00:24:21.575 "uuid": "488cd6c4-1053-56d7-a2e7-6ac46a730edf", 00:24:21.575 "is_configured": true, 00:24:21.575 "data_offset": 2048, 00:24:21.575 "data_size": 63488 00:24:21.575 }, 00:24:21.575 { 00:24:21.575 "name": "BaseBdev4", 00:24:21.575 "uuid": "fd848661-80a7-5f28-93ea-ea88f3d206f3", 00:24:21.575 "is_configured": true, 00:24:21.575 "data_offset": 2048, 00:24:21.575 "data_size": 63488 00:24:21.575 } 00:24:21.575 ] 00:24:21.575 }' 00:24:21.575 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:21.575 12:06:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:22.145 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:24:22.145 12:06:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:24:22.402 [2024-07-21 12:06:21.071579] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:23.332 12:06:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.590 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:23.848 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:23.848 "name": "raid_bdev1", 00:24:23.848 "uuid": "05c068a9-660e-4d3c-9bf3-fa8e630e9290", 00:24:23.848 "strip_size_kb": 64, 00:24:23.848 "state": "online", 00:24:23.848 
"raid_level": "concat", 00:24:23.848 "superblock": true, 00:24:23.848 "num_base_bdevs": 4, 00:24:23.848 "num_base_bdevs_discovered": 4, 00:24:23.848 "num_base_bdevs_operational": 4, 00:24:23.848 "base_bdevs_list": [ 00:24:23.848 { 00:24:23.848 "name": "BaseBdev1", 00:24:23.848 "uuid": "66fc3e73-7425-509a-9bfb-e864e0235260", 00:24:23.848 "is_configured": true, 00:24:23.848 "data_offset": 2048, 00:24:23.848 "data_size": 63488 00:24:23.848 }, 00:24:23.848 { 00:24:23.848 "name": "BaseBdev2", 00:24:23.848 "uuid": "20980b54-7a8f-51e1-bc16-bac81d85486e", 00:24:23.848 "is_configured": true, 00:24:23.848 "data_offset": 2048, 00:24:23.848 "data_size": 63488 00:24:23.848 }, 00:24:23.848 { 00:24:23.848 "name": "BaseBdev3", 00:24:23.848 "uuid": "488cd6c4-1053-56d7-a2e7-6ac46a730edf", 00:24:23.848 "is_configured": true, 00:24:23.848 "data_offset": 2048, 00:24:23.848 "data_size": 63488 00:24:23.848 }, 00:24:23.848 { 00:24:23.848 "name": "BaseBdev4", 00:24:23.848 "uuid": "fd848661-80a7-5f28-93ea-ea88f3d206f3", 00:24:23.848 "is_configured": true, 00:24:23.848 "data_offset": 2048, 00:24:23.848 "data_size": 63488 00:24:23.848 } 00:24:23.848 ] 00:24:23.848 }' 00:24:23.848 12:06:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:23.848 12:06:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.413 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:24.671 [2024-07-21 12:06:23.403114] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:24.671 [2024-07-21 12:06:23.403364] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:24.671 [2024-07-21 12:06:23.406363] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:24.671 [2024-07-21 12:06:23.406623] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:24.671 [2024-07-21 12:06:23.406801] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:24.671 [2024-07-21 12:06:23.406925] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:24:24.671 0 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 150757 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 150757 ']' 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 150757 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 150757 00:24:24.671 killing process with pid 150757 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 150757' 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 150757 00:24:24.671 [2024-07-21 12:06:23.442306] 
bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:24.671 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 150757 00:24:24.671 [2024-07-21 12:06:23.482682] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.qJZEKnPCqK 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.43 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.43 != \0\.\0\0 ]] 00:24:24.939 00:24:24.939 real 0m7.790s 00:24:24.939 user 0m12.766s 00:24:24.939 sys 0m1.007s 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:24.939 12:06:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:24:24.939 ************************************ 00:24:24.939 END TEST raid_write_error_test 00:24:24.939 ************************************ 00:24:24.939 12:06:23 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:24:24.939 12:06:23 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:24:24.939 12:06:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:24.939 12:06:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:24.939 12:06:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:25.197 ************************************ 00:24:25.197 START TEST raid_state_function_test 00:24:25.197 ************************************ 00:24:25.197 12:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 false 00:24:25.197 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:24:25.197 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:25.197 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:24:25.197 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:25.197 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:25.197 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:25.197 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:25.197 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= 
num_base_bdevs )) 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=150968 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 150968' 00:24:25.198 Process raid pid: 150968 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 150968 /var/tmp/spdk-raid.sock 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 150968 ']' 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:25.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:25.198 12:06:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:25.198 [2024-07-21 12:06:23.877545] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
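The state-function test that follows drives the configuring-to-online transition from the other direction: the raid1 volume Existed_Raid is created while none of its base bdevs exist yet (hence the "doesn't exist now" notices below), so it sits in the configuring state with num_base_bdevs_discovered 0; as the BaseBdevN malloc bdevs are created one by one, the discovered count in the dumps climbs from 0 to 4 and the array finally reports online. A minimal sketch of one such step plus the check the trace runs after it, assuming the same RPC socket and the jq filter used by verify_raid_bdev_state:

    # create the array first; raid1 takes no strip size and this test uses no superblock
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # add one base device, then re-read the raid state
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid")'

The fields compared against the expected values are the ones visible in the dumps below: state, raid_level, strip_size_kb (0 for raid1), num_base_bdevs and num_base_bdevs_discovered.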
00:24:25.198 [2024-07-21 12:06:23.877817] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.198 [2024-07-21 12:06:24.040356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.455 [2024-07-21 12:06:24.137562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.455 [2024-07-21 12:06:24.194077] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:26.019 12:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:26.019 12:06:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:24:26.019 12:06:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:26.277 [2024-07-21 12:06:25.045211] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:26.277 [2024-07-21 12:06:25.045325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:26.277 [2024-07-21 12:06:25.045357] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:26.277 [2024-07-21 12:06:25.045377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:26.277 [2024-07-21 12:06:25.045386] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:26.277 [2024-07-21 12:06:25.045428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:26.277 [2024-07-21 12:06:25.045438] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:26.277 [2024-07-21 12:06:25.045462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.277 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:24:26.534 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:26.534 "name": "Existed_Raid", 00:24:26.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.534 "strip_size_kb": 0, 00:24:26.534 "state": "configuring", 00:24:26.535 "raid_level": "raid1", 00:24:26.535 "superblock": false, 00:24:26.535 "num_base_bdevs": 4, 00:24:26.535 "num_base_bdevs_discovered": 0, 00:24:26.535 "num_base_bdevs_operational": 4, 00:24:26.535 "base_bdevs_list": [ 00:24:26.535 { 00:24:26.535 "name": "BaseBdev1", 00:24:26.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.535 "is_configured": false, 00:24:26.535 "data_offset": 0, 00:24:26.535 "data_size": 0 00:24:26.535 }, 00:24:26.535 { 00:24:26.535 "name": "BaseBdev2", 00:24:26.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.535 "is_configured": false, 00:24:26.535 "data_offset": 0, 00:24:26.535 "data_size": 0 00:24:26.535 }, 00:24:26.535 { 00:24:26.535 "name": "BaseBdev3", 00:24:26.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.535 "is_configured": false, 00:24:26.535 "data_offset": 0, 00:24:26.535 "data_size": 0 00:24:26.535 }, 00:24:26.535 { 00:24:26.535 "name": "BaseBdev4", 00:24:26.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.535 "is_configured": false, 00:24:26.535 "data_offset": 0, 00:24:26.535 "data_size": 0 00:24:26.535 } 00:24:26.535 ] 00:24:26.535 }' 00:24:26.535 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:26.535 12:06:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:27.467 12:06:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:27.467 [2024-07-21 12:06:26.257277] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:27.467 [2024-07-21 12:06:26.257347] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:24:27.467 12:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:27.725 [2024-07-21 12:06:26.533355] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:27.725 [2024-07-21 12:06:26.533459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:27.725 [2024-07-21 12:06:26.533488] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:27.725 [2024-07-21 12:06:26.533555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:27.725 [2024-07-21 12:06:26.533567] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:27.725 [2024-07-21 12:06:26.533585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:27.725 [2024-07-21 12:06:26.533593] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:27.725 [2024-07-21 12:06:26.533617] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:27.725 12:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev1 00:24:27.983 [2024-07-21 12:06:26.808770] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.983 BaseBdev1 00:24:27.983 12:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:24:27.983 12:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:27.983 12:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:27.983 12:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:27.983 12:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:27.983 12:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:27.983 12:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:28.241 12:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:28.497 [ 00:24:28.497 { 00:24:28.497 "name": "BaseBdev1", 00:24:28.497 "aliases": [ 00:24:28.497 "2e28805c-9657-4d36-8c24-9e71f7b2e3ff" 00:24:28.497 ], 00:24:28.498 "product_name": "Malloc disk", 00:24:28.498 "block_size": 512, 00:24:28.498 "num_blocks": 65536, 00:24:28.498 "uuid": "2e28805c-9657-4d36-8c24-9e71f7b2e3ff", 00:24:28.498 "assigned_rate_limits": { 00:24:28.498 "rw_ios_per_sec": 0, 00:24:28.498 "rw_mbytes_per_sec": 0, 00:24:28.498 "r_mbytes_per_sec": 0, 00:24:28.498 "w_mbytes_per_sec": 0 00:24:28.498 }, 00:24:28.498 "claimed": true, 00:24:28.498 "claim_type": "exclusive_write", 00:24:28.498 "zoned": false, 00:24:28.498 "supported_io_types": { 00:24:28.498 "read": true, 00:24:28.498 "write": true, 00:24:28.498 "unmap": true, 00:24:28.498 "write_zeroes": true, 00:24:28.498 "flush": true, 00:24:28.498 "reset": true, 00:24:28.498 "compare": false, 00:24:28.498 "compare_and_write": false, 00:24:28.498 "abort": true, 00:24:28.498 "nvme_admin": false, 00:24:28.498 "nvme_io": false 00:24:28.498 }, 00:24:28.498 "memory_domains": [ 00:24:28.498 { 00:24:28.498 "dma_device_id": "system", 00:24:28.498 "dma_device_type": 1 00:24:28.498 }, 00:24:28.498 { 00:24:28.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.498 "dma_device_type": 2 00:24:28.498 } 00:24:28.498 ], 00:24:28.498 "driver_specific": {} 00:24:28.498 } 00:24:28.498 ] 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:28.498 
12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.498 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.755 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:28.755 "name": "Existed_Raid", 00:24:28.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.755 "strip_size_kb": 0, 00:24:28.755 "state": "configuring", 00:24:28.755 "raid_level": "raid1", 00:24:28.755 "superblock": false, 00:24:28.755 "num_base_bdevs": 4, 00:24:28.755 "num_base_bdevs_discovered": 1, 00:24:28.755 "num_base_bdevs_operational": 4, 00:24:28.755 "base_bdevs_list": [ 00:24:28.755 { 00:24:28.755 "name": "BaseBdev1", 00:24:28.755 "uuid": "2e28805c-9657-4d36-8c24-9e71f7b2e3ff", 00:24:28.755 "is_configured": true, 00:24:28.755 "data_offset": 0, 00:24:28.755 "data_size": 65536 00:24:28.755 }, 00:24:28.755 { 00:24:28.755 "name": "BaseBdev2", 00:24:28.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.755 "is_configured": false, 00:24:28.755 "data_offset": 0, 00:24:28.755 "data_size": 0 00:24:28.755 }, 00:24:28.755 { 00:24:28.755 "name": "BaseBdev3", 00:24:28.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.755 "is_configured": false, 00:24:28.755 "data_offset": 0, 00:24:28.755 "data_size": 0 00:24:28.755 }, 00:24:28.755 { 00:24:28.755 "name": "BaseBdev4", 00:24:28.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.755 "is_configured": false, 00:24:28.755 "data_offset": 0, 00:24:28.755 "data_size": 0 00:24:28.755 } 00:24:28.755 ] 00:24:28.755 }' 00:24:28.755 12:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:28.755 12:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:29.331 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:29.588 [2024-07-21 12:06:28.357184] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:29.588 [2024-07-21 12:06:28.357291] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:29.588 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:29.846 [2024-07-21 12:06:28.597282] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:29.846 [2024-07-21 12:06:28.599625] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:29.846 [2024-07-21 12:06:28.599734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:29.846 [2024-07-21 12:06:28.599749] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:29.846 [2024-07-21 12:06:28.599778] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:29.846 [2024-07-21 12:06:28.599788] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:29.846 [2024-07-21 12:06:28.599806] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:29.846 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.104 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:30.104 "name": "Existed_Raid", 00:24:30.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.104 "strip_size_kb": 0, 00:24:30.104 "state": "configuring", 00:24:30.104 "raid_level": "raid1", 00:24:30.104 "superblock": false, 00:24:30.104 "num_base_bdevs": 4, 00:24:30.104 "num_base_bdevs_discovered": 1, 00:24:30.104 "num_base_bdevs_operational": 4, 00:24:30.104 "base_bdevs_list": [ 00:24:30.104 { 00:24:30.104 "name": "BaseBdev1", 00:24:30.104 "uuid": "2e28805c-9657-4d36-8c24-9e71f7b2e3ff", 00:24:30.104 "is_configured": true, 00:24:30.104 "data_offset": 0, 00:24:30.104 "data_size": 65536 00:24:30.104 }, 00:24:30.104 { 00:24:30.104 "name": "BaseBdev2", 00:24:30.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.104 "is_configured": false, 00:24:30.104 "data_offset": 0, 00:24:30.104 "data_size": 0 00:24:30.104 }, 00:24:30.104 { 00:24:30.104 "name": "BaseBdev3", 00:24:30.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.104 "is_configured": false, 00:24:30.104 "data_offset": 0, 00:24:30.104 "data_size": 0 00:24:30.104 }, 00:24:30.104 { 00:24:30.104 "name": "BaseBdev4", 00:24:30.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.104 "is_configured": false, 00:24:30.104 "data_offset": 0, 00:24:30.104 "data_size": 0 00:24:30.104 } 00:24:30.104 ] 00:24:30.104 }' 00:24:30.104 12:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:30.104 
12:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:30.669 12:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:30.927 [2024-07-21 12:06:29.738923] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:30.927 BaseBdev2 00:24:30.927 12:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:24:30.927 12:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:30.927 12:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:30.927 12:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:30.927 12:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:30.927 12:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:30.927 12:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:31.185 12:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:31.443 [ 00:24:31.443 { 00:24:31.443 "name": "BaseBdev2", 00:24:31.443 "aliases": [ 00:24:31.443 "97b9221c-133d-425f-81af-2ae27f18b527" 00:24:31.443 ], 00:24:31.443 "product_name": "Malloc disk", 00:24:31.443 "block_size": 512, 00:24:31.443 "num_blocks": 65536, 00:24:31.443 "uuid": "97b9221c-133d-425f-81af-2ae27f18b527", 00:24:31.443 "assigned_rate_limits": { 00:24:31.443 "rw_ios_per_sec": 0, 00:24:31.443 "rw_mbytes_per_sec": 0, 00:24:31.443 "r_mbytes_per_sec": 0, 00:24:31.443 "w_mbytes_per_sec": 0 00:24:31.443 }, 00:24:31.443 "claimed": true, 00:24:31.443 "claim_type": "exclusive_write", 00:24:31.443 "zoned": false, 00:24:31.443 "supported_io_types": { 00:24:31.443 "read": true, 00:24:31.443 "write": true, 00:24:31.443 "unmap": true, 00:24:31.443 "write_zeroes": true, 00:24:31.443 "flush": true, 00:24:31.443 "reset": true, 00:24:31.443 "compare": false, 00:24:31.443 "compare_and_write": false, 00:24:31.443 "abort": true, 00:24:31.443 "nvme_admin": false, 00:24:31.443 "nvme_io": false 00:24:31.443 }, 00:24:31.443 "memory_domains": [ 00:24:31.443 { 00:24:31.443 "dma_device_id": "system", 00:24:31.443 "dma_device_type": 1 00:24:31.443 }, 00:24:31.443 { 00:24:31.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.443 "dma_device_type": 2 00:24:31.443 } 00:24:31.443 ], 00:24:31.443 "driver_specific": {} 00:24:31.443 } 00:24:31.443 ] 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 
00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.443 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:31.701 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:31.701 "name": "Existed_Raid", 00:24:31.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.701 "strip_size_kb": 0, 00:24:31.701 "state": "configuring", 00:24:31.701 "raid_level": "raid1", 00:24:31.701 "superblock": false, 00:24:31.701 "num_base_bdevs": 4, 00:24:31.701 "num_base_bdevs_discovered": 2, 00:24:31.701 "num_base_bdevs_operational": 4, 00:24:31.701 "base_bdevs_list": [ 00:24:31.701 { 00:24:31.701 "name": "BaseBdev1", 00:24:31.701 "uuid": "2e28805c-9657-4d36-8c24-9e71f7b2e3ff", 00:24:31.701 "is_configured": true, 00:24:31.701 "data_offset": 0, 00:24:31.701 "data_size": 65536 00:24:31.701 }, 00:24:31.701 { 00:24:31.701 "name": "BaseBdev2", 00:24:31.701 "uuid": "97b9221c-133d-425f-81af-2ae27f18b527", 00:24:31.701 "is_configured": true, 00:24:31.701 "data_offset": 0, 00:24:31.701 "data_size": 65536 00:24:31.701 }, 00:24:31.701 { 00:24:31.701 "name": "BaseBdev3", 00:24:31.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.701 "is_configured": false, 00:24:31.701 "data_offset": 0, 00:24:31.701 "data_size": 0 00:24:31.701 }, 00:24:31.701 { 00:24:31.701 "name": "BaseBdev4", 00:24:31.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:31.701 "is_configured": false, 00:24:31.701 "data_offset": 0, 00:24:31.701 "data_size": 0 00:24:31.701 } 00:24:31.701 ] 00:24:31.701 }' 00:24:31.701 12:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:31.701 12:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:32.268 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:32.834 [2024-07-21 12:06:31.408304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:32.834 BaseBdev3 00:24:32.834 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:24:32.834 12:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:32.834 12:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:32.834 12:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:32.834 12:06:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:32.834 12:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:32.834 12:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:33.092 12:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:33.092 [ 00:24:33.092 { 00:24:33.092 "name": "BaseBdev3", 00:24:33.092 "aliases": [ 00:24:33.092 "e3489a1d-4f56-4cab-b4ab-0c8545e2af3a" 00:24:33.092 ], 00:24:33.092 "product_name": "Malloc disk", 00:24:33.092 "block_size": 512, 00:24:33.092 "num_blocks": 65536, 00:24:33.092 "uuid": "e3489a1d-4f56-4cab-b4ab-0c8545e2af3a", 00:24:33.092 "assigned_rate_limits": { 00:24:33.092 "rw_ios_per_sec": 0, 00:24:33.092 "rw_mbytes_per_sec": 0, 00:24:33.092 "r_mbytes_per_sec": 0, 00:24:33.092 "w_mbytes_per_sec": 0 00:24:33.092 }, 00:24:33.092 "claimed": true, 00:24:33.092 "claim_type": "exclusive_write", 00:24:33.092 "zoned": false, 00:24:33.092 "supported_io_types": { 00:24:33.092 "read": true, 00:24:33.092 "write": true, 00:24:33.092 "unmap": true, 00:24:33.092 "write_zeroes": true, 00:24:33.092 "flush": true, 00:24:33.092 "reset": true, 00:24:33.092 "compare": false, 00:24:33.092 "compare_and_write": false, 00:24:33.092 "abort": true, 00:24:33.092 "nvme_admin": false, 00:24:33.092 "nvme_io": false 00:24:33.092 }, 00:24:33.092 "memory_domains": [ 00:24:33.092 { 00:24:33.092 "dma_device_id": "system", 00:24:33.093 "dma_device_type": 1 00:24:33.093 }, 00:24:33.093 { 00:24:33.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.093 "dma_device_type": 2 00:24:33.093 } 00:24:33.093 ], 00:24:33.093 "driver_specific": {} 00:24:33.093 } 00:24:33.093 ] 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.093 12:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.350 12:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:33.350 "name": "Existed_Raid", 00:24:33.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.350 "strip_size_kb": 0, 00:24:33.350 "state": "configuring", 00:24:33.350 "raid_level": "raid1", 00:24:33.350 "superblock": false, 00:24:33.350 "num_base_bdevs": 4, 00:24:33.350 "num_base_bdevs_discovered": 3, 00:24:33.350 "num_base_bdevs_operational": 4, 00:24:33.350 "base_bdevs_list": [ 00:24:33.350 { 00:24:33.350 "name": "BaseBdev1", 00:24:33.350 "uuid": "2e28805c-9657-4d36-8c24-9e71f7b2e3ff", 00:24:33.350 "is_configured": true, 00:24:33.351 "data_offset": 0, 00:24:33.351 "data_size": 65536 00:24:33.351 }, 00:24:33.351 { 00:24:33.351 "name": "BaseBdev2", 00:24:33.351 "uuid": "97b9221c-133d-425f-81af-2ae27f18b527", 00:24:33.351 "is_configured": true, 00:24:33.351 "data_offset": 0, 00:24:33.351 "data_size": 65536 00:24:33.351 }, 00:24:33.351 { 00:24:33.351 "name": "BaseBdev3", 00:24:33.351 "uuid": "e3489a1d-4f56-4cab-b4ab-0c8545e2af3a", 00:24:33.351 "is_configured": true, 00:24:33.351 "data_offset": 0, 00:24:33.351 "data_size": 65536 00:24:33.351 }, 00:24:33.351 { 00:24:33.351 "name": "BaseBdev4", 00:24:33.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.351 "is_configured": false, 00:24:33.351 "data_offset": 0, 00:24:33.351 "data_size": 0 00:24:33.351 } 00:24:33.351 ] 00:24:33.351 }' 00:24:33.351 12:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:33.351 12:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:34.282 12:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:34.282 [2024-07-21 12:06:33.069814] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:34.282 [2024-07-21 12:06:33.069906] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:24:34.282 [2024-07-21 12:06:33.069917] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:34.282 [2024-07-21 12:06:33.070063] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:34.282 [2024-07-21 12:06:33.070505] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:24:34.282 [2024-07-21 12:06:33.070531] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:24:34.282 [2024-07-21 12:06:33.070828] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.282 BaseBdev4 00:24:34.282 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:24:34.282 12:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:34.282 12:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:34.282 12:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:34.282 12:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:34.282 12:06:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:34.282 12:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.539 12:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:34.797 [ 00:24:34.797 { 00:24:34.797 "name": "BaseBdev4", 00:24:34.797 "aliases": [ 00:24:34.797 "8913432a-2b91-4251-a593-8b2407f44b1d" 00:24:34.797 ], 00:24:34.797 "product_name": "Malloc disk", 00:24:34.797 "block_size": 512, 00:24:34.797 "num_blocks": 65536, 00:24:34.797 "uuid": "8913432a-2b91-4251-a593-8b2407f44b1d", 00:24:34.797 "assigned_rate_limits": { 00:24:34.797 "rw_ios_per_sec": 0, 00:24:34.797 "rw_mbytes_per_sec": 0, 00:24:34.797 "r_mbytes_per_sec": 0, 00:24:34.797 "w_mbytes_per_sec": 0 00:24:34.797 }, 00:24:34.797 "claimed": true, 00:24:34.797 "claim_type": "exclusive_write", 00:24:34.797 "zoned": false, 00:24:34.797 "supported_io_types": { 00:24:34.797 "read": true, 00:24:34.797 "write": true, 00:24:34.797 "unmap": true, 00:24:34.797 "write_zeroes": true, 00:24:34.797 "flush": true, 00:24:34.797 "reset": true, 00:24:34.797 "compare": false, 00:24:34.797 "compare_and_write": false, 00:24:34.797 "abort": true, 00:24:34.797 "nvme_admin": false, 00:24:34.797 "nvme_io": false 00:24:34.797 }, 00:24:34.797 "memory_domains": [ 00:24:34.797 { 00:24:34.797 "dma_device_id": "system", 00:24:34.797 "dma_device_type": 1 00:24:34.797 }, 00:24:34.797 { 00:24:34.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:34.797 "dma_device_type": 2 00:24:34.797 } 00:24:34.797 ], 00:24:34.797 "driver_specific": {} 00:24:34.797 } 00:24:34.797 ] 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:34.797 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:24:35.055 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:35.055 "name": "Existed_Raid", 00:24:35.055 "uuid": "12c0c3e9-1085-4c96-ac6b-7e40d2ad7794", 00:24:35.055 "strip_size_kb": 0, 00:24:35.055 "state": "online", 00:24:35.055 "raid_level": "raid1", 00:24:35.055 "superblock": false, 00:24:35.055 "num_base_bdevs": 4, 00:24:35.055 "num_base_bdevs_discovered": 4, 00:24:35.055 "num_base_bdevs_operational": 4, 00:24:35.055 "base_bdevs_list": [ 00:24:35.055 { 00:24:35.055 "name": "BaseBdev1", 00:24:35.055 "uuid": "2e28805c-9657-4d36-8c24-9e71f7b2e3ff", 00:24:35.055 "is_configured": true, 00:24:35.055 "data_offset": 0, 00:24:35.055 "data_size": 65536 00:24:35.055 }, 00:24:35.055 { 00:24:35.055 "name": "BaseBdev2", 00:24:35.055 "uuid": "97b9221c-133d-425f-81af-2ae27f18b527", 00:24:35.055 "is_configured": true, 00:24:35.055 "data_offset": 0, 00:24:35.055 "data_size": 65536 00:24:35.055 }, 00:24:35.055 { 00:24:35.055 "name": "BaseBdev3", 00:24:35.055 "uuid": "e3489a1d-4f56-4cab-b4ab-0c8545e2af3a", 00:24:35.055 "is_configured": true, 00:24:35.055 "data_offset": 0, 00:24:35.055 "data_size": 65536 00:24:35.055 }, 00:24:35.055 { 00:24:35.055 "name": "BaseBdev4", 00:24:35.055 "uuid": "8913432a-2b91-4251-a593-8b2407f44b1d", 00:24:35.055 "is_configured": true, 00:24:35.055 "data_offset": 0, 00:24:35.055 "data_size": 65536 00:24:35.055 } 00:24:35.055 ] 00:24:35.055 }' 00:24:35.055 12:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:35.055 12:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:35.988 [2024-07-21 12:06:34.766578] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:35.988 "name": "Existed_Raid", 00:24:35.988 "aliases": [ 00:24:35.988 "12c0c3e9-1085-4c96-ac6b-7e40d2ad7794" 00:24:35.988 ], 00:24:35.988 "product_name": "Raid Volume", 00:24:35.988 "block_size": 512, 00:24:35.988 "num_blocks": 65536, 00:24:35.988 "uuid": "12c0c3e9-1085-4c96-ac6b-7e40d2ad7794", 00:24:35.988 "assigned_rate_limits": { 00:24:35.988 "rw_ios_per_sec": 0, 00:24:35.988 "rw_mbytes_per_sec": 0, 00:24:35.988 "r_mbytes_per_sec": 0, 00:24:35.988 "w_mbytes_per_sec": 0 00:24:35.988 }, 00:24:35.988 "claimed": false, 00:24:35.988 "zoned": false, 00:24:35.988 "supported_io_types": { 00:24:35.988 "read": true, 00:24:35.988 "write": true, 00:24:35.988 "unmap": false, 00:24:35.988 "write_zeroes": 
true, 00:24:35.988 "flush": false, 00:24:35.988 "reset": true, 00:24:35.988 "compare": false, 00:24:35.988 "compare_and_write": false, 00:24:35.988 "abort": false, 00:24:35.988 "nvme_admin": false, 00:24:35.988 "nvme_io": false 00:24:35.988 }, 00:24:35.988 "memory_domains": [ 00:24:35.988 { 00:24:35.988 "dma_device_id": "system", 00:24:35.988 "dma_device_type": 1 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.988 "dma_device_type": 2 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "dma_device_id": "system", 00:24:35.988 "dma_device_type": 1 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.988 "dma_device_type": 2 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "dma_device_id": "system", 00:24:35.988 "dma_device_type": 1 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.988 "dma_device_type": 2 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "dma_device_id": "system", 00:24:35.988 "dma_device_type": 1 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.988 "dma_device_type": 2 00:24:35.988 } 00:24:35.988 ], 00:24:35.988 "driver_specific": { 00:24:35.988 "raid": { 00:24:35.988 "uuid": "12c0c3e9-1085-4c96-ac6b-7e40d2ad7794", 00:24:35.988 "strip_size_kb": 0, 00:24:35.988 "state": "online", 00:24:35.988 "raid_level": "raid1", 00:24:35.988 "superblock": false, 00:24:35.988 "num_base_bdevs": 4, 00:24:35.988 "num_base_bdevs_discovered": 4, 00:24:35.988 "num_base_bdevs_operational": 4, 00:24:35.988 "base_bdevs_list": [ 00:24:35.988 { 00:24:35.988 "name": "BaseBdev1", 00:24:35.988 "uuid": "2e28805c-9657-4d36-8c24-9e71f7b2e3ff", 00:24:35.988 "is_configured": true, 00:24:35.988 "data_offset": 0, 00:24:35.988 "data_size": 65536 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "name": "BaseBdev2", 00:24:35.988 "uuid": "97b9221c-133d-425f-81af-2ae27f18b527", 00:24:35.988 "is_configured": true, 00:24:35.988 "data_offset": 0, 00:24:35.988 "data_size": 65536 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "name": "BaseBdev3", 00:24:35.988 "uuid": "e3489a1d-4f56-4cab-b4ab-0c8545e2af3a", 00:24:35.988 "is_configured": true, 00:24:35.988 "data_offset": 0, 00:24:35.988 "data_size": 65536 00:24:35.988 }, 00:24:35.988 { 00:24:35.988 "name": "BaseBdev4", 00:24:35.988 "uuid": "8913432a-2b91-4251-a593-8b2407f44b1d", 00:24:35.988 "is_configured": true, 00:24:35.988 "data_offset": 0, 00:24:35.988 "data_size": 65536 00:24:35.988 } 00:24:35.988 ] 00:24:35.988 } 00:24:35.988 } 00:24:35.988 }' 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:24:35.988 BaseBdev2 00:24:35.988 BaseBdev3 00:24:35.988 BaseBdev4' 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:24:35.988 12:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:36.554 "name": "BaseBdev1", 00:24:36.554 "aliases": [ 00:24:36.554 "2e28805c-9657-4d36-8c24-9e71f7b2e3ff" 00:24:36.554 ], 
00:24:36.554 "product_name": "Malloc disk", 00:24:36.554 "block_size": 512, 00:24:36.554 "num_blocks": 65536, 00:24:36.554 "uuid": "2e28805c-9657-4d36-8c24-9e71f7b2e3ff", 00:24:36.554 "assigned_rate_limits": { 00:24:36.554 "rw_ios_per_sec": 0, 00:24:36.554 "rw_mbytes_per_sec": 0, 00:24:36.554 "r_mbytes_per_sec": 0, 00:24:36.554 "w_mbytes_per_sec": 0 00:24:36.554 }, 00:24:36.554 "claimed": true, 00:24:36.554 "claim_type": "exclusive_write", 00:24:36.554 "zoned": false, 00:24:36.554 "supported_io_types": { 00:24:36.554 "read": true, 00:24:36.554 "write": true, 00:24:36.554 "unmap": true, 00:24:36.554 "write_zeroes": true, 00:24:36.554 "flush": true, 00:24:36.554 "reset": true, 00:24:36.554 "compare": false, 00:24:36.554 "compare_and_write": false, 00:24:36.554 "abort": true, 00:24:36.554 "nvme_admin": false, 00:24:36.554 "nvme_io": false 00:24:36.554 }, 00:24:36.554 "memory_domains": [ 00:24:36.554 { 00:24:36.554 "dma_device_id": "system", 00:24:36.554 "dma_device_type": 1 00:24:36.554 }, 00:24:36.554 { 00:24:36.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:36.554 "dma_device_type": 2 00:24:36.554 } 00:24:36.554 ], 00:24:36.554 "driver_specific": {} 00:24:36.554 }' 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:36.554 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:36.812 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:36.812 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:36.812 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:36.812 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:36.812 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:37.068 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:37.068 "name": "BaseBdev2", 00:24:37.068 "aliases": [ 00:24:37.068 "97b9221c-133d-425f-81af-2ae27f18b527" 00:24:37.068 ], 00:24:37.068 "product_name": "Malloc disk", 00:24:37.068 "block_size": 512, 00:24:37.068 "num_blocks": 65536, 00:24:37.068 "uuid": "97b9221c-133d-425f-81af-2ae27f18b527", 00:24:37.068 "assigned_rate_limits": { 00:24:37.068 "rw_ios_per_sec": 0, 00:24:37.068 "rw_mbytes_per_sec": 0, 00:24:37.068 "r_mbytes_per_sec": 0, 00:24:37.068 "w_mbytes_per_sec": 0 00:24:37.068 }, 00:24:37.068 "claimed": true, 00:24:37.068 "claim_type": "exclusive_write", 00:24:37.069 "zoned": false, 00:24:37.069 
"supported_io_types": { 00:24:37.069 "read": true, 00:24:37.069 "write": true, 00:24:37.069 "unmap": true, 00:24:37.069 "write_zeroes": true, 00:24:37.069 "flush": true, 00:24:37.069 "reset": true, 00:24:37.069 "compare": false, 00:24:37.069 "compare_and_write": false, 00:24:37.069 "abort": true, 00:24:37.069 "nvme_admin": false, 00:24:37.069 "nvme_io": false 00:24:37.069 }, 00:24:37.069 "memory_domains": [ 00:24:37.069 { 00:24:37.069 "dma_device_id": "system", 00:24:37.069 "dma_device_type": 1 00:24:37.069 }, 00:24:37.069 { 00:24:37.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.069 "dma_device_type": 2 00:24:37.069 } 00:24:37.069 ], 00:24:37.069 "driver_specific": {} 00:24:37.069 }' 00:24:37.069 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.069 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.069 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:37.069 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.069 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.326 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:37.326 12:06:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.326 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.326 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:37.326 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.326 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:37.326 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:37.326 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:37.326 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:37.326 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:37.584 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:37.584 "name": "BaseBdev3", 00:24:37.584 "aliases": [ 00:24:37.584 "e3489a1d-4f56-4cab-b4ab-0c8545e2af3a" 00:24:37.584 ], 00:24:37.584 "product_name": "Malloc disk", 00:24:37.584 "block_size": 512, 00:24:37.584 "num_blocks": 65536, 00:24:37.584 "uuid": "e3489a1d-4f56-4cab-b4ab-0c8545e2af3a", 00:24:37.584 "assigned_rate_limits": { 00:24:37.584 "rw_ios_per_sec": 0, 00:24:37.584 "rw_mbytes_per_sec": 0, 00:24:37.584 "r_mbytes_per_sec": 0, 00:24:37.584 "w_mbytes_per_sec": 0 00:24:37.584 }, 00:24:37.584 "claimed": true, 00:24:37.584 "claim_type": "exclusive_write", 00:24:37.584 "zoned": false, 00:24:37.584 "supported_io_types": { 00:24:37.584 "read": true, 00:24:37.584 "write": true, 00:24:37.584 "unmap": true, 00:24:37.584 "write_zeroes": true, 00:24:37.584 "flush": true, 00:24:37.584 "reset": true, 00:24:37.584 "compare": false, 00:24:37.584 "compare_and_write": false, 00:24:37.584 "abort": true, 00:24:37.584 "nvme_admin": false, 00:24:37.584 "nvme_io": false 00:24:37.584 }, 00:24:37.584 "memory_domains": [ 00:24:37.584 { 00:24:37.584 "dma_device_id": "system", 00:24:37.584 "dma_device_type": 1 
00:24:37.584 }, 00:24:37.584 { 00:24:37.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:37.584 "dma_device_type": 2 00:24:37.584 } 00:24:37.584 ], 00:24:37.584 "driver_specific": {} 00:24:37.584 }' 00:24:37.584 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.842 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:37.842 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:37.842 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.842 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:37.842 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:37.843 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:37.843 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.100 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:38.100 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.100 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.100 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:38.100 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:38.100 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:38.100 12:06:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:38.357 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:38.357 "name": "BaseBdev4", 00:24:38.357 "aliases": [ 00:24:38.357 "8913432a-2b91-4251-a593-8b2407f44b1d" 00:24:38.357 ], 00:24:38.357 "product_name": "Malloc disk", 00:24:38.357 "block_size": 512, 00:24:38.357 "num_blocks": 65536, 00:24:38.357 "uuid": "8913432a-2b91-4251-a593-8b2407f44b1d", 00:24:38.357 "assigned_rate_limits": { 00:24:38.357 "rw_ios_per_sec": 0, 00:24:38.357 "rw_mbytes_per_sec": 0, 00:24:38.357 "r_mbytes_per_sec": 0, 00:24:38.357 "w_mbytes_per_sec": 0 00:24:38.357 }, 00:24:38.357 "claimed": true, 00:24:38.357 "claim_type": "exclusive_write", 00:24:38.357 "zoned": false, 00:24:38.357 "supported_io_types": { 00:24:38.357 "read": true, 00:24:38.357 "write": true, 00:24:38.357 "unmap": true, 00:24:38.357 "write_zeroes": true, 00:24:38.357 "flush": true, 00:24:38.357 "reset": true, 00:24:38.357 "compare": false, 00:24:38.357 "compare_and_write": false, 00:24:38.357 "abort": true, 00:24:38.357 "nvme_admin": false, 00:24:38.357 "nvme_io": false 00:24:38.357 }, 00:24:38.357 "memory_domains": [ 00:24:38.357 { 00:24:38.357 "dma_device_id": "system", 00:24:38.357 "dma_device_type": 1 00:24:38.357 }, 00:24:38.357 { 00:24:38.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:38.357 "dma_device_type": 2 00:24:38.357 } 00:24:38.357 ], 00:24:38.357 "driver_specific": {} 00:24:38.357 }' 00:24:38.357 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.357 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:38.357 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 
== 512 ]] 00:24:38.357 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.614 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:38.614 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:38.614 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.614 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:38.614 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:38.614 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.614 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:38.871 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:38.871 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:38.871 [2024-07-21 12:06:37.727075] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:39.128 "name": "Existed_Raid", 00:24:39.128 "uuid": "12c0c3e9-1085-4c96-ac6b-7e40d2ad7794", 00:24:39.128 "strip_size_kb": 0, 00:24:39.128 "state": "online", 00:24:39.128 "raid_level": "raid1", 00:24:39.128 "superblock": false, 
00:24:39.128 "num_base_bdevs": 4, 00:24:39.128 "num_base_bdevs_discovered": 3, 00:24:39.128 "num_base_bdevs_operational": 3, 00:24:39.128 "base_bdevs_list": [ 00:24:39.128 { 00:24:39.128 "name": null, 00:24:39.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:39.128 "is_configured": false, 00:24:39.128 "data_offset": 0, 00:24:39.128 "data_size": 65536 00:24:39.128 }, 00:24:39.128 { 00:24:39.128 "name": "BaseBdev2", 00:24:39.128 "uuid": "97b9221c-133d-425f-81af-2ae27f18b527", 00:24:39.128 "is_configured": true, 00:24:39.128 "data_offset": 0, 00:24:39.128 "data_size": 65536 00:24:39.128 }, 00:24:39.128 { 00:24:39.128 "name": "BaseBdev3", 00:24:39.128 "uuid": "e3489a1d-4f56-4cab-b4ab-0c8545e2af3a", 00:24:39.128 "is_configured": true, 00:24:39.128 "data_offset": 0, 00:24:39.128 "data_size": 65536 00:24:39.128 }, 00:24:39.128 { 00:24:39.128 "name": "BaseBdev4", 00:24:39.128 "uuid": "8913432a-2b91-4251-a593-8b2407f44b1d", 00:24:39.128 "is_configured": true, 00:24:39.128 "data_offset": 0, 00:24:39.128 "data_size": 65536 00:24:39.128 } 00:24:39.128 ] 00:24:39.128 }' 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:39.128 12:06:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:40.074 12:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:24:40.074 12:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:40.074 12:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.074 12:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:40.074 12:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:40.074 12:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:40.074 12:06:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:40.332 [2024-07-21 12:06:39.180244] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:40.589 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:40.590 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:40.590 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:40.590 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:40.847 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:40.847 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:40.847 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:41.104 [2024-07-21 12:06:39.746988] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:41.104 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:41.104 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 
00:24:41.104 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.104 12:06:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:24:41.363 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:24:41.363 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:41.363 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:41.363 [2024-07-21 12:06:40.226694] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:41.363 [2024-07-21 12:06:40.226827] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:41.621 [2024-07-21 12:06:40.239240] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:41.621 [2024-07-21 12:06:40.239295] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:41.621 [2024-07-21 12:06:40.239308] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:24:41.621 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:24:41.621 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:24:41.621 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:41.621 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:24:41.880 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:24:41.880 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:24:41.880 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:24:41.880 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:24:41.880 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:41.880 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:42.139 BaseBdev2 00:24:42.139 12:06:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:24:42.139 12:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:24:42.139 12:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:42.139 12:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:42.139 12:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:42.139 12:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:42.139 12:06:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:42.397 12:06:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:42.655 [ 00:24:42.655 { 00:24:42.655 "name": "BaseBdev2", 00:24:42.655 "aliases": [ 00:24:42.655 "6c465db3-e4ae-4c90-87d2-87720706780a" 00:24:42.655 ], 00:24:42.655 "product_name": "Malloc disk", 00:24:42.655 "block_size": 512, 00:24:42.655 "num_blocks": 65536, 00:24:42.655 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:42.655 "assigned_rate_limits": { 00:24:42.655 "rw_ios_per_sec": 0, 00:24:42.655 "rw_mbytes_per_sec": 0, 00:24:42.655 "r_mbytes_per_sec": 0, 00:24:42.655 "w_mbytes_per_sec": 0 00:24:42.655 }, 00:24:42.655 "claimed": false, 00:24:42.655 "zoned": false, 00:24:42.655 "supported_io_types": { 00:24:42.655 "read": true, 00:24:42.655 "write": true, 00:24:42.655 "unmap": true, 00:24:42.655 "write_zeroes": true, 00:24:42.655 "flush": true, 00:24:42.655 "reset": true, 00:24:42.655 "compare": false, 00:24:42.655 "compare_and_write": false, 00:24:42.655 "abort": true, 00:24:42.655 "nvme_admin": false, 00:24:42.655 "nvme_io": false 00:24:42.655 }, 00:24:42.655 "memory_domains": [ 00:24:42.655 { 00:24:42.655 "dma_device_id": "system", 00:24:42.655 "dma_device_type": 1 00:24:42.655 }, 00:24:42.655 { 00:24:42.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:42.655 "dma_device_type": 2 00:24:42.655 } 00:24:42.655 ], 00:24:42.655 "driver_specific": {} 00:24:42.655 } 00:24:42.655 ] 00:24:42.655 12:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:42.655 12:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:42.655 12:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:42.655 12:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:42.914 BaseBdev3 00:24:42.914 12:06:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:24:42.914 12:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:24:42.914 12:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:42.914 12:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:42.914 12:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:42.914 12:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:42.914 12:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:43.171 12:06:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:43.171 [ 00:24:43.171 { 00:24:43.171 "name": "BaseBdev3", 00:24:43.171 "aliases": [ 00:24:43.171 "d67d9072-5a16-47df-a762-c1fda86f1e75" 00:24:43.171 ], 00:24:43.171 "product_name": "Malloc disk", 00:24:43.171 "block_size": 512, 00:24:43.171 "num_blocks": 65536, 00:24:43.171 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:43.171 "assigned_rate_limits": { 00:24:43.171 "rw_ios_per_sec": 0, 00:24:43.171 "rw_mbytes_per_sec": 0, 00:24:43.171 "r_mbytes_per_sec": 0, 00:24:43.171 
"w_mbytes_per_sec": 0 00:24:43.171 }, 00:24:43.171 "claimed": false, 00:24:43.171 "zoned": false, 00:24:43.171 "supported_io_types": { 00:24:43.171 "read": true, 00:24:43.171 "write": true, 00:24:43.171 "unmap": true, 00:24:43.171 "write_zeroes": true, 00:24:43.171 "flush": true, 00:24:43.171 "reset": true, 00:24:43.171 "compare": false, 00:24:43.171 "compare_and_write": false, 00:24:43.171 "abort": true, 00:24:43.171 "nvme_admin": false, 00:24:43.171 "nvme_io": false 00:24:43.171 }, 00:24:43.171 "memory_domains": [ 00:24:43.171 { 00:24:43.171 "dma_device_id": "system", 00:24:43.171 "dma_device_type": 1 00:24:43.171 }, 00:24:43.171 { 00:24:43.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.171 "dma_device_type": 2 00:24:43.171 } 00:24:43.171 ], 00:24:43.171 "driver_specific": {} 00:24:43.171 } 00:24:43.171 ] 00:24:43.171 12:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:43.171 12:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:43.171 12:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:43.171 12:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:43.427 BaseBdev4 00:24:43.427 12:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:24:43.427 12:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:24:43.427 12:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:43.427 12:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:43.427 12:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:43.427 12:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:43.427 12:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:43.684 12:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:43.941 [ 00:24:43.941 { 00:24:43.941 "name": "BaseBdev4", 00:24:43.941 "aliases": [ 00:24:43.941 "f2d4f438-0876-4b85-ba96-477a88086442" 00:24:43.941 ], 00:24:43.941 "product_name": "Malloc disk", 00:24:43.941 "block_size": 512, 00:24:43.941 "num_blocks": 65536, 00:24:43.941 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:43.941 "assigned_rate_limits": { 00:24:43.941 "rw_ios_per_sec": 0, 00:24:43.941 "rw_mbytes_per_sec": 0, 00:24:43.941 "r_mbytes_per_sec": 0, 00:24:43.941 "w_mbytes_per_sec": 0 00:24:43.941 }, 00:24:43.941 "claimed": false, 00:24:43.941 "zoned": false, 00:24:43.941 "supported_io_types": { 00:24:43.941 "read": true, 00:24:43.941 "write": true, 00:24:43.942 "unmap": true, 00:24:43.942 "write_zeroes": true, 00:24:43.942 "flush": true, 00:24:43.942 "reset": true, 00:24:43.942 "compare": false, 00:24:43.942 "compare_and_write": false, 00:24:43.942 "abort": true, 00:24:43.942 "nvme_admin": false, 00:24:43.942 "nvme_io": false 00:24:43.942 }, 00:24:43.942 "memory_domains": [ 00:24:43.942 { 00:24:43.942 "dma_device_id": "system", 00:24:43.942 "dma_device_type": 1 00:24:43.942 }, 00:24:43.942 { 
00:24:43.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:43.942 "dma_device_type": 2 00:24:43.942 } 00:24:43.942 ], 00:24:43.942 "driver_specific": {} 00:24:43.942 } 00:24:43.942 ] 00:24:43.942 12:06:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:43.942 12:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:24:43.942 12:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:24:43.942 12:06:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:44.199 [2024-07-21 12:06:42.987025] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:44.199 [2024-07-21 12:06:42.987139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:44.199 [2024-07-21 12:06:42.987181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:44.199 [2024-07-21 12:06:42.989398] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:44.199 [2024-07-21 12:06:42.989466] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:44.199 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.456 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:44.456 "name": "Existed_Raid", 00:24:44.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.456 "strip_size_kb": 0, 00:24:44.456 "state": "configuring", 00:24:44.456 "raid_level": "raid1", 00:24:44.456 "superblock": false, 00:24:44.456 "num_base_bdevs": 4, 00:24:44.456 "num_base_bdevs_discovered": 3, 00:24:44.456 "num_base_bdevs_operational": 4, 00:24:44.456 "base_bdevs_list": [ 00:24:44.456 { 00:24:44.456 "name": "BaseBdev1", 00:24:44.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:44.456 "is_configured": false, 00:24:44.456 "data_offset": 0, 00:24:44.456 
"data_size": 0 00:24:44.456 }, 00:24:44.456 { 00:24:44.456 "name": "BaseBdev2", 00:24:44.456 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:44.456 "is_configured": true, 00:24:44.456 "data_offset": 0, 00:24:44.456 "data_size": 65536 00:24:44.456 }, 00:24:44.456 { 00:24:44.456 "name": "BaseBdev3", 00:24:44.456 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:44.456 "is_configured": true, 00:24:44.456 "data_offset": 0, 00:24:44.456 "data_size": 65536 00:24:44.456 }, 00:24:44.456 { 00:24:44.456 "name": "BaseBdev4", 00:24:44.456 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:44.456 "is_configured": true, 00:24:44.456 "data_offset": 0, 00:24:44.456 "data_size": 65536 00:24:44.456 } 00:24:44.456 ] 00:24:44.456 }' 00:24:44.456 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:44.456 12:06:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:45.020 12:06:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:24:45.277 [2024-07-21 12:06:44.095326] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:45.277 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.534 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:45.534 "name": "Existed_Raid", 00:24:45.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.534 "strip_size_kb": 0, 00:24:45.534 "state": "configuring", 00:24:45.534 "raid_level": "raid1", 00:24:45.534 "superblock": false, 00:24:45.534 "num_base_bdevs": 4, 00:24:45.534 "num_base_bdevs_discovered": 2, 00:24:45.535 "num_base_bdevs_operational": 4, 00:24:45.535 "base_bdevs_list": [ 00:24:45.535 { 00:24:45.535 "name": "BaseBdev1", 00:24:45.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:45.535 "is_configured": false, 00:24:45.535 "data_offset": 0, 00:24:45.535 "data_size": 0 00:24:45.535 }, 00:24:45.535 { 00:24:45.535 "name": null, 00:24:45.535 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:45.535 
"is_configured": false, 00:24:45.535 "data_offset": 0, 00:24:45.535 "data_size": 65536 00:24:45.535 }, 00:24:45.535 { 00:24:45.535 "name": "BaseBdev3", 00:24:45.535 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:45.535 "is_configured": true, 00:24:45.535 "data_offset": 0, 00:24:45.535 "data_size": 65536 00:24:45.535 }, 00:24:45.535 { 00:24:45.535 "name": "BaseBdev4", 00:24:45.535 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:45.535 "is_configured": true, 00:24:45.535 "data_offset": 0, 00:24:45.535 "data_size": 65536 00:24:45.535 } 00:24:45.535 ] 00:24:45.535 }' 00:24:45.535 12:06:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:45.535 12:06:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:46.465 12:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:46.465 12:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:46.465 12:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:24:46.465 12:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:46.722 [2024-07-21 12:06:45.584122] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:46.722 BaseBdev1 00:24:46.979 12:06:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:24:46.979 12:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:24:46.979 12:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:46.979 12:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local i 00:24:46.979 12:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:46.979 12:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:46.979 12:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:46.979 12:06:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:47.236 [ 00:24:47.236 { 00:24:47.236 "name": "BaseBdev1", 00:24:47.236 "aliases": [ 00:24:47.236 "dbb108ce-d90c-405c-860c-c3cf2e395377" 00:24:47.236 ], 00:24:47.236 "product_name": "Malloc disk", 00:24:47.236 "block_size": 512, 00:24:47.236 "num_blocks": 65536, 00:24:47.236 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:47.236 "assigned_rate_limits": { 00:24:47.236 "rw_ios_per_sec": 0, 00:24:47.236 "rw_mbytes_per_sec": 0, 00:24:47.236 "r_mbytes_per_sec": 0, 00:24:47.236 "w_mbytes_per_sec": 0 00:24:47.236 }, 00:24:47.236 "claimed": true, 00:24:47.236 "claim_type": "exclusive_write", 00:24:47.236 "zoned": false, 00:24:47.236 "supported_io_types": { 00:24:47.236 "read": true, 00:24:47.236 "write": true, 00:24:47.236 "unmap": true, 00:24:47.236 "write_zeroes": true, 00:24:47.236 "flush": true, 00:24:47.236 "reset": true, 00:24:47.236 "compare": false, 00:24:47.236 "compare_and_write": false, 00:24:47.236 "abort": 
true, 00:24:47.236 "nvme_admin": false, 00:24:47.236 "nvme_io": false 00:24:47.236 }, 00:24:47.236 "memory_domains": [ 00:24:47.236 { 00:24:47.236 "dma_device_id": "system", 00:24:47.236 "dma_device_type": 1 00:24:47.236 }, 00:24:47.236 { 00:24:47.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:47.236 "dma_device_type": 2 00:24:47.236 } 00:24:47.236 ], 00:24:47.236 "driver_specific": {} 00:24:47.236 } 00:24:47.236 ] 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.236 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:47.493 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:47.493 "name": "Existed_Raid", 00:24:47.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:47.493 "strip_size_kb": 0, 00:24:47.493 "state": "configuring", 00:24:47.493 "raid_level": "raid1", 00:24:47.493 "superblock": false, 00:24:47.493 "num_base_bdevs": 4, 00:24:47.493 "num_base_bdevs_discovered": 3, 00:24:47.493 "num_base_bdevs_operational": 4, 00:24:47.493 "base_bdevs_list": [ 00:24:47.493 { 00:24:47.493 "name": "BaseBdev1", 00:24:47.493 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:47.493 "is_configured": true, 00:24:47.493 "data_offset": 0, 00:24:47.493 "data_size": 65536 00:24:47.493 }, 00:24:47.493 { 00:24:47.493 "name": null, 00:24:47.493 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:47.493 "is_configured": false, 00:24:47.493 "data_offset": 0, 00:24:47.493 "data_size": 65536 00:24:47.493 }, 00:24:47.493 { 00:24:47.493 "name": "BaseBdev3", 00:24:47.493 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:47.493 "is_configured": true, 00:24:47.493 "data_offset": 0, 00:24:47.493 "data_size": 65536 00:24:47.493 }, 00:24:47.493 { 00:24:47.493 "name": "BaseBdev4", 00:24:47.493 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:47.493 "is_configured": true, 00:24:47.493 "data_offset": 0, 00:24:47.493 "data_size": 65536 00:24:47.493 } 00:24:47.493 ] 00:24:47.493 }' 00:24:47.493 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:47.493 12:06:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:48.423 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.423 12:06:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:48.423 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:24:48.423 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:24:48.680 [2024-07-21 12:06:47.420603] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:48.680 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:48.680 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:48.680 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:48.680 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:48.680 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:48.680 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:48.680 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:48.680 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:48.680 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:48.681 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:48.681 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.681 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:48.938 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:48.938 "name": "Existed_Raid", 00:24:48.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:48.938 "strip_size_kb": 0, 00:24:48.938 "state": "configuring", 00:24:48.938 "raid_level": "raid1", 00:24:48.938 "superblock": false, 00:24:48.938 "num_base_bdevs": 4, 00:24:48.938 "num_base_bdevs_discovered": 2, 00:24:48.938 "num_base_bdevs_operational": 4, 00:24:48.938 "base_bdevs_list": [ 00:24:48.938 { 00:24:48.938 "name": "BaseBdev1", 00:24:48.938 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:48.938 "is_configured": true, 00:24:48.938 "data_offset": 0, 00:24:48.938 "data_size": 65536 00:24:48.938 }, 00:24:48.938 { 00:24:48.938 "name": null, 00:24:48.938 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:48.938 "is_configured": false, 00:24:48.938 "data_offset": 0, 00:24:48.938 "data_size": 65536 00:24:48.938 }, 00:24:48.938 { 00:24:48.938 "name": null, 00:24:48.938 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:48.938 "is_configured": false, 00:24:48.938 "data_offset": 0, 00:24:48.938 "data_size": 65536 00:24:48.938 }, 00:24:48.938 { 00:24:48.938 "name": "BaseBdev4", 00:24:48.938 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 
00:24:48.938 "is_configured": true, 00:24:48.938 "data_offset": 0, 00:24:48.938 "data_size": 65536 00:24:48.938 } 00:24:48.938 ] 00:24:48.938 }' 00:24:48.938 12:06:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:48.938 12:06:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:49.885 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.885 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:49.885 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:24:49.885 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:24:50.143 [2024-07-21 12:06:48.825012] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.143 12:06:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:50.401 12:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:50.401 "name": "Existed_Raid", 00:24:50.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:50.401 "strip_size_kb": 0, 00:24:50.401 "state": "configuring", 00:24:50.401 "raid_level": "raid1", 00:24:50.401 "superblock": false, 00:24:50.401 "num_base_bdevs": 4, 00:24:50.401 "num_base_bdevs_discovered": 3, 00:24:50.401 "num_base_bdevs_operational": 4, 00:24:50.401 "base_bdevs_list": [ 00:24:50.401 { 00:24:50.401 "name": "BaseBdev1", 00:24:50.401 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:50.401 "is_configured": true, 00:24:50.401 "data_offset": 0, 00:24:50.401 "data_size": 65536 00:24:50.401 }, 00:24:50.401 { 00:24:50.401 "name": null, 00:24:50.401 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:50.401 "is_configured": false, 00:24:50.401 "data_offset": 0, 00:24:50.401 "data_size": 65536 00:24:50.401 }, 00:24:50.401 { 00:24:50.401 
"name": "BaseBdev3", 00:24:50.401 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:50.401 "is_configured": true, 00:24:50.401 "data_offset": 0, 00:24:50.401 "data_size": 65536 00:24:50.401 }, 00:24:50.401 { 00:24:50.401 "name": "BaseBdev4", 00:24:50.401 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:50.401 "is_configured": true, 00:24:50.401 "data_offset": 0, 00:24:50.401 "data_size": 65536 00:24:50.401 } 00:24:50.401 ] 00:24:50.401 }' 00:24:50.401 12:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:50.401 12:06:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:50.966 12:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.966 12:06:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:24:51.223 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:24:51.223 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:51.481 [2024-07-21 12:06:50.329283] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.738 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:51.996 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:51.996 "name": "Existed_Raid", 00:24:51.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.996 "strip_size_kb": 0, 00:24:51.996 "state": "configuring", 00:24:51.996 "raid_level": "raid1", 00:24:51.996 "superblock": false, 00:24:51.996 "num_base_bdevs": 4, 00:24:51.996 "num_base_bdevs_discovered": 2, 00:24:51.996 "num_base_bdevs_operational": 4, 00:24:51.996 "base_bdevs_list": [ 00:24:51.996 { 00:24:51.996 "name": null, 00:24:51.996 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:51.996 "is_configured": false, 00:24:51.996 "data_offset": 0, 00:24:51.996 "data_size": 65536 
00:24:51.996 }, 00:24:51.996 { 00:24:51.996 "name": null, 00:24:51.996 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:51.996 "is_configured": false, 00:24:51.996 "data_offset": 0, 00:24:51.996 "data_size": 65536 00:24:51.996 }, 00:24:51.996 { 00:24:51.996 "name": "BaseBdev3", 00:24:51.996 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:51.996 "is_configured": true, 00:24:51.996 "data_offset": 0, 00:24:51.996 "data_size": 65536 00:24:51.996 }, 00:24:51.996 { 00:24:51.996 "name": "BaseBdev4", 00:24:51.996 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:51.996 "is_configured": true, 00:24:51.996 "data_offset": 0, 00:24:51.996 "data_size": 65536 00:24:51.996 } 00:24:51.996 ] 00:24:51.996 }' 00:24:51.996 12:06:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:51.996 12:06:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:52.562 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.562 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:24:52.820 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:24:52.820 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:24:53.078 [2024-07-21 12:06:51.809652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.078 12:06:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:53.336 12:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:53.336 "name": "Existed_Raid", 00:24:53.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.336 "strip_size_kb": 0, 00:24:53.336 "state": "configuring", 00:24:53.336 "raid_level": "raid1", 00:24:53.336 "superblock": false, 00:24:53.336 "num_base_bdevs": 4, 00:24:53.336 
"num_base_bdevs_discovered": 3, 00:24:53.336 "num_base_bdevs_operational": 4, 00:24:53.336 "base_bdevs_list": [ 00:24:53.336 { 00:24:53.336 "name": null, 00:24:53.336 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:53.336 "is_configured": false, 00:24:53.336 "data_offset": 0, 00:24:53.336 "data_size": 65536 00:24:53.336 }, 00:24:53.336 { 00:24:53.336 "name": "BaseBdev2", 00:24:53.336 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:53.336 "is_configured": true, 00:24:53.336 "data_offset": 0, 00:24:53.336 "data_size": 65536 00:24:53.336 }, 00:24:53.336 { 00:24:53.336 "name": "BaseBdev3", 00:24:53.336 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:53.336 "is_configured": true, 00:24:53.336 "data_offset": 0, 00:24:53.336 "data_size": 65536 00:24:53.336 }, 00:24:53.336 { 00:24:53.336 "name": "BaseBdev4", 00:24:53.336 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:53.336 "is_configured": true, 00:24:53.336 "data_offset": 0, 00:24:53.336 "data_size": 65536 00:24:53.336 } 00:24:53.336 ] 00:24:53.336 }' 00:24:53.336 12:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:53.336 12:06:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:53.912 12:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.912 12:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:24:54.204 12:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:24:54.204 12:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.204 12:06:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:24:54.487 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u dbb108ce-d90c-405c-860c-c3cf2e395377 00:24:54.744 [2024-07-21 12:06:53.387474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:24:54.744 [2024-07-21 12:06:53.387567] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:24:54.744 [2024-07-21 12:06:53.387579] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:24:54.744 [2024-07-21 12:06:53.387685] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:24:54.744 [2024-07-21 12:06:53.388087] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:24:54.744 [2024-07-21 12:06:53.388122] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009080 00:24:54.744 [2024-07-21 12:06:53.388383] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:54.744 NewBaseBdev 00:24:54.744 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:24:54.744 12:06:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:24:54.744 12:06:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:24:54.744 12:06:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@897 -- # local i 00:24:54.744 12:06:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:24:54.744 12:06:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:24:54.744 12:06:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:55.000 12:06:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:24:55.000 [ 00:24:55.000 { 00:24:55.000 "name": "NewBaseBdev", 00:24:55.000 "aliases": [ 00:24:55.000 "dbb108ce-d90c-405c-860c-c3cf2e395377" 00:24:55.000 ], 00:24:55.000 "product_name": "Malloc disk", 00:24:55.000 "block_size": 512, 00:24:55.000 "num_blocks": 65536, 00:24:55.000 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:55.000 "assigned_rate_limits": { 00:24:55.000 "rw_ios_per_sec": 0, 00:24:55.000 "rw_mbytes_per_sec": 0, 00:24:55.000 "r_mbytes_per_sec": 0, 00:24:55.000 "w_mbytes_per_sec": 0 00:24:55.000 }, 00:24:55.000 "claimed": true, 00:24:55.000 "claim_type": "exclusive_write", 00:24:55.000 "zoned": false, 00:24:55.000 "supported_io_types": { 00:24:55.000 "read": true, 00:24:55.000 "write": true, 00:24:55.000 "unmap": true, 00:24:55.000 "write_zeroes": true, 00:24:55.000 "flush": true, 00:24:55.000 "reset": true, 00:24:55.000 "compare": false, 00:24:55.000 "compare_and_write": false, 00:24:55.000 "abort": true, 00:24:55.000 "nvme_admin": false, 00:24:55.000 "nvme_io": false 00:24:55.000 }, 00:24:55.000 "memory_domains": [ 00:24:55.000 { 00:24:55.000 "dma_device_id": "system", 00:24:55.000 "dma_device_type": 1 00:24:55.000 }, 00:24:55.000 { 00:24:55.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:55.000 "dma_device_type": 2 00:24:55.000 } 00:24:55.000 ], 00:24:55.000 "driver_specific": {} 00:24:55.000 } 00:24:55.000 ] 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.257 12:06:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:24:55.257 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:24:55.257 "name": "Existed_Raid", 00:24:55.257 "uuid": "83423f3d-2eb5-486a-a3d0-49eba4fbd2ae", 00:24:55.257 "strip_size_kb": 0, 00:24:55.257 "state": "online", 00:24:55.257 "raid_level": "raid1", 00:24:55.257 "superblock": false, 00:24:55.257 "num_base_bdevs": 4, 00:24:55.257 "num_base_bdevs_discovered": 4, 00:24:55.257 "num_base_bdevs_operational": 4, 00:24:55.257 "base_bdevs_list": [ 00:24:55.257 { 00:24:55.257 "name": "NewBaseBdev", 00:24:55.257 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:55.257 "is_configured": true, 00:24:55.257 "data_offset": 0, 00:24:55.257 "data_size": 65536 00:24:55.257 }, 00:24:55.257 { 00:24:55.257 "name": "BaseBdev2", 00:24:55.257 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:55.257 "is_configured": true, 00:24:55.257 "data_offset": 0, 00:24:55.257 "data_size": 65536 00:24:55.257 }, 00:24:55.257 { 00:24:55.257 "name": "BaseBdev3", 00:24:55.257 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:55.257 "is_configured": true, 00:24:55.257 "data_offset": 0, 00:24:55.257 "data_size": 65536 00:24:55.257 }, 00:24:55.257 { 00:24:55.257 "name": "BaseBdev4", 00:24:55.257 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:55.257 "is_configured": true, 00:24:55.257 "data_offset": 0, 00:24:55.257 "data_size": 65536 00:24:55.257 } 00:24:55.257 ] 00:24:55.257 }' 00:24:55.257 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:24:55.257 12:06:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:56.203 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:24:56.203 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:24:56.203 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:24:56.203 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:24:56.203 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:24:56.203 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:24:56.203 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:24:56.203 12:06:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:24:56.203 [2024-07-21 12:06:54.984211] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:56.203 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:24:56.203 "name": "Existed_Raid", 00:24:56.203 "aliases": [ 00:24:56.203 "83423f3d-2eb5-486a-a3d0-49eba4fbd2ae" 00:24:56.203 ], 00:24:56.203 "product_name": "Raid Volume", 00:24:56.203 "block_size": 512, 00:24:56.203 "num_blocks": 65536, 00:24:56.203 "uuid": "83423f3d-2eb5-486a-a3d0-49eba4fbd2ae", 00:24:56.203 "assigned_rate_limits": { 00:24:56.203 "rw_ios_per_sec": 0, 00:24:56.203 "rw_mbytes_per_sec": 0, 00:24:56.203 "r_mbytes_per_sec": 0, 00:24:56.203 "w_mbytes_per_sec": 0 00:24:56.203 }, 00:24:56.203 "claimed": false, 00:24:56.203 "zoned": false, 00:24:56.203 "supported_io_types": { 00:24:56.203 "read": true, 00:24:56.203 "write": true, 00:24:56.203 "unmap": false, 00:24:56.203 
"write_zeroes": true, 00:24:56.203 "flush": false, 00:24:56.203 "reset": true, 00:24:56.203 "compare": false, 00:24:56.203 "compare_and_write": false, 00:24:56.203 "abort": false, 00:24:56.203 "nvme_admin": false, 00:24:56.203 "nvme_io": false 00:24:56.203 }, 00:24:56.203 "memory_domains": [ 00:24:56.203 { 00:24:56.203 "dma_device_id": "system", 00:24:56.203 "dma_device_type": 1 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.203 "dma_device_type": 2 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "dma_device_id": "system", 00:24:56.203 "dma_device_type": 1 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.203 "dma_device_type": 2 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "dma_device_id": "system", 00:24:56.203 "dma_device_type": 1 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.203 "dma_device_type": 2 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "dma_device_id": "system", 00:24:56.203 "dma_device_type": 1 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.203 "dma_device_type": 2 00:24:56.203 } 00:24:56.203 ], 00:24:56.203 "driver_specific": { 00:24:56.203 "raid": { 00:24:56.203 "uuid": "83423f3d-2eb5-486a-a3d0-49eba4fbd2ae", 00:24:56.203 "strip_size_kb": 0, 00:24:56.203 "state": "online", 00:24:56.203 "raid_level": "raid1", 00:24:56.203 "superblock": false, 00:24:56.203 "num_base_bdevs": 4, 00:24:56.203 "num_base_bdevs_discovered": 4, 00:24:56.203 "num_base_bdevs_operational": 4, 00:24:56.203 "base_bdevs_list": [ 00:24:56.203 { 00:24:56.203 "name": "NewBaseBdev", 00:24:56.203 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:56.203 "is_configured": true, 00:24:56.203 "data_offset": 0, 00:24:56.203 "data_size": 65536 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "name": "BaseBdev2", 00:24:56.203 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:56.203 "is_configured": true, 00:24:56.203 "data_offset": 0, 00:24:56.203 "data_size": 65536 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "name": "BaseBdev3", 00:24:56.203 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:56.203 "is_configured": true, 00:24:56.203 "data_offset": 0, 00:24:56.203 "data_size": 65536 00:24:56.203 }, 00:24:56.203 { 00:24:56.203 "name": "BaseBdev4", 00:24:56.203 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:56.203 "is_configured": true, 00:24:56.203 "data_offset": 0, 00:24:56.203 "data_size": 65536 00:24:56.203 } 00:24:56.203 ] 00:24:56.203 } 00:24:56.203 } 00:24:56.203 }' 00:24:56.203 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:24:56.203 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:24:56.203 BaseBdev2 00:24:56.203 BaseBdev3 00:24:56.203 BaseBdev4' 00:24:56.203 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:56.203 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:24:56.203 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:56.461 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:56.461 "name": "NewBaseBdev", 00:24:56.461 "aliases": [ 00:24:56.461 
"dbb108ce-d90c-405c-860c-c3cf2e395377" 00:24:56.461 ], 00:24:56.461 "product_name": "Malloc disk", 00:24:56.461 "block_size": 512, 00:24:56.461 "num_blocks": 65536, 00:24:56.461 "uuid": "dbb108ce-d90c-405c-860c-c3cf2e395377", 00:24:56.461 "assigned_rate_limits": { 00:24:56.461 "rw_ios_per_sec": 0, 00:24:56.461 "rw_mbytes_per_sec": 0, 00:24:56.461 "r_mbytes_per_sec": 0, 00:24:56.461 "w_mbytes_per_sec": 0 00:24:56.461 }, 00:24:56.461 "claimed": true, 00:24:56.461 "claim_type": "exclusive_write", 00:24:56.461 "zoned": false, 00:24:56.461 "supported_io_types": { 00:24:56.461 "read": true, 00:24:56.461 "write": true, 00:24:56.461 "unmap": true, 00:24:56.461 "write_zeroes": true, 00:24:56.461 "flush": true, 00:24:56.461 "reset": true, 00:24:56.461 "compare": false, 00:24:56.461 "compare_and_write": false, 00:24:56.461 "abort": true, 00:24:56.461 "nvme_admin": false, 00:24:56.461 "nvme_io": false 00:24:56.461 }, 00:24:56.461 "memory_domains": [ 00:24:56.461 { 00:24:56.461 "dma_device_id": "system", 00:24:56.461 "dma_device_type": 1 00:24:56.461 }, 00:24:56.461 { 00:24:56.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:56.462 "dma_device_type": 2 00:24:56.462 } 00:24:56.462 ], 00:24:56.462 "driver_specific": {} 00:24:56.462 }' 00:24:56.718 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:56.718 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:56.718 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:56.718 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:56.718 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:56.718 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:56.718 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:56.974 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:56.975 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:56.975 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:56.975 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:56.975 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:56.975 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:56.975 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:24:56.975 12:06:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:57.232 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:57.232 "name": "BaseBdev2", 00:24:57.232 "aliases": [ 00:24:57.232 "6c465db3-e4ae-4c90-87d2-87720706780a" 00:24:57.232 ], 00:24:57.232 "product_name": "Malloc disk", 00:24:57.232 "block_size": 512, 00:24:57.232 "num_blocks": 65536, 00:24:57.232 "uuid": "6c465db3-e4ae-4c90-87d2-87720706780a", 00:24:57.232 "assigned_rate_limits": { 00:24:57.232 "rw_ios_per_sec": 0, 00:24:57.232 "rw_mbytes_per_sec": 0, 00:24:57.232 "r_mbytes_per_sec": 0, 00:24:57.232 "w_mbytes_per_sec": 0 00:24:57.232 }, 00:24:57.232 "claimed": true, 00:24:57.232 "claim_type": "exclusive_write", 
00:24:57.232 "zoned": false, 00:24:57.232 "supported_io_types": { 00:24:57.232 "read": true, 00:24:57.232 "write": true, 00:24:57.232 "unmap": true, 00:24:57.232 "write_zeroes": true, 00:24:57.232 "flush": true, 00:24:57.232 "reset": true, 00:24:57.232 "compare": false, 00:24:57.232 "compare_and_write": false, 00:24:57.232 "abort": true, 00:24:57.232 "nvme_admin": false, 00:24:57.232 "nvme_io": false 00:24:57.232 }, 00:24:57.232 "memory_domains": [ 00:24:57.232 { 00:24:57.232 "dma_device_id": "system", 00:24:57.232 "dma_device_type": 1 00:24:57.232 }, 00:24:57.232 { 00:24:57.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:57.232 "dma_device_type": 2 00:24:57.232 } 00:24:57.232 ], 00:24:57.232 "driver_specific": {} 00:24:57.232 }' 00:24:57.232 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.232 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:57.489 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:57.489 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.489 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:57.489 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:57.489 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:57.489 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:57.489 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:57.489 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.745 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:57.745 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:57.745 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:57.745 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:24:57.745 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:58.001 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:58.001 "name": "BaseBdev3", 00:24:58.001 "aliases": [ 00:24:58.001 "d67d9072-5a16-47df-a762-c1fda86f1e75" 00:24:58.001 ], 00:24:58.001 "product_name": "Malloc disk", 00:24:58.001 "block_size": 512, 00:24:58.001 "num_blocks": 65536, 00:24:58.001 "uuid": "d67d9072-5a16-47df-a762-c1fda86f1e75", 00:24:58.001 "assigned_rate_limits": { 00:24:58.001 "rw_ios_per_sec": 0, 00:24:58.001 "rw_mbytes_per_sec": 0, 00:24:58.001 "r_mbytes_per_sec": 0, 00:24:58.001 "w_mbytes_per_sec": 0 00:24:58.001 }, 00:24:58.001 "claimed": true, 00:24:58.001 "claim_type": "exclusive_write", 00:24:58.001 "zoned": false, 00:24:58.001 "supported_io_types": { 00:24:58.001 "read": true, 00:24:58.001 "write": true, 00:24:58.001 "unmap": true, 00:24:58.001 "write_zeroes": true, 00:24:58.001 "flush": true, 00:24:58.001 "reset": true, 00:24:58.001 "compare": false, 00:24:58.001 "compare_and_write": false, 00:24:58.001 "abort": true, 00:24:58.001 "nvme_admin": false, 00:24:58.001 "nvme_io": false 00:24:58.001 }, 00:24:58.001 "memory_domains": [ 00:24:58.001 { 00:24:58.001 "dma_device_id": 
"system", 00:24:58.001 "dma_device_type": 1 00:24:58.001 }, 00:24:58.001 { 00:24:58.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.001 "dma_device_type": 2 00:24:58.001 } 00:24:58.001 ], 00:24:58.001 "driver_specific": {} 00:24:58.001 }' 00:24:58.001 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.001 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.001 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:58.001 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.001 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.258 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:58.258 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.258 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.258 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:58.258 12:06:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.258 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:58.258 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:58.258 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:24:58.258 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:24:58.258 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:24:58.516 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:24:58.516 "name": "BaseBdev4", 00:24:58.516 "aliases": [ 00:24:58.516 "f2d4f438-0876-4b85-ba96-477a88086442" 00:24:58.516 ], 00:24:58.516 "product_name": "Malloc disk", 00:24:58.516 "block_size": 512, 00:24:58.516 "num_blocks": 65536, 00:24:58.516 "uuid": "f2d4f438-0876-4b85-ba96-477a88086442", 00:24:58.516 "assigned_rate_limits": { 00:24:58.516 "rw_ios_per_sec": 0, 00:24:58.516 "rw_mbytes_per_sec": 0, 00:24:58.516 "r_mbytes_per_sec": 0, 00:24:58.516 "w_mbytes_per_sec": 0 00:24:58.516 }, 00:24:58.516 "claimed": true, 00:24:58.516 "claim_type": "exclusive_write", 00:24:58.516 "zoned": false, 00:24:58.516 "supported_io_types": { 00:24:58.516 "read": true, 00:24:58.516 "write": true, 00:24:58.516 "unmap": true, 00:24:58.516 "write_zeroes": true, 00:24:58.516 "flush": true, 00:24:58.516 "reset": true, 00:24:58.516 "compare": false, 00:24:58.516 "compare_and_write": false, 00:24:58.516 "abort": true, 00:24:58.516 "nvme_admin": false, 00:24:58.516 "nvme_io": false 00:24:58.516 }, 00:24:58.516 "memory_domains": [ 00:24:58.516 { 00:24:58.516 "dma_device_id": "system", 00:24:58.516 "dma_device_type": 1 00:24:58.516 }, 00:24:58.516 { 00:24:58.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:58.516 "dma_device_type": 2 00:24:58.516 } 00:24:58.516 ], 00:24:58.516 "driver_specific": {} 00:24:58.516 }' 00:24:58.516 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.774 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:24:58.774 12:06:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:24:58.774 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.774 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:24:58.774 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:24:58.774 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:58.774 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:24:59.031 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:24:59.031 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.031 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:24:59.031 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:24:59.031 12:06:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:59.289 [2024-07-21 12:06:58.012605] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:59.289 [2024-07-21 12:06:58.012651] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:59.289 [2024-07-21 12:06:58.012746] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:59.289 [2024-07-21 12:06:58.013069] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:59.289 [2024-07-21 12:06:58.013098] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name Existed_Raid, state offline 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 150968 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 150968 ']' 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # kill -0 150968 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # uname 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 150968 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 150968' 00:24:59.289 killing process with pid 150968 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@965 -- # kill 150968 00:24:59.289 [2024-07-21 12:06:58.055394] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:59.289 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # wait 150968 00:24:59.289 [2024-07-21 12:06:58.107826] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:59.546 12:06:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:24:59.546 00:24:59.546 real 0m34.565s 00:24:59.546 user 1m5.854s 
00:24:59.546 sys 0m4.058s 00:24:59.546 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:59.546 12:06:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:24:59.546 ************************************ 00:24:59.546 END TEST raid_state_function_test 00:24:59.546 ************************************ 00:24:59.804 12:06:58 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:24:59.804 12:06:58 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:24:59.804 12:06:58 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:59.804 12:06:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:24:59.804 ************************************ 00:24:59.804 START TEST raid_state_function_test_sb 00:24:59.804 ************************************ 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 4 true 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=152068 00:24:59.804 Process raid pid: 152068 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 152068' 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 152068 /var/tmp/spdk-raid.sock 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 152068 ']' 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:59.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:59.804 12:06:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:24:59.804 [2024-07-21 12:06:58.501275] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:59.804 [2024-07-21 12:06:58.501755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.062 [2024-07-21 12:06:58.671035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.062 [2024-07-21 12:06:58.770913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.062 [2024-07-21 12:06:58.831214] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:00.627 12:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:00.627 12:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:25:00.627 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:00.884 [2024-07-21 12:06:59.709619] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:00.884 [2024-07-21 12:06:59.709981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:00.884 [2024-07-21 12:06:59.710118] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:00.884 [2024-07-21 12:06:59.710184] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:00.884 [2024-07-21 12:06:59.710291] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:00.884 [2024-07-21 12:06:59.710380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:00.884 [2024-07-21 12:06:59.710617] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:00.884 [2024-07-21 12:06:59.710702] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:00.884 12:06:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:01.141 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:01.141 "name": "Existed_Raid", 00:25:01.141 "uuid": "7372f34d-21b4-48b6-b96c-176dd60963b3", 00:25:01.141 "strip_size_kb": 0, 00:25:01.141 "state": "configuring", 00:25:01.141 "raid_level": "raid1", 00:25:01.141 "superblock": true, 00:25:01.141 "num_base_bdevs": 4, 00:25:01.141 "num_base_bdevs_discovered": 0, 00:25:01.141 "num_base_bdevs_operational": 4, 00:25:01.141 "base_bdevs_list": [ 00:25:01.141 { 00:25:01.141 "name": "BaseBdev1", 00:25:01.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.141 "is_configured": false, 00:25:01.141 "data_offset": 0, 00:25:01.141 "data_size": 0 00:25:01.141 }, 00:25:01.141 { 00:25:01.141 "name": "BaseBdev2", 00:25:01.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.141 "is_configured": false, 00:25:01.141 "data_offset": 0, 00:25:01.141 "data_size": 0 00:25:01.141 }, 00:25:01.141 { 00:25:01.141 "name": "BaseBdev3", 00:25:01.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.141 "is_configured": false, 00:25:01.141 "data_offset": 0, 00:25:01.141 "data_size": 0 00:25:01.141 }, 00:25:01.141 { 00:25:01.141 "name": "BaseBdev4", 00:25:01.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:01.141 "is_configured": false, 00:25:01.141 "data_offset": 0, 00:25:01.141 "data_size": 0 00:25:01.141 } 00:25:01.141 ] 00:25:01.141 }' 00:25:01.141 12:06:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:01.141 12:06:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:02.071 12:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:02.071 [2024-07-21 12:07:00.853710] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:02.071 [2024-07-21 12:07:00.854022] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:25:02.071 12:07:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:02.329 [2024-07-21 12:07:01.125757] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:02.329 [2024-07-21 12:07:01.126153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:02.329 [2024-07-21 12:07:01.126275] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:02.329 [2024-07-21 12:07:01.126386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:02.329 [2024-07-21 12:07:01.126628] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:02.329 [2024-07-21 12:07:01.126699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:02.329 [2024-07-21 12:07:01.126888] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:02.329 [2024-07-21 12:07:01.126958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:02.329 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:02.586 [2024-07-21 12:07:01.373232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:02.586 BaseBdev1 00:25:02.587 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:25:02.587 12:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:02.587 12:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:02.587 12:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:02.587 12:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:02.587 12:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:02.587 12:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:02.844 12:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:03.102 [ 00:25:03.102 { 00:25:03.102 "name": "BaseBdev1", 00:25:03.102 "aliases": [ 00:25:03.102 "9f3a46da-7b17-4352-8480-0b9f606f2909" 00:25:03.102 ], 00:25:03.102 "product_name": "Malloc disk", 00:25:03.102 "block_size": 512, 00:25:03.102 "num_blocks": 65536, 00:25:03.102 "uuid": "9f3a46da-7b17-4352-8480-0b9f606f2909", 00:25:03.102 "assigned_rate_limits": { 00:25:03.102 "rw_ios_per_sec": 0, 00:25:03.102 "rw_mbytes_per_sec": 0, 00:25:03.102 "r_mbytes_per_sec": 0, 00:25:03.102 "w_mbytes_per_sec": 0 00:25:03.102 }, 00:25:03.102 "claimed": true, 00:25:03.103 "claim_type": "exclusive_write", 00:25:03.103 "zoned": false, 00:25:03.103 "supported_io_types": { 00:25:03.103 "read": true, 00:25:03.103 "write": true, 00:25:03.103 "unmap": true, 00:25:03.103 "write_zeroes": true, 00:25:03.103 "flush": true, 00:25:03.103 "reset": true, 00:25:03.103 "compare": false, 00:25:03.103 "compare_and_write": false, 00:25:03.103 "abort": true, 00:25:03.103 "nvme_admin": false, 00:25:03.103 "nvme_io": false 00:25:03.103 }, 00:25:03.103 "memory_domains": [ 00:25:03.103 { 00:25:03.103 "dma_device_id": "system", 00:25:03.103 "dma_device_type": 1 00:25:03.103 }, 00:25:03.103 { 00:25:03.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:03.103 "dma_device_type": 2 00:25:03.103 } 00:25:03.103 ], 00:25:03.103 "driver_specific": {} 00:25:03.103 } 00:25:03.103 ] 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:03.103 12:07:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:03.361 12:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:03.361 "name": "Existed_Raid", 00:25:03.361 "uuid": "a5780a35-adc3-4509-bfda-a5e0e942d571", 00:25:03.361 "strip_size_kb": 0, 00:25:03.361 "state": "configuring", 00:25:03.361 "raid_level": "raid1", 00:25:03.361 "superblock": true, 00:25:03.361 "num_base_bdevs": 4, 00:25:03.361 "num_base_bdevs_discovered": 1, 00:25:03.361 "num_base_bdevs_operational": 4, 00:25:03.361 "base_bdevs_list": [ 00:25:03.361 { 00:25:03.361 "name": "BaseBdev1", 00:25:03.361 "uuid": "9f3a46da-7b17-4352-8480-0b9f606f2909", 00:25:03.361 "is_configured": true, 00:25:03.361 "data_offset": 2048, 00:25:03.361 "data_size": 63488 00:25:03.361 }, 00:25:03.361 { 00:25:03.361 "name": "BaseBdev2", 00:25:03.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.361 "is_configured": false, 00:25:03.361 "data_offset": 0, 00:25:03.361 "data_size": 0 00:25:03.361 }, 00:25:03.361 { 00:25:03.361 "name": "BaseBdev3", 00:25:03.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.361 "is_configured": false, 00:25:03.361 "data_offset": 0, 00:25:03.361 "data_size": 0 00:25:03.361 }, 00:25:03.361 { 00:25:03.361 "name": "BaseBdev4", 00:25:03.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:03.361 "is_configured": false, 00:25:03.361 "data_offset": 0, 00:25:03.361 "data_size": 0 00:25:03.361 } 00:25:03.361 ] 00:25:03.361 }' 00:25:03.361 12:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:03.361 12:07:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:03.938 12:07:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:04.503 [2024-07-21 12:07:03.077670] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:04.503 [2024-07-21 12:07:03.077949] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:25:04.503 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:04.503 [2024-07-21 12:07:03.369815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:04.760 [2024-07-21 12:07:03.372333] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:25:04.760 [2024-07-21 12:07:03.372580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:25:04.760 [2024-07-21 12:07:03.372702] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:25:04.760 [2024-07-21 12:07:03.372774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:25:04.760 [2024-07-21 12:07:03.372881] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:25:04.760 [2024-07-21 12:07:03.372944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.760 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:05.017 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:05.017 "name": "Existed_Raid", 00:25:05.017 "uuid": "ce509845-6040-42bc-8a74-be373dd145f4", 00:25:05.017 "strip_size_kb": 0, 00:25:05.017 "state": "configuring", 00:25:05.017 "raid_level": "raid1", 00:25:05.017 "superblock": true, 00:25:05.017 "num_base_bdevs": 4, 00:25:05.017 "num_base_bdevs_discovered": 1, 00:25:05.017 "num_base_bdevs_operational": 4, 00:25:05.017 "base_bdevs_list": [ 00:25:05.017 { 00:25:05.017 "name": "BaseBdev1", 00:25:05.017 "uuid": "9f3a46da-7b17-4352-8480-0b9f606f2909", 00:25:05.017 "is_configured": true, 00:25:05.017 "data_offset": 2048, 00:25:05.017 "data_size": 63488 00:25:05.017 }, 00:25:05.017 { 00:25:05.017 "name": "BaseBdev2", 00:25:05.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.017 "is_configured": false, 00:25:05.017 "data_offset": 0, 00:25:05.017 "data_size": 0 00:25:05.017 }, 00:25:05.017 { 00:25:05.017 "name": "BaseBdev3", 00:25:05.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.017 "is_configured": false, 00:25:05.017 "data_offset": 0, 00:25:05.017 "data_size": 0 00:25:05.017 }, 00:25:05.017 { 00:25:05.017 "name": "BaseBdev4", 00:25:05.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:05.017 "is_configured": false, 00:25:05.017 
"data_offset": 0, 00:25:05.017 "data_size": 0 00:25:05.017 } 00:25:05.017 ] 00:25:05.017 }' 00:25:05.017 12:07:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:05.017 12:07:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:05.581 12:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:05.839 [2024-07-21 12:07:04.619189] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:05.839 BaseBdev2 00:25:05.839 12:07:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:25:05.839 12:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:05.839 12:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:05.839 12:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:05.839 12:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:05.839 12:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:05.839 12:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:06.095 12:07:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:06.352 [ 00:25:06.352 { 00:25:06.352 "name": "BaseBdev2", 00:25:06.352 "aliases": [ 00:25:06.352 "781e56a5-b14e-4f77-bdf5-43b70cacb214" 00:25:06.352 ], 00:25:06.352 "product_name": "Malloc disk", 00:25:06.352 "block_size": 512, 00:25:06.352 "num_blocks": 65536, 00:25:06.352 "uuid": "781e56a5-b14e-4f77-bdf5-43b70cacb214", 00:25:06.352 "assigned_rate_limits": { 00:25:06.352 "rw_ios_per_sec": 0, 00:25:06.352 "rw_mbytes_per_sec": 0, 00:25:06.352 "r_mbytes_per_sec": 0, 00:25:06.352 "w_mbytes_per_sec": 0 00:25:06.352 }, 00:25:06.352 "claimed": true, 00:25:06.352 "claim_type": "exclusive_write", 00:25:06.352 "zoned": false, 00:25:06.352 "supported_io_types": { 00:25:06.352 "read": true, 00:25:06.352 "write": true, 00:25:06.352 "unmap": true, 00:25:06.352 "write_zeroes": true, 00:25:06.352 "flush": true, 00:25:06.352 "reset": true, 00:25:06.352 "compare": false, 00:25:06.352 "compare_and_write": false, 00:25:06.352 "abort": true, 00:25:06.352 "nvme_admin": false, 00:25:06.352 "nvme_io": false 00:25:06.352 }, 00:25:06.352 "memory_domains": [ 00:25:06.352 { 00:25:06.352 "dma_device_id": "system", 00:25:06.352 "dma_device_type": 1 00:25:06.352 }, 00:25:06.352 { 00:25:06.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:06.352 "dma_device_type": 2 00:25:06.352 } 00:25:06.352 ], 00:25:06.352 "driver_specific": {} 00:25:06.352 } 00:25:06.352 ] 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:06.352 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:06.610 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:06.610 "name": "Existed_Raid", 00:25:06.610 "uuid": "ce509845-6040-42bc-8a74-be373dd145f4", 00:25:06.610 "strip_size_kb": 0, 00:25:06.610 "state": "configuring", 00:25:06.610 "raid_level": "raid1", 00:25:06.610 "superblock": true, 00:25:06.610 "num_base_bdevs": 4, 00:25:06.610 "num_base_bdevs_discovered": 2, 00:25:06.610 "num_base_bdevs_operational": 4, 00:25:06.610 "base_bdevs_list": [ 00:25:06.610 { 00:25:06.610 "name": "BaseBdev1", 00:25:06.610 "uuid": "9f3a46da-7b17-4352-8480-0b9f606f2909", 00:25:06.610 "is_configured": true, 00:25:06.610 "data_offset": 2048, 00:25:06.610 "data_size": 63488 00:25:06.610 }, 00:25:06.610 { 00:25:06.610 "name": "BaseBdev2", 00:25:06.610 "uuid": "781e56a5-b14e-4f77-bdf5-43b70cacb214", 00:25:06.610 "is_configured": true, 00:25:06.610 "data_offset": 2048, 00:25:06.610 "data_size": 63488 00:25:06.610 }, 00:25:06.610 { 00:25:06.610 "name": "BaseBdev3", 00:25:06.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.610 "is_configured": false, 00:25:06.610 "data_offset": 0, 00:25:06.610 "data_size": 0 00:25:06.610 }, 00:25:06.610 { 00:25:06.610 "name": "BaseBdev4", 00:25:06.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.610 "is_configured": false, 00:25:06.610 "data_offset": 0, 00:25:06.610 "data_size": 0 00:25:06.610 } 00:25:06.610 ] 00:25:06.610 }' 00:25:06.610 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:06.610 12:07:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:07.174 12:07:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:07.431 [2024-07-21 12:07:06.256594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:07.431 BaseBdev3 00:25:07.431 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:25:07.431 12:07:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:25:07.431 12:07:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:07.431 12:07:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:07.431 12:07:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:07.431 12:07:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:07.431 12:07:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:07.701 12:07:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:07.958 [ 00:25:07.958 { 00:25:07.958 "name": "BaseBdev3", 00:25:07.958 "aliases": [ 00:25:07.958 "34d0aa99-e52e-4dda-8755-19e0a8295be6" 00:25:07.958 ], 00:25:07.958 "product_name": "Malloc disk", 00:25:07.958 "block_size": 512, 00:25:07.958 "num_blocks": 65536, 00:25:07.958 "uuid": "34d0aa99-e52e-4dda-8755-19e0a8295be6", 00:25:07.958 "assigned_rate_limits": { 00:25:07.958 "rw_ios_per_sec": 0, 00:25:07.958 "rw_mbytes_per_sec": 0, 00:25:07.958 "r_mbytes_per_sec": 0, 00:25:07.958 "w_mbytes_per_sec": 0 00:25:07.958 }, 00:25:07.958 "claimed": true, 00:25:07.958 "claim_type": "exclusive_write", 00:25:07.958 "zoned": false, 00:25:07.958 "supported_io_types": { 00:25:07.958 "read": true, 00:25:07.958 "write": true, 00:25:07.958 "unmap": true, 00:25:07.958 "write_zeroes": true, 00:25:07.958 "flush": true, 00:25:07.958 "reset": true, 00:25:07.958 "compare": false, 00:25:07.958 "compare_and_write": false, 00:25:07.958 "abort": true, 00:25:07.958 "nvme_admin": false, 00:25:07.958 "nvme_io": false 00:25:07.958 }, 00:25:07.958 "memory_domains": [ 00:25:07.958 { 00:25:07.958 "dma_device_id": "system", 00:25:07.958 "dma_device_type": 1 00:25:07.958 }, 00:25:07.958 { 00:25:07.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:07.958 "dma_device_type": 2 00:25:07.958 } 00:25:07.958 ], 00:25:07.958 "driver_specific": {} 00:25:07.958 } 00:25:07.958 ] 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.958 12:07:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:08.216 12:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:08.216 "name": "Existed_Raid", 00:25:08.216 "uuid": "ce509845-6040-42bc-8a74-be373dd145f4", 00:25:08.216 "strip_size_kb": 0, 00:25:08.216 "state": "configuring", 00:25:08.216 "raid_level": "raid1", 00:25:08.216 "superblock": true, 00:25:08.216 "num_base_bdevs": 4, 00:25:08.216 "num_base_bdevs_discovered": 3, 00:25:08.216 "num_base_bdevs_operational": 4, 00:25:08.216 "base_bdevs_list": [ 00:25:08.216 { 00:25:08.216 "name": "BaseBdev1", 00:25:08.216 "uuid": "9f3a46da-7b17-4352-8480-0b9f606f2909", 00:25:08.216 "is_configured": true, 00:25:08.216 "data_offset": 2048, 00:25:08.216 "data_size": 63488 00:25:08.216 }, 00:25:08.216 { 00:25:08.216 "name": "BaseBdev2", 00:25:08.216 "uuid": "781e56a5-b14e-4f77-bdf5-43b70cacb214", 00:25:08.216 "is_configured": true, 00:25:08.216 "data_offset": 2048, 00:25:08.216 "data_size": 63488 00:25:08.216 }, 00:25:08.216 { 00:25:08.216 "name": "BaseBdev3", 00:25:08.216 "uuid": "34d0aa99-e52e-4dda-8755-19e0a8295be6", 00:25:08.216 "is_configured": true, 00:25:08.216 "data_offset": 2048, 00:25:08.216 "data_size": 63488 00:25:08.216 }, 00:25:08.216 { 00:25:08.216 "name": "BaseBdev4", 00:25:08.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.216 "is_configured": false, 00:25:08.216 "data_offset": 0, 00:25:08.216 "data_size": 0 00:25:08.216 } 00:25:08.216 ] 00:25:08.216 }' 00:25:08.216 12:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:08.216 12:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:09.150 12:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:09.150 [2024-07-21 12:07:07.917983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:09.150 [2024-07-21 12:07:07.918628] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:25:09.150 [2024-07-21 12:07:07.918770] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:09.150 [2024-07-21 12:07:07.918978] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:25:09.150 [2024-07-21 12:07:07.919530] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:25:09.150 BaseBdev4 00:25:09.150 [2024-07-21 12:07:07.919712] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:25:09.150 [2024-07-21 12:07:07.919991] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:09.150 12:07:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:25:09.150 12:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 
00:25:09.150 12:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:09.150 12:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:09.150 12:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:09.150 12:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:09.150 12:07:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:09.408 12:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:09.666 [ 00:25:09.666 { 00:25:09.666 "name": "BaseBdev4", 00:25:09.666 "aliases": [ 00:25:09.666 "9605484b-fc66-4fd5-bee7-c1f819aff55a" 00:25:09.666 ], 00:25:09.666 "product_name": "Malloc disk", 00:25:09.666 "block_size": 512, 00:25:09.666 "num_blocks": 65536, 00:25:09.666 "uuid": "9605484b-fc66-4fd5-bee7-c1f819aff55a", 00:25:09.666 "assigned_rate_limits": { 00:25:09.666 "rw_ios_per_sec": 0, 00:25:09.666 "rw_mbytes_per_sec": 0, 00:25:09.666 "r_mbytes_per_sec": 0, 00:25:09.666 "w_mbytes_per_sec": 0 00:25:09.666 }, 00:25:09.666 "claimed": true, 00:25:09.666 "claim_type": "exclusive_write", 00:25:09.666 "zoned": false, 00:25:09.666 "supported_io_types": { 00:25:09.666 "read": true, 00:25:09.666 "write": true, 00:25:09.666 "unmap": true, 00:25:09.667 "write_zeroes": true, 00:25:09.667 "flush": true, 00:25:09.667 "reset": true, 00:25:09.667 "compare": false, 00:25:09.667 "compare_and_write": false, 00:25:09.667 "abort": true, 00:25:09.667 "nvme_admin": false, 00:25:09.667 "nvme_io": false 00:25:09.667 }, 00:25:09.667 "memory_domains": [ 00:25:09.667 { 00:25:09.667 "dma_device_id": "system", 00:25:09.667 "dma_device_type": 1 00:25:09.667 }, 00:25:09.667 { 00:25:09.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:09.667 "dma_device_type": 2 00:25:09.667 } 00:25:09.667 ], 00:25:09.667 "driver_specific": {} 00:25:09.667 } 00:25:09.667 ] 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:09.667 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.924 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:09.924 "name": "Existed_Raid", 00:25:09.924 "uuid": "ce509845-6040-42bc-8a74-be373dd145f4", 00:25:09.924 "strip_size_kb": 0, 00:25:09.924 "state": "online", 00:25:09.924 "raid_level": "raid1", 00:25:09.924 "superblock": true, 00:25:09.924 "num_base_bdevs": 4, 00:25:09.924 "num_base_bdevs_discovered": 4, 00:25:09.924 "num_base_bdevs_operational": 4, 00:25:09.924 "base_bdevs_list": [ 00:25:09.924 { 00:25:09.924 "name": "BaseBdev1", 00:25:09.924 "uuid": "9f3a46da-7b17-4352-8480-0b9f606f2909", 00:25:09.924 "is_configured": true, 00:25:09.925 "data_offset": 2048, 00:25:09.925 "data_size": 63488 00:25:09.925 }, 00:25:09.925 { 00:25:09.925 "name": "BaseBdev2", 00:25:09.925 "uuid": "781e56a5-b14e-4f77-bdf5-43b70cacb214", 00:25:09.925 "is_configured": true, 00:25:09.925 "data_offset": 2048, 00:25:09.925 "data_size": 63488 00:25:09.925 }, 00:25:09.925 { 00:25:09.925 "name": "BaseBdev3", 00:25:09.925 "uuid": "34d0aa99-e52e-4dda-8755-19e0a8295be6", 00:25:09.925 "is_configured": true, 00:25:09.925 "data_offset": 2048, 00:25:09.925 "data_size": 63488 00:25:09.925 }, 00:25:09.925 { 00:25:09.925 "name": "BaseBdev4", 00:25:09.925 "uuid": "9605484b-fc66-4fd5-bee7-c1f819aff55a", 00:25:09.925 "is_configured": true, 00:25:09.925 "data_offset": 2048, 00:25:09.925 "data_size": 63488 00:25:09.925 } 00:25:09.925 ] 00:25:09.925 }' 00:25:09.925 12:07:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:09.925 12:07:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:10.489 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:25:10.489 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:10.489 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:10.489 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:10.746 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:10.746 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:10.746 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:10.746 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:10.746 [2024-07-21 12:07:09.575772] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:10.746 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:10.746 "name": "Existed_Raid", 00:25:10.746 "aliases": [ 00:25:10.746 "ce509845-6040-42bc-8a74-be373dd145f4" 00:25:10.746 ], 00:25:10.746 "product_name": "Raid Volume", 00:25:10.746 "block_size": 512, 
00:25:10.746 "num_blocks": 63488, 00:25:10.746 "uuid": "ce509845-6040-42bc-8a74-be373dd145f4", 00:25:10.746 "assigned_rate_limits": { 00:25:10.746 "rw_ios_per_sec": 0, 00:25:10.746 "rw_mbytes_per_sec": 0, 00:25:10.746 "r_mbytes_per_sec": 0, 00:25:10.746 "w_mbytes_per_sec": 0 00:25:10.746 }, 00:25:10.746 "claimed": false, 00:25:10.746 "zoned": false, 00:25:10.746 "supported_io_types": { 00:25:10.746 "read": true, 00:25:10.746 "write": true, 00:25:10.746 "unmap": false, 00:25:10.746 "write_zeroes": true, 00:25:10.746 "flush": false, 00:25:10.746 "reset": true, 00:25:10.746 "compare": false, 00:25:10.746 "compare_and_write": false, 00:25:10.746 "abort": false, 00:25:10.746 "nvme_admin": false, 00:25:10.746 "nvme_io": false 00:25:10.746 }, 00:25:10.746 "memory_domains": [ 00:25:10.746 { 00:25:10.746 "dma_device_id": "system", 00:25:10.746 "dma_device_type": 1 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.746 "dma_device_type": 2 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "dma_device_id": "system", 00:25:10.746 "dma_device_type": 1 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.746 "dma_device_type": 2 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "dma_device_id": "system", 00:25:10.746 "dma_device_type": 1 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.746 "dma_device_type": 2 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "dma_device_id": "system", 00:25:10.746 "dma_device_type": 1 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:10.746 "dma_device_type": 2 00:25:10.746 } 00:25:10.746 ], 00:25:10.746 "driver_specific": { 00:25:10.746 "raid": { 00:25:10.746 "uuid": "ce509845-6040-42bc-8a74-be373dd145f4", 00:25:10.746 "strip_size_kb": 0, 00:25:10.746 "state": "online", 00:25:10.746 "raid_level": "raid1", 00:25:10.746 "superblock": true, 00:25:10.746 "num_base_bdevs": 4, 00:25:10.746 "num_base_bdevs_discovered": 4, 00:25:10.746 "num_base_bdevs_operational": 4, 00:25:10.746 "base_bdevs_list": [ 00:25:10.746 { 00:25:10.746 "name": "BaseBdev1", 00:25:10.746 "uuid": "9f3a46da-7b17-4352-8480-0b9f606f2909", 00:25:10.746 "is_configured": true, 00:25:10.746 "data_offset": 2048, 00:25:10.746 "data_size": 63488 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "name": "BaseBdev2", 00:25:10.746 "uuid": "781e56a5-b14e-4f77-bdf5-43b70cacb214", 00:25:10.746 "is_configured": true, 00:25:10.746 "data_offset": 2048, 00:25:10.746 "data_size": 63488 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "name": "BaseBdev3", 00:25:10.746 "uuid": "34d0aa99-e52e-4dda-8755-19e0a8295be6", 00:25:10.746 "is_configured": true, 00:25:10.746 "data_offset": 2048, 00:25:10.746 "data_size": 63488 00:25:10.746 }, 00:25:10.746 { 00:25:10.746 "name": "BaseBdev4", 00:25:10.746 "uuid": "9605484b-fc66-4fd5-bee7-c1f819aff55a", 00:25:10.746 "is_configured": true, 00:25:10.746 "data_offset": 2048, 00:25:10.746 "data_size": 63488 00:25:10.746 } 00:25:10.746 ] 00:25:10.746 } 00:25:10.746 } 00:25:10.746 }' 00:25:10.746 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:11.003 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:25:11.003 BaseBdev2 00:25:11.003 BaseBdev3 00:25:11.003 BaseBdev4' 00:25:11.003 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:25:11.003 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:25:11.003 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:11.273 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:11.273 "name": "BaseBdev1", 00:25:11.273 "aliases": [ 00:25:11.273 "9f3a46da-7b17-4352-8480-0b9f606f2909" 00:25:11.273 ], 00:25:11.273 "product_name": "Malloc disk", 00:25:11.273 "block_size": 512, 00:25:11.273 "num_blocks": 65536, 00:25:11.273 "uuid": "9f3a46da-7b17-4352-8480-0b9f606f2909", 00:25:11.273 "assigned_rate_limits": { 00:25:11.273 "rw_ios_per_sec": 0, 00:25:11.273 "rw_mbytes_per_sec": 0, 00:25:11.273 "r_mbytes_per_sec": 0, 00:25:11.273 "w_mbytes_per_sec": 0 00:25:11.273 }, 00:25:11.273 "claimed": true, 00:25:11.273 "claim_type": "exclusive_write", 00:25:11.273 "zoned": false, 00:25:11.273 "supported_io_types": { 00:25:11.273 "read": true, 00:25:11.273 "write": true, 00:25:11.273 "unmap": true, 00:25:11.273 "write_zeroes": true, 00:25:11.273 "flush": true, 00:25:11.273 "reset": true, 00:25:11.273 "compare": false, 00:25:11.273 "compare_and_write": false, 00:25:11.273 "abort": true, 00:25:11.273 "nvme_admin": false, 00:25:11.273 "nvme_io": false 00:25:11.273 }, 00:25:11.273 "memory_domains": [ 00:25:11.273 { 00:25:11.273 "dma_device_id": "system", 00:25:11.273 "dma_device_type": 1 00:25:11.274 }, 00:25:11.274 { 00:25:11.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.274 "dma_device_type": 2 00:25:11.274 } 00:25:11.274 ], 00:25:11.274 "driver_specific": {} 00:25:11.274 }' 00:25:11.274 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:11.274 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:11.274 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:11.274 12:07:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:11.274 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:11.274 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:11.274 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:11.274 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:11.531 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:11.531 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:11.531 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:11.531 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:11.531 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:11.531 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:11.531 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:11.789 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:11.789 "name": "BaseBdev2", 
00:25:11.789 "aliases": [ 00:25:11.789 "781e56a5-b14e-4f77-bdf5-43b70cacb214" 00:25:11.789 ], 00:25:11.789 "product_name": "Malloc disk", 00:25:11.789 "block_size": 512, 00:25:11.789 "num_blocks": 65536, 00:25:11.789 "uuid": "781e56a5-b14e-4f77-bdf5-43b70cacb214", 00:25:11.789 "assigned_rate_limits": { 00:25:11.789 "rw_ios_per_sec": 0, 00:25:11.789 "rw_mbytes_per_sec": 0, 00:25:11.789 "r_mbytes_per_sec": 0, 00:25:11.789 "w_mbytes_per_sec": 0 00:25:11.789 }, 00:25:11.789 "claimed": true, 00:25:11.789 "claim_type": "exclusive_write", 00:25:11.789 "zoned": false, 00:25:11.789 "supported_io_types": { 00:25:11.789 "read": true, 00:25:11.789 "write": true, 00:25:11.789 "unmap": true, 00:25:11.789 "write_zeroes": true, 00:25:11.789 "flush": true, 00:25:11.789 "reset": true, 00:25:11.789 "compare": false, 00:25:11.789 "compare_and_write": false, 00:25:11.789 "abort": true, 00:25:11.789 "nvme_admin": false, 00:25:11.789 "nvme_io": false 00:25:11.789 }, 00:25:11.789 "memory_domains": [ 00:25:11.789 { 00:25:11.789 "dma_device_id": "system", 00:25:11.789 "dma_device_type": 1 00:25:11.789 }, 00:25:11.789 { 00:25:11.789 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:11.789 "dma_device_type": 2 00:25:11.789 } 00:25:11.789 ], 00:25:11.789 "driver_specific": {} 00:25:11.789 }' 00:25:11.789 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:11.789 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.047 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.047 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.047 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.047 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:12.047 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.047 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.047 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:12.047 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.047 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.304 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:12.304 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:12.304 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:12.304 12:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:12.562 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:12.562 "name": "BaseBdev3", 00:25:12.562 "aliases": [ 00:25:12.562 "34d0aa99-e52e-4dda-8755-19e0a8295be6" 00:25:12.562 ], 00:25:12.562 "product_name": "Malloc disk", 00:25:12.562 "block_size": 512, 00:25:12.562 "num_blocks": 65536, 00:25:12.562 "uuid": "34d0aa99-e52e-4dda-8755-19e0a8295be6", 00:25:12.562 "assigned_rate_limits": { 00:25:12.562 "rw_ios_per_sec": 0, 00:25:12.562 "rw_mbytes_per_sec": 0, 00:25:12.562 "r_mbytes_per_sec": 0, 00:25:12.562 "w_mbytes_per_sec": 0 
00:25:12.562 }, 00:25:12.562 "claimed": true, 00:25:12.562 "claim_type": "exclusive_write", 00:25:12.562 "zoned": false, 00:25:12.562 "supported_io_types": { 00:25:12.562 "read": true, 00:25:12.562 "write": true, 00:25:12.562 "unmap": true, 00:25:12.562 "write_zeroes": true, 00:25:12.562 "flush": true, 00:25:12.562 "reset": true, 00:25:12.562 "compare": false, 00:25:12.562 "compare_and_write": false, 00:25:12.562 "abort": true, 00:25:12.562 "nvme_admin": false, 00:25:12.562 "nvme_io": false 00:25:12.562 }, 00:25:12.562 "memory_domains": [ 00:25:12.562 { 00:25:12.562 "dma_device_id": "system", 00:25:12.562 "dma_device_type": 1 00:25:12.562 }, 00:25:12.562 { 00:25:12.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:12.562 "dma_device_type": 2 00:25:12.562 } 00:25:12.562 ], 00:25:12.562 "driver_specific": {} 00:25:12.562 }' 00:25:12.562 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.562 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:12.562 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:12.562 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.562 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:12.819 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:13.077 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:13.077 "name": "BaseBdev4", 00:25:13.077 "aliases": [ 00:25:13.077 "9605484b-fc66-4fd5-bee7-c1f819aff55a" 00:25:13.077 ], 00:25:13.077 "product_name": "Malloc disk", 00:25:13.077 "block_size": 512, 00:25:13.077 "num_blocks": 65536, 00:25:13.077 "uuid": "9605484b-fc66-4fd5-bee7-c1f819aff55a", 00:25:13.077 "assigned_rate_limits": { 00:25:13.077 "rw_ios_per_sec": 0, 00:25:13.077 "rw_mbytes_per_sec": 0, 00:25:13.077 "r_mbytes_per_sec": 0, 00:25:13.077 "w_mbytes_per_sec": 0 00:25:13.077 }, 00:25:13.077 "claimed": true, 00:25:13.077 "claim_type": "exclusive_write", 00:25:13.077 "zoned": false, 00:25:13.077 "supported_io_types": { 00:25:13.077 "read": true, 00:25:13.077 "write": true, 00:25:13.077 "unmap": true, 00:25:13.077 "write_zeroes": true, 00:25:13.077 "flush": true, 00:25:13.077 "reset": true, 00:25:13.077 "compare": false, 00:25:13.077 "compare_and_write": false, 00:25:13.077 "abort": true, 00:25:13.077 
"nvme_admin": false, 00:25:13.077 "nvme_io": false 00:25:13.077 }, 00:25:13.077 "memory_domains": [ 00:25:13.077 { 00:25:13.077 "dma_device_id": "system", 00:25:13.077 "dma_device_type": 1 00:25:13.077 }, 00:25:13.077 { 00:25:13.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:13.077 "dma_device_type": 2 00:25:13.077 } 00:25:13.077 ], 00:25:13.077 "driver_specific": {} 00:25:13.077 }' 00:25:13.077 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.077 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:13.336 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:13.336 12:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.336 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:13.336 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:13.336 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.336 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:13.336 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:13.336 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.594 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:13.594 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:13.594 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:13.852 [2024-07-21 12:07:12.519693] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.852 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:14.109 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:14.109 "name": "Existed_Raid", 00:25:14.109 "uuid": "ce509845-6040-42bc-8a74-be373dd145f4", 00:25:14.109 "strip_size_kb": 0, 00:25:14.109 "state": "online", 00:25:14.109 "raid_level": "raid1", 00:25:14.109 "superblock": true, 00:25:14.109 "num_base_bdevs": 4, 00:25:14.109 "num_base_bdevs_discovered": 3, 00:25:14.109 "num_base_bdevs_operational": 3, 00:25:14.109 "base_bdevs_list": [ 00:25:14.109 { 00:25:14.109 "name": null, 00:25:14.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:14.109 "is_configured": false, 00:25:14.109 "data_offset": 2048, 00:25:14.109 "data_size": 63488 00:25:14.109 }, 00:25:14.109 { 00:25:14.109 "name": "BaseBdev2", 00:25:14.109 "uuid": "781e56a5-b14e-4f77-bdf5-43b70cacb214", 00:25:14.109 "is_configured": true, 00:25:14.109 "data_offset": 2048, 00:25:14.109 "data_size": 63488 00:25:14.109 }, 00:25:14.109 { 00:25:14.109 "name": "BaseBdev3", 00:25:14.109 "uuid": "34d0aa99-e52e-4dda-8755-19e0a8295be6", 00:25:14.109 "is_configured": true, 00:25:14.109 "data_offset": 2048, 00:25:14.109 "data_size": 63488 00:25:14.109 }, 00:25:14.109 { 00:25:14.109 "name": "BaseBdev4", 00:25:14.109 "uuid": "9605484b-fc66-4fd5-bee7-c1f819aff55a", 00:25:14.109 "is_configured": true, 00:25:14.109 "data_offset": 2048, 00:25:14.109 "data_size": 63488 00:25:14.109 } 00:25:14.109 ] 00:25:14.109 }' 00:25:14.109 12:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:14.109 12:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:14.672 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:25:14.672 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:14.672 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:14.672 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:14.929 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:14.929 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:14.929 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:25:15.185 [2024-07-21 12:07:13.936721] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:15.185 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:15.185 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.185 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:25:15.185 12:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:15.442 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:15.442 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:15.442 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:25:15.698 [2024-07-21 12:07:14.453437] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:15.698 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:15.698 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:15.698 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.698 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:25:15.956 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:25:15.956 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:25:15.956 12:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:25:16.213 [2024-07-21 12:07:15.011170] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:25:16.213 [2024-07-21 12:07:15.011616] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:16.213 [2024-07-21 12:07:15.025667] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:16.213 [2024-07-21 12:07:15.026026] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:16.213 [2024-07-21 12:07:15.026151] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:25:16.213 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:25:16.213 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:25:16.213 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.213 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:25:16.471 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:25:16.471 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:25:16.471 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:25:16.471 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:25:16.471 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:16.471 12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:16.728 BaseBdev2 00:25:16.728 
12:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:25:16.728 12:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:25:16.728 12:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:16.728 12:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:16.728 12:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:16.728 12:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:16.728 12:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:16.986 12:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:25:17.243 [ 00:25:17.243 { 00:25:17.243 "name": "BaseBdev2", 00:25:17.243 "aliases": [ 00:25:17.243 "79c01531-5aff-472d-b1bb-b08ff6e0383d" 00:25:17.243 ], 00:25:17.243 "product_name": "Malloc disk", 00:25:17.243 "block_size": 512, 00:25:17.243 "num_blocks": 65536, 00:25:17.243 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:17.243 "assigned_rate_limits": { 00:25:17.243 "rw_ios_per_sec": 0, 00:25:17.243 "rw_mbytes_per_sec": 0, 00:25:17.243 "r_mbytes_per_sec": 0, 00:25:17.243 "w_mbytes_per_sec": 0 00:25:17.243 }, 00:25:17.243 "claimed": false, 00:25:17.243 "zoned": false, 00:25:17.243 "supported_io_types": { 00:25:17.243 "read": true, 00:25:17.243 "write": true, 00:25:17.243 "unmap": true, 00:25:17.243 "write_zeroes": true, 00:25:17.243 "flush": true, 00:25:17.243 "reset": true, 00:25:17.243 "compare": false, 00:25:17.243 "compare_and_write": false, 00:25:17.243 "abort": true, 00:25:17.243 "nvme_admin": false, 00:25:17.243 "nvme_io": false 00:25:17.243 }, 00:25:17.244 "memory_domains": [ 00:25:17.244 { 00:25:17.244 "dma_device_id": "system", 00:25:17.244 "dma_device_type": 1 00:25:17.244 }, 00:25:17.244 { 00:25:17.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:17.244 "dma_device_type": 2 00:25:17.244 } 00:25:17.244 ], 00:25:17.244 "driver_specific": {} 00:25:17.244 } 00:25:17.244 ] 00:25:17.244 12:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:17.244 12:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:17.244 12:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:17.244 12:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:17.501 BaseBdev3 00:25:17.501 12:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:25:17.501 12:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:25:17.501 12:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:17.501 12:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:17.501 12:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:17.501 12:07:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:17.501 12:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:17.758 12:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:25:18.016 [ 00:25:18.016 { 00:25:18.016 "name": "BaseBdev3", 00:25:18.016 "aliases": [ 00:25:18.016 "58e5baa4-4292-47ca-88e2-b322a99786b2" 00:25:18.016 ], 00:25:18.016 "product_name": "Malloc disk", 00:25:18.016 "block_size": 512, 00:25:18.016 "num_blocks": 65536, 00:25:18.016 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:18.016 "assigned_rate_limits": { 00:25:18.016 "rw_ios_per_sec": 0, 00:25:18.016 "rw_mbytes_per_sec": 0, 00:25:18.016 "r_mbytes_per_sec": 0, 00:25:18.016 "w_mbytes_per_sec": 0 00:25:18.016 }, 00:25:18.016 "claimed": false, 00:25:18.016 "zoned": false, 00:25:18.016 "supported_io_types": { 00:25:18.016 "read": true, 00:25:18.016 "write": true, 00:25:18.016 "unmap": true, 00:25:18.016 "write_zeroes": true, 00:25:18.016 "flush": true, 00:25:18.016 "reset": true, 00:25:18.016 "compare": false, 00:25:18.016 "compare_and_write": false, 00:25:18.016 "abort": true, 00:25:18.016 "nvme_admin": false, 00:25:18.016 "nvme_io": false 00:25:18.016 }, 00:25:18.016 "memory_domains": [ 00:25:18.016 { 00:25:18.016 "dma_device_id": "system", 00:25:18.016 "dma_device_type": 1 00:25:18.016 }, 00:25:18.016 { 00:25:18.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.016 "dma_device_type": 2 00:25:18.016 } 00:25:18.016 ], 00:25:18.016 "driver_specific": {} 00:25:18.016 } 00:25:18.016 ] 00:25:18.016 12:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:18.016 12:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:18.016 12:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:18.016 12:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:18.274 BaseBdev4 00:25:18.274 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:25:18.274 12:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:25:18.274 12:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:18.274 12:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:18.274 12:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:18.274 12:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:18.274 12:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:18.532 12:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:25:18.790 [ 00:25:18.790 { 00:25:18.790 "name": "BaseBdev4", 00:25:18.790 "aliases": [ 00:25:18.790 
"dcf0fa4c-712a-45bb-9700-5edc5bbf969b" 00:25:18.790 ], 00:25:18.790 "product_name": "Malloc disk", 00:25:18.790 "block_size": 512, 00:25:18.790 "num_blocks": 65536, 00:25:18.790 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:18.790 "assigned_rate_limits": { 00:25:18.790 "rw_ios_per_sec": 0, 00:25:18.790 "rw_mbytes_per_sec": 0, 00:25:18.790 "r_mbytes_per_sec": 0, 00:25:18.790 "w_mbytes_per_sec": 0 00:25:18.790 }, 00:25:18.790 "claimed": false, 00:25:18.790 "zoned": false, 00:25:18.790 "supported_io_types": { 00:25:18.790 "read": true, 00:25:18.790 "write": true, 00:25:18.790 "unmap": true, 00:25:18.790 "write_zeroes": true, 00:25:18.790 "flush": true, 00:25:18.790 "reset": true, 00:25:18.790 "compare": false, 00:25:18.790 "compare_and_write": false, 00:25:18.790 "abort": true, 00:25:18.790 "nvme_admin": false, 00:25:18.790 "nvme_io": false 00:25:18.790 }, 00:25:18.790 "memory_domains": [ 00:25:18.790 { 00:25:18.790 "dma_device_id": "system", 00:25:18.790 "dma_device_type": 1 00:25:18.790 }, 00:25:18.790 { 00:25:18.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:18.790 "dma_device_type": 2 00:25:18.790 } 00:25:18.790 ], 00:25:18.790 "driver_specific": {} 00:25:18.790 } 00:25:18.790 ] 00:25:18.790 12:07:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:18.790 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:25:18.790 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:25:18.790 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:25:19.048 [2024-07-21 12:07:17.823663] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:25:19.048 [2024-07-21 12:07:17.824584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:25:19.048 [2024-07-21 12:07:17.824792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:19.048 [2024-07-21 12:07:17.826984] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:19.048 [2024-07-21 12:07:17.827196] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:19.048 12:07:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.048 12:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:19.306 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:19.306 "name": "Existed_Raid", 00:25:19.306 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:19.306 "strip_size_kb": 0, 00:25:19.306 "state": "configuring", 00:25:19.306 "raid_level": "raid1", 00:25:19.306 "superblock": true, 00:25:19.306 "num_base_bdevs": 4, 00:25:19.306 "num_base_bdevs_discovered": 3, 00:25:19.306 "num_base_bdevs_operational": 4, 00:25:19.306 "base_bdevs_list": [ 00:25:19.306 { 00:25:19.306 "name": "BaseBdev1", 00:25:19.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:19.306 "is_configured": false, 00:25:19.306 "data_offset": 0, 00:25:19.306 "data_size": 0 00:25:19.306 }, 00:25:19.306 { 00:25:19.306 "name": "BaseBdev2", 00:25:19.306 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:19.306 "is_configured": true, 00:25:19.306 "data_offset": 2048, 00:25:19.306 "data_size": 63488 00:25:19.306 }, 00:25:19.306 { 00:25:19.306 "name": "BaseBdev3", 00:25:19.307 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:19.307 "is_configured": true, 00:25:19.307 "data_offset": 2048, 00:25:19.307 "data_size": 63488 00:25:19.307 }, 00:25:19.307 { 00:25:19.307 "name": "BaseBdev4", 00:25:19.307 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:19.307 "is_configured": true, 00:25:19.307 "data_offset": 2048, 00:25:19.307 "data_size": 63488 00:25:19.307 } 00:25:19.307 ] 00:25:19.307 }' 00:25:19.307 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:19.307 12:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:19.873 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:25:20.131 [2024-07-21 12:07:18.951907] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:20.131 
12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.131 12:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:20.390 12:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:20.390 "name": "Existed_Raid", 00:25:20.390 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:20.390 "strip_size_kb": 0, 00:25:20.390 "state": "configuring", 00:25:20.390 "raid_level": "raid1", 00:25:20.390 "superblock": true, 00:25:20.390 "num_base_bdevs": 4, 00:25:20.390 "num_base_bdevs_discovered": 2, 00:25:20.390 "num_base_bdevs_operational": 4, 00:25:20.390 "base_bdevs_list": [ 00:25:20.390 { 00:25:20.390 "name": "BaseBdev1", 00:25:20.390 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:20.390 "is_configured": false, 00:25:20.390 "data_offset": 0, 00:25:20.390 "data_size": 0 00:25:20.390 }, 00:25:20.390 { 00:25:20.390 "name": null, 00:25:20.390 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:20.390 "is_configured": false, 00:25:20.390 "data_offset": 2048, 00:25:20.390 "data_size": 63488 00:25:20.390 }, 00:25:20.390 { 00:25:20.390 "name": "BaseBdev3", 00:25:20.390 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:20.390 "is_configured": true, 00:25:20.390 "data_offset": 2048, 00:25:20.390 "data_size": 63488 00:25:20.390 }, 00:25:20.390 { 00:25:20.390 "name": "BaseBdev4", 00:25:20.390 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:20.390 "is_configured": true, 00:25:20.390 "data_offset": 2048, 00:25:20.390 "data_size": 63488 00:25:20.390 } 00:25:20.390 ] 00:25:20.390 }' 00:25:20.390 12:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:20.390 12:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:21.323 12:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:21.323 12:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.323 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:25:21.323 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:21.581 [2024-07-21 12:07:20.349249] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:21.581 BaseBdev1 00:25:21.581 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:25:21.581 12:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:25:21.581 12:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:21.581 12:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:21.581 12:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:21.581 12:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:21.581 12:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:21.839 12:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:25:22.096 [ 00:25:22.096 { 00:25:22.096 "name": "BaseBdev1", 00:25:22.096 "aliases": [ 00:25:22.096 "7594dc67-2ea4-447d-8cbc-9d367b8d6bac" 00:25:22.096 ], 00:25:22.096 "product_name": "Malloc disk", 00:25:22.096 "block_size": 512, 00:25:22.096 "num_blocks": 65536, 00:25:22.096 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:22.096 "assigned_rate_limits": { 00:25:22.096 "rw_ios_per_sec": 0, 00:25:22.096 "rw_mbytes_per_sec": 0, 00:25:22.096 "r_mbytes_per_sec": 0, 00:25:22.096 "w_mbytes_per_sec": 0 00:25:22.096 }, 00:25:22.096 "claimed": true, 00:25:22.096 "claim_type": "exclusive_write", 00:25:22.096 "zoned": false, 00:25:22.096 "supported_io_types": { 00:25:22.096 "read": true, 00:25:22.096 "write": true, 00:25:22.096 "unmap": true, 00:25:22.096 "write_zeroes": true, 00:25:22.096 "flush": true, 00:25:22.096 "reset": true, 00:25:22.096 "compare": false, 00:25:22.096 "compare_and_write": false, 00:25:22.096 "abort": true, 00:25:22.096 "nvme_admin": false, 00:25:22.096 "nvme_io": false 00:25:22.096 }, 00:25:22.096 "memory_domains": [ 00:25:22.096 { 00:25:22.096 "dma_device_id": "system", 00:25:22.096 "dma_device_type": 1 00:25:22.096 }, 00:25:22.096 { 00:25:22.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:22.096 "dma_device_type": 2 00:25:22.096 } 00:25:22.096 ], 00:25:22.096 "driver_specific": {} 00:25:22.096 } 00:25:22.096 ] 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.096 12:07:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:22.353 12:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:22.353 "name": "Existed_Raid", 00:25:22.353 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:22.353 "strip_size_kb": 0, 00:25:22.353 "state": "configuring", 
00:25:22.353 "raid_level": "raid1", 00:25:22.353 "superblock": true, 00:25:22.353 "num_base_bdevs": 4, 00:25:22.353 "num_base_bdevs_discovered": 3, 00:25:22.353 "num_base_bdevs_operational": 4, 00:25:22.353 "base_bdevs_list": [ 00:25:22.353 { 00:25:22.353 "name": "BaseBdev1", 00:25:22.353 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:22.353 "is_configured": true, 00:25:22.353 "data_offset": 2048, 00:25:22.353 "data_size": 63488 00:25:22.353 }, 00:25:22.353 { 00:25:22.353 "name": null, 00:25:22.353 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:22.353 "is_configured": false, 00:25:22.353 "data_offset": 2048, 00:25:22.353 "data_size": 63488 00:25:22.353 }, 00:25:22.353 { 00:25:22.353 "name": "BaseBdev3", 00:25:22.353 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:22.353 "is_configured": true, 00:25:22.353 "data_offset": 2048, 00:25:22.353 "data_size": 63488 00:25:22.353 }, 00:25:22.353 { 00:25:22.353 "name": "BaseBdev4", 00:25:22.353 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:22.353 "is_configured": true, 00:25:22.353 "data_offset": 2048, 00:25:22.353 "data_size": 63488 00:25:22.353 } 00:25:22.353 ] 00:25:22.353 }' 00:25:22.353 12:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:22.353 12:07:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:22.917 12:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.917 12:07:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:23.173 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:25:23.173 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:25:23.430 [2024-07-21 12:07:22.261758] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:23.430 12:07:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:23.995 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:23.995 "name": "Existed_Raid", 00:25:23.995 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:23.995 "strip_size_kb": 0, 00:25:23.995 "state": "configuring", 00:25:23.995 "raid_level": "raid1", 00:25:23.995 "superblock": true, 00:25:23.995 "num_base_bdevs": 4, 00:25:23.995 "num_base_bdevs_discovered": 2, 00:25:23.995 "num_base_bdevs_operational": 4, 00:25:23.995 "base_bdevs_list": [ 00:25:23.995 { 00:25:23.995 "name": "BaseBdev1", 00:25:23.995 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:23.995 "is_configured": true, 00:25:23.995 "data_offset": 2048, 00:25:23.995 "data_size": 63488 00:25:23.995 }, 00:25:23.995 { 00:25:23.995 "name": null, 00:25:23.995 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:23.995 "is_configured": false, 00:25:23.995 "data_offset": 2048, 00:25:23.995 "data_size": 63488 00:25:23.995 }, 00:25:23.995 { 00:25:23.995 "name": null, 00:25:23.995 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:23.995 "is_configured": false, 00:25:23.995 "data_offset": 2048, 00:25:23.995 "data_size": 63488 00:25:23.995 }, 00:25:23.995 { 00:25:23.995 "name": "BaseBdev4", 00:25:23.995 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:23.995 "is_configured": true, 00:25:23.995 "data_offset": 2048, 00:25:23.995 "data_size": 63488 00:25:23.995 } 00:25:23.995 ] 00:25:23.995 }' 00:25:23.995 12:07:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:23.995 12:07:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:24.559 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:24.559 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:24.816 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:25:24.816 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:25:25.072 [2024-07-21 12:07:23.690530] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:25.072 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:25.329 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:25.329 "name": "Existed_Raid", 00:25:25.329 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:25.329 "strip_size_kb": 0, 00:25:25.329 "state": "configuring", 00:25:25.329 "raid_level": "raid1", 00:25:25.329 "superblock": true, 00:25:25.329 "num_base_bdevs": 4, 00:25:25.329 "num_base_bdevs_discovered": 3, 00:25:25.329 "num_base_bdevs_operational": 4, 00:25:25.329 "base_bdevs_list": [ 00:25:25.329 { 00:25:25.329 "name": "BaseBdev1", 00:25:25.329 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:25.329 "is_configured": true, 00:25:25.329 "data_offset": 2048, 00:25:25.329 "data_size": 63488 00:25:25.329 }, 00:25:25.329 { 00:25:25.329 "name": null, 00:25:25.329 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:25.329 "is_configured": false, 00:25:25.329 "data_offset": 2048, 00:25:25.329 "data_size": 63488 00:25:25.329 }, 00:25:25.329 { 00:25:25.329 "name": "BaseBdev3", 00:25:25.329 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:25.329 "is_configured": true, 00:25:25.329 "data_offset": 2048, 00:25:25.329 "data_size": 63488 00:25:25.329 }, 00:25:25.329 { 00:25:25.329 "name": "BaseBdev4", 00:25:25.329 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:25.329 "is_configured": true, 00:25:25.329 "data_offset": 2048, 00:25:25.329 "data_size": 63488 00:25:25.329 } 00:25:25.329 ] 00:25:25.329 }' 00:25:25.329 12:07:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:25.329 12:07:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:25.892 12:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:25:25.892 12:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.150 12:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:25:26.150 12:07:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:25:26.407 [2024-07-21 12:07:25.231240] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=4 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:26.407 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:26.665 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:26.665 "name": "Existed_Raid", 00:25:26.665 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:26.665 "strip_size_kb": 0, 00:25:26.665 "state": "configuring", 00:25:26.665 "raid_level": "raid1", 00:25:26.665 "superblock": true, 00:25:26.665 "num_base_bdevs": 4, 00:25:26.665 "num_base_bdevs_discovered": 2, 00:25:26.665 "num_base_bdevs_operational": 4, 00:25:26.665 "base_bdevs_list": [ 00:25:26.665 { 00:25:26.665 "name": null, 00:25:26.665 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:26.665 "is_configured": false, 00:25:26.665 "data_offset": 2048, 00:25:26.665 "data_size": 63488 00:25:26.665 }, 00:25:26.665 { 00:25:26.665 "name": null, 00:25:26.665 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:26.665 "is_configured": false, 00:25:26.665 "data_offset": 2048, 00:25:26.665 "data_size": 63488 00:25:26.665 }, 00:25:26.665 { 00:25:26.665 "name": "BaseBdev3", 00:25:26.665 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:26.665 "is_configured": true, 00:25:26.665 "data_offset": 2048, 00:25:26.665 "data_size": 63488 00:25:26.665 }, 00:25:26.665 { 00:25:26.665 "name": "BaseBdev4", 00:25:26.665 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:26.665 "is_configured": true, 00:25:26.665 "data_offset": 2048, 00:25:26.665 "data_size": 63488 00:25:26.665 } 00:25:26.665 ] 00:25:26.665 }' 00:25:26.665 12:07:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:26.665 12:07:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:27.597 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.597 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:25:27.597 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:25:27.597 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:25:27.855 [2024-07-21 12:07:26.643066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:27.855 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:28.113 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:28.113 "name": "Existed_Raid", 00:25:28.113 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:28.113 "strip_size_kb": 0, 00:25:28.113 "state": "configuring", 00:25:28.113 "raid_level": "raid1", 00:25:28.113 "superblock": true, 00:25:28.113 "num_base_bdevs": 4, 00:25:28.113 "num_base_bdevs_discovered": 3, 00:25:28.113 "num_base_bdevs_operational": 4, 00:25:28.113 "base_bdevs_list": [ 00:25:28.113 { 00:25:28.113 "name": null, 00:25:28.113 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:28.113 "is_configured": false, 00:25:28.113 "data_offset": 2048, 00:25:28.113 "data_size": 63488 00:25:28.113 }, 00:25:28.113 { 00:25:28.113 "name": "BaseBdev2", 00:25:28.113 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:28.113 "is_configured": true, 00:25:28.113 "data_offset": 2048, 00:25:28.113 "data_size": 63488 00:25:28.113 }, 00:25:28.113 { 00:25:28.113 "name": "BaseBdev3", 00:25:28.113 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:28.113 "is_configured": true, 00:25:28.113 "data_offset": 2048, 00:25:28.113 "data_size": 63488 00:25:28.113 }, 00:25:28.113 { 00:25:28.113 "name": "BaseBdev4", 00:25:28.113 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:28.113 "is_configured": true, 00:25:28.113 "data_offset": 2048, 00:25:28.113 "data_size": 63488 00:25:28.113 } 00:25:28.113 ] 00:25:28.113 }' 00:25:28.113 12:07:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:28.113 12:07:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:28.679 12:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.679 12:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:25:28.937 12:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:25:28.937 12:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.937 12:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:25:29.194 12:07:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 7594dc67-2ea4-447d-8cbc-9d367b8d6bac 00:25:29.451 [2024-07-21 12:07:28.248234] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:25:29.451 [2024-07-21 12:07:28.248811] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:25:29.451 [2024-07-21 12:07:28.248954] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:29.451 [2024-07-21 12:07:28.249089] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:25:29.451 [2024-07-21 12:07:28.249585] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:25:29.451 NewBaseBdev 00:25:29.451 [2024-07-21 12:07:28.249746] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009080 00:25:29.451 [2024-07-21 12:07:28.249877] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:29.451 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:25:29.451 12:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:25:29.451 12:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:25:29.451 12:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:25:29.451 12:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:25:29.451 12:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:25:29.451 12:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:25:29.709 12:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:25:29.965 [ 00:25:29.965 { 00:25:29.965 "name": "NewBaseBdev", 00:25:29.965 "aliases": [ 00:25:29.965 "7594dc67-2ea4-447d-8cbc-9d367b8d6bac" 00:25:29.965 ], 00:25:29.965 "product_name": "Malloc disk", 00:25:29.965 "block_size": 512, 00:25:29.965 "num_blocks": 65536, 00:25:29.965 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:29.965 "assigned_rate_limits": { 00:25:29.965 "rw_ios_per_sec": 0, 00:25:29.965 "rw_mbytes_per_sec": 0, 00:25:29.965 "r_mbytes_per_sec": 0, 00:25:29.965 "w_mbytes_per_sec": 0 00:25:29.965 }, 00:25:29.965 "claimed": true, 00:25:29.965 "claim_type": "exclusive_write", 00:25:29.965 "zoned": false, 00:25:29.965 "supported_io_types": { 00:25:29.965 "read": true, 00:25:29.965 "write": true, 00:25:29.965 "unmap": true, 00:25:29.965 "write_zeroes": true, 00:25:29.965 "flush": true, 00:25:29.965 "reset": true, 00:25:29.965 "compare": false, 00:25:29.965 "compare_and_write": false, 00:25:29.965 "abort": true, 00:25:29.965 "nvme_admin": false, 00:25:29.965 "nvme_io": false 00:25:29.965 }, 00:25:29.965 "memory_domains": [ 00:25:29.965 { 00:25:29.965 "dma_device_id": "system", 00:25:29.965 "dma_device_type": 1 00:25:29.965 }, 00:25:29.966 { 00:25:29.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:29.966 "dma_device_type": 
2 00:25:29.966 } 00:25:29.966 ], 00:25:29.966 "driver_specific": {} 00:25:29.966 } 00:25:29.966 ] 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.966 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:25:30.222 12:07:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:30.222 "name": "Existed_Raid", 00:25:30.222 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:30.222 "strip_size_kb": 0, 00:25:30.222 "state": "online", 00:25:30.222 "raid_level": "raid1", 00:25:30.222 "superblock": true, 00:25:30.222 "num_base_bdevs": 4, 00:25:30.222 "num_base_bdevs_discovered": 4, 00:25:30.222 "num_base_bdevs_operational": 4, 00:25:30.222 "base_bdevs_list": [ 00:25:30.222 { 00:25:30.222 "name": "NewBaseBdev", 00:25:30.222 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:30.222 "is_configured": true, 00:25:30.222 "data_offset": 2048, 00:25:30.222 "data_size": 63488 00:25:30.222 }, 00:25:30.222 { 00:25:30.222 "name": "BaseBdev2", 00:25:30.222 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:30.222 "is_configured": true, 00:25:30.222 "data_offset": 2048, 00:25:30.222 "data_size": 63488 00:25:30.222 }, 00:25:30.222 { 00:25:30.222 "name": "BaseBdev3", 00:25:30.222 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:30.222 "is_configured": true, 00:25:30.222 "data_offset": 2048, 00:25:30.222 "data_size": 63488 00:25:30.222 }, 00:25:30.223 { 00:25:30.223 "name": "BaseBdev4", 00:25:30.223 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:30.223 "is_configured": true, 00:25:30.223 "data_offset": 2048, 00:25:30.223 "data_size": 63488 00:25:30.223 } 00:25:30.223 ] 00:25:30.223 }' 00:25:30.223 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:30.223 12:07:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:30.786 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:25:30.786 12:07:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:25:30.786 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:30.786 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:30.786 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:30.786 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:25:30.786 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:25:30.786 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:31.044 [2024-07-21 12:07:29.876995] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:31.044 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:31.044 "name": "Existed_Raid", 00:25:31.044 "aliases": [ 00:25:31.044 "778ce699-9e72-4ac6-9b89-849b64a16e05" 00:25:31.044 ], 00:25:31.044 "product_name": "Raid Volume", 00:25:31.044 "block_size": 512, 00:25:31.044 "num_blocks": 63488, 00:25:31.044 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:31.044 "assigned_rate_limits": { 00:25:31.044 "rw_ios_per_sec": 0, 00:25:31.044 "rw_mbytes_per_sec": 0, 00:25:31.044 "r_mbytes_per_sec": 0, 00:25:31.044 "w_mbytes_per_sec": 0 00:25:31.044 }, 00:25:31.044 "claimed": false, 00:25:31.044 "zoned": false, 00:25:31.044 "supported_io_types": { 00:25:31.044 "read": true, 00:25:31.044 "write": true, 00:25:31.044 "unmap": false, 00:25:31.044 "write_zeroes": true, 00:25:31.044 "flush": false, 00:25:31.044 "reset": true, 00:25:31.044 "compare": false, 00:25:31.044 "compare_and_write": false, 00:25:31.044 "abort": false, 00:25:31.044 "nvme_admin": false, 00:25:31.044 "nvme_io": false 00:25:31.044 }, 00:25:31.044 "memory_domains": [ 00:25:31.044 { 00:25:31.044 "dma_device_id": "system", 00:25:31.044 "dma_device_type": 1 00:25:31.044 }, 00:25:31.044 { 00:25:31.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.044 "dma_device_type": 2 00:25:31.044 }, 00:25:31.044 { 00:25:31.044 "dma_device_id": "system", 00:25:31.044 "dma_device_type": 1 00:25:31.044 }, 00:25:31.044 { 00:25:31.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.044 "dma_device_type": 2 00:25:31.044 }, 00:25:31.044 { 00:25:31.044 "dma_device_id": "system", 00:25:31.044 "dma_device_type": 1 00:25:31.044 }, 00:25:31.044 { 00:25:31.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.044 "dma_device_type": 2 00:25:31.044 }, 00:25:31.044 { 00:25:31.044 "dma_device_id": "system", 00:25:31.044 "dma_device_type": 1 00:25:31.044 }, 00:25:31.044 { 00:25:31.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.044 "dma_device_type": 2 00:25:31.044 } 00:25:31.044 ], 00:25:31.044 "driver_specific": { 00:25:31.044 "raid": { 00:25:31.044 "uuid": "778ce699-9e72-4ac6-9b89-849b64a16e05", 00:25:31.044 "strip_size_kb": 0, 00:25:31.044 "state": "online", 00:25:31.044 "raid_level": "raid1", 00:25:31.044 "superblock": true, 00:25:31.044 "num_base_bdevs": 4, 00:25:31.044 "num_base_bdevs_discovered": 4, 00:25:31.044 "num_base_bdevs_operational": 4, 00:25:31.044 "base_bdevs_list": [ 00:25:31.044 { 00:25:31.044 "name": "NewBaseBdev", 00:25:31.044 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:31.044 "is_configured": true, 00:25:31.044 "data_offset": 2048, 00:25:31.044 "data_size": 63488 00:25:31.044 
}, 00:25:31.044 { 00:25:31.044 "name": "BaseBdev2", 00:25:31.044 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:31.044 "is_configured": true, 00:25:31.044 "data_offset": 2048, 00:25:31.044 "data_size": 63488 00:25:31.044 }, 00:25:31.044 { 00:25:31.044 "name": "BaseBdev3", 00:25:31.044 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:31.044 "is_configured": true, 00:25:31.044 "data_offset": 2048, 00:25:31.044 "data_size": 63488 00:25:31.044 }, 00:25:31.044 { 00:25:31.044 "name": "BaseBdev4", 00:25:31.044 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:31.044 "is_configured": true, 00:25:31.044 "data_offset": 2048, 00:25:31.044 "data_size": 63488 00:25:31.044 } 00:25:31.044 ] 00:25:31.044 } 00:25:31.044 } 00:25:31.044 }' 00:25:31.044 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:31.301 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:25:31.301 BaseBdev2 00:25:31.301 BaseBdev3 00:25:31.301 BaseBdev4' 00:25:31.301 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.301 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:25:31.301 12:07:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.559 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:31.559 "name": "NewBaseBdev", 00:25:31.559 "aliases": [ 00:25:31.559 "7594dc67-2ea4-447d-8cbc-9d367b8d6bac" 00:25:31.559 ], 00:25:31.559 "product_name": "Malloc disk", 00:25:31.559 "block_size": 512, 00:25:31.559 "num_blocks": 65536, 00:25:31.559 "uuid": "7594dc67-2ea4-447d-8cbc-9d367b8d6bac", 00:25:31.559 "assigned_rate_limits": { 00:25:31.559 "rw_ios_per_sec": 0, 00:25:31.559 "rw_mbytes_per_sec": 0, 00:25:31.559 "r_mbytes_per_sec": 0, 00:25:31.559 "w_mbytes_per_sec": 0 00:25:31.559 }, 00:25:31.559 "claimed": true, 00:25:31.559 "claim_type": "exclusive_write", 00:25:31.559 "zoned": false, 00:25:31.559 "supported_io_types": { 00:25:31.559 "read": true, 00:25:31.559 "write": true, 00:25:31.559 "unmap": true, 00:25:31.559 "write_zeroes": true, 00:25:31.559 "flush": true, 00:25:31.559 "reset": true, 00:25:31.559 "compare": false, 00:25:31.559 "compare_and_write": false, 00:25:31.559 "abort": true, 00:25:31.559 "nvme_admin": false, 00:25:31.559 "nvme_io": false 00:25:31.559 }, 00:25:31.559 "memory_domains": [ 00:25:31.559 { 00:25:31.559 "dma_device_id": "system", 00:25:31.559 "dma_device_type": 1 00:25:31.559 }, 00:25:31.559 { 00:25:31.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:31.559 "dma_device_type": 2 00:25:31.559 } 00:25:31.559 ], 00:25:31.559 "driver_specific": {} 00:25:31.559 }' 00:25:31.559 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.559 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:31.559 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:31.559 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.559 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:31.559 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 
-- # [[ null == null ]] 00:25:31.815 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.815 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:31.815 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:31.815 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.815 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:31.815 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:31.815 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:31.815 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:31.815 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:25:32.073 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:32.073 "name": "BaseBdev2", 00:25:32.073 "aliases": [ 00:25:32.073 "79c01531-5aff-472d-b1bb-b08ff6e0383d" 00:25:32.073 ], 00:25:32.073 "product_name": "Malloc disk", 00:25:32.073 "block_size": 512, 00:25:32.073 "num_blocks": 65536, 00:25:32.073 "uuid": "79c01531-5aff-472d-b1bb-b08ff6e0383d", 00:25:32.073 "assigned_rate_limits": { 00:25:32.073 "rw_ios_per_sec": 0, 00:25:32.073 "rw_mbytes_per_sec": 0, 00:25:32.073 "r_mbytes_per_sec": 0, 00:25:32.073 "w_mbytes_per_sec": 0 00:25:32.073 }, 00:25:32.073 "claimed": true, 00:25:32.073 "claim_type": "exclusive_write", 00:25:32.073 "zoned": false, 00:25:32.073 "supported_io_types": { 00:25:32.073 "read": true, 00:25:32.073 "write": true, 00:25:32.073 "unmap": true, 00:25:32.073 "write_zeroes": true, 00:25:32.073 "flush": true, 00:25:32.073 "reset": true, 00:25:32.073 "compare": false, 00:25:32.073 "compare_and_write": false, 00:25:32.073 "abort": true, 00:25:32.073 "nvme_admin": false, 00:25:32.073 "nvme_io": false 00:25:32.073 }, 00:25:32.073 "memory_domains": [ 00:25:32.073 { 00:25:32.073 "dma_device_id": "system", 00:25:32.073 "dma_device_type": 1 00:25:32.073 }, 00:25:32.073 { 00:25:32.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.073 "dma_device_type": 2 00:25:32.073 } 00:25:32.073 ], 00:25:32.073 "driver_specific": {} 00:25:32.073 }' 00:25:32.073 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.073 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.330 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:32.331 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.331 12:07:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.331 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:32.331 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.331 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:32.331 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:32.331 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:25:32.331 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:32.587 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:32.587 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:32.587 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:25:32.587 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:32.844 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:32.844 "name": "BaseBdev3", 00:25:32.844 "aliases": [ 00:25:32.844 "58e5baa4-4292-47ca-88e2-b322a99786b2" 00:25:32.844 ], 00:25:32.844 "product_name": "Malloc disk", 00:25:32.844 "block_size": 512, 00:25:32.844 "num_blocks": 65536, 00:25:32.844 "uuid": "58e5baa4-4292-47ca-88e2-b322a99786b2", 00:25:32.844 "assigned_rate_limits": { 00:25:32.844 "rw_ios_per_sec": 0, 00:25:32.844 "rw_mbytes_per_sec": 0, 00:25:32.844 "r_mbytes_per_sec": 0, 00:25:32.844 "w_mbytes_per_sec": 0 00:25:32.844 }, 00:25:32.844 "claimed": true, 00:25:32.844 "claim_type": "exclusive_write", 00:25:32.844 "zoned": false, 00:25:32.844 "supported_io_types": { 00:25:32.844 "read": true, 00:25:32.844 "write": true, 00:25:32.844 "unmap": true, 00:25:32.844 "write_zeroes": true, 00:25:32.844 "flush": true, 00:25:32.844 "reset": true, 00:25:32.844 "compare": false, 00:25:32.844 "compare_and_write": false, 00:25:32.844 "abort": true, 00:25:32.844 "nvme_admin": false, 00:25:32.844 "nvme_io": false 00:25:32.844 }, 00:25:32.844 "memory_domains": [ 00:25:32.844 { 00:25:32.844 "dma_device_id": "system", 00:25:32.844 "dma_device_type": 1 00:25:32.844 }, 00:25:32.844 { 00:25:32.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:32.844 "dma_device_type": 2 00:25:32.844 } 00:25:32.844 ], 00:25:32.844 "driver_specific": {} 00:25:32.844 }' 00:25:32.844 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.844 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:32.844 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:32.844 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.844 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:32.844 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:32.844 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.102 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.102 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:33.102 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.102 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.102 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:33.102 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:33.102 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:25:33.102 12:07:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:33.360 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:33.360 "name": "BaseBdev4", 00:25:33.360 "aliases": [ 00:25:33.360 "dcf0fa4c-712a-45bb-9700-5edc5bbf969b" 00:25:33.360 ], 00:25:33.360 "product_name": "Malloc disk", 00:25:33.360 "block_size": 512, 00:25:33.360 "num_blocks": 65536, 00:25:33.360 "uuid": "dcf0fa4c-712a-45bb-9700-5edc5bbf969b", 00:25:33.360 "assigned_rate_limits": { 00:25:33.360 "rw_ios_per_sec": 0, 00:25:33.360 "rw_mbytes_per_sec": 0, 00:25:33.360 "r_mbytes_per_sec": 0, 00:25:33.360 "w_mbytes_per_sec": 0 00:25:33.360 }, 00:25:33.360 "claimed": true, 00:25:33.360 "claim_type": "exclusive_write", 00:25:33.360 "zoned": false, 00:25:33.360 "supported_io_types": { 00:25:33.360 "read": true, 00:25:33.360 "write": true, 00:25:33.360 "unmap": true, 00:25:33.360 "write_zeroes": true, 00:25:33.360 "flush": true, 00:25:33.360 "reset": true, 00:25:33.360 "compare": false, 00:25:33.360 "compare_and_write": false, 00:25:33.360 "abort": true, 00:25:33.360 "nvme_admin": false, 00:25:33.360 "nvme_io": false 00:25:33.360 }, 00:25:33.360 "memory_domains": [ 00:25:33.360 { 00:25:33.360 "dma_device_id": "system", 00:25:33.360 "dma_device_type": 1 00:25:33.360 }, 00:25:33.360 { 00:25:33.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:33.360 "dma_device_type": 2 00:25:33.360 } 00:25:33.360 ], 00:25:33.360 "driver_specific": {} 00:25:33.360 }' 00:25:33.360 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.360 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:33.617 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:33.617 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.617 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:33.617 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:33.617 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.617 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:33.617 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:33.617 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.875 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:33.875 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:33.875 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:25:34.132 [2024-07-21 12:07:32.769376] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:25:34.132 [2024-07-21 12:07:32.769673] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:34.132 [2024-07-21 12:07:32.769860] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:34.132 [2024-07-21 12:07:32.770296] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid 
bdev base bdevs is 0, going to free all in destruct 00:25:34.132 [2024-07-21 12:07:32.770422] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name Existed_Raid, state offline 00:25:34.132 12:07:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 152068 00:25:34.132 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 152068 ']' 00:25:34.132 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 152068 00:25:34.132 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:25:34.132 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:34.132 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 152068 00:25:34.132 killing process with pid 152068 00:25:34.132 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:34.133 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:34.133 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 152068' 00:25:34.133 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 152068 00:25:34.133 12:07:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 152068 00:25:34.133 [2024-07-21 12:07:32.815007] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:34.133 [2024-07-21 12:07:32.857319] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:34.390 ************************************ 00:25:34.390 END TEST raid_state_function_test_sb 00:25:34.390 ************************************ 00:25:34.390 12:07:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:25:34.390 00:25:34.390 real 0m34.664s 00:25:34.390 user 1m5.925s 00:25:34.390 sys 0m4.177s 00:25:34.390 12:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:34.390 12:07:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:25:34.390 12:07:33 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:25:34.390 12:07:33 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:25:34.390 12:07:33 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:34.390 12:07:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:34.390 ************************************ 00:25:34.390 START TEST raid_superblock_test 00:25:34.390 ************************************ 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 4 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- 
# local base_bdevs_pt 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=153169 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 153169 /var/tmp/spdk-raid.sock 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 153169 ']' 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:34.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:34.390 12:07:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:34.390 [2024-07-21 12:07:33.219449] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:25:34.390 [2024-07-21 12:07:33.219986] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid153169 ] 00:25:34.647 [2024-07-21 12:07:33.384519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.647 [2024-07-21 12:07:33.483596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.905 [2024-07-21 12:07:33.539910] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:35.470 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:25:35.728 malloc1 00:25:35.728 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:35.986 [2024-07-21 12:07:34.712707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:35.986 [2024-07-21 12:07:34.713168] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:35.986 [2024-07-21 12:07:34.713354] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:25:35.986 [2024-07-21 12:07:34.713547] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:35.986 [2024-07-21 12:07:34.716573] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:35.986 [2024-07-21 12:07:34.716767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:35.986 pt1 00:25:35.986 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:35.986 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:35.986 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:25:35.986 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:25:35.986 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:25:35.986 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:25:35.986 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:35.986 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:35.986 12:07:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:25:36.242 malloc2 00:25:36.242 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:36.501 [2024-07-21 12:07:35.220898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:36.501 [2024-07-21 12:07:35.221224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:36.501 [2024-07-21 12:07:35.221416] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:25:36.501 [2024-07-21 12:07:35.221568] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:36.501 [2024-07-21 12:07:35.224205] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:36.501 [2024-07-21 12:07:35.224381] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:36.501 pt2 00:25:36.501 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:36.501 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:36.501 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:25:36.501 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:25:36.501 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:25:36.501 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:36.501 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:36.501 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:36.501 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:25:36.765 malloc3 00:25:36.765 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:37.027 [2024-07-21 12:07:35.753162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:37.027 [2024-07-21 12:07:35.753343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.027 [2024-07-21 12:07:35.753468] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:37.027 [2024-07-21 12:07:35.753564] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.027 [2024-07-21 12:07:35.756338] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.027 [2024-07-21 12:07:35.756557] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:37.027 pt3 00:25:37.027 12:07:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:37.027 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:37.027 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:25:37.027 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:25:37.027 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:25:37.027 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:25:37.027 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:25:37.027 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:25:37.027 12:07:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:25:37.284 malloc4 00:25:37.284 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:37.549 [2024-07-21 12:07:36.280747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:37.549 [2024-07-21 12:07:36.281050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:37.549 [2024-07-21 12:07:36.281214] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:25:37.549 [2024-07-21 12:07:36.281401] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:37.549 [2024-07-21 12:07:36.284190] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:37.549 [2024-07-21 12:07:36.284385] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:37.549 pt4 00:25:37.549 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:25:37.549 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:25:37.549 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:25:37.806 [2024-07-21 12:07:36.552979] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:37.806 [2024-07-21 12:07:36.555652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:37.806 [2024-07-21 12:07:36.555858] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:37.806 [2024-07-21 12:07:36.556073] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:37.806 [2024-07-21 12:07:36.556467] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:25:37.806 [2024-07-21 12:07:36.556608] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:37.806 [2024-07-21 12:07:36.556864] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:25:37.806 [2024-07-21 12:07:36.557434] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 00:25:37.806 [2024-07-21 12:07:36.557583] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:25:37.806 [2024-07-21 12:07:36.557928] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:37.806 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.063 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:38.064 "name": "raid_bdev1", 00:25:38.064 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:38.064 "strip_size_kb": 0, 00:25:38.064 "state": "online", 00:25:38.064 "raid_level": "raid1", 00:25:38.064 "superblock": true, 00:25:38.064 "num_base_bdevs": 4, 00:25:38.064 "num_base_bdevs_discovered": 4, 00:25:38.064 "num_base_bdevs_operational": 4, 00:25:38.064 "base_bdevs_list": [ 00:25:38.064 { 00:25:38.064 "name": "pt1", 00:25:38.064 "uuid": "7f836033-3e7b-5bbd-b53b-e3101fe28122", 00:25:38.064 "is_configured": true, 00:25:38.064 "data_offset": 2048, 00:25:38.064 "data_size": 63488 00:25:38.064 }, 00:25:38.064 { 00:25:38.064 "name": "pt2", 00:25:38.064 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:38.064 "is_configured": true, 00:25:38.064 "data_offset": 2048, 00:25:38.064 "data_size": 63488 00:25:38.064 }, 00:25:38.064 { 00:25:38.064 "name": "pt3", 00:25:38.064 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:38.064 "is_configured": true, 00:25:38.064 "data_offset": 2048, 00:25:38.064 "data_size": 63488 00:25:38.064 }, 00:25:38.064 { 00:25:38.064 "name": "pt4", 00:25:38.064 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:38.064 "is_configured": true, 00:25:38.064 "data_offset": 2048, 00:25:38.064 "data_size": 63488 00:25:38.064 } 00:25:38.064 ] 00:25:38.064 }' 00:25:38.064 12:07:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:38.064 12:07:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:38.629 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:25:38.629 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:38.629 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:38.888 12:07:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:38.888 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:38.888 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:38.888 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:38.888 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:38.888 [2024-07-21 12:07:37.714483] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:38.888 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:38.888 "name": "raid_bdev1", 00:25:38.888 "aliases": [ 00:25:38.888 "98291dd4-90e2-4280-9612-104723ec7803" 00:25:38.888 ], 00:25:38.888 "product_name": "Raid Volume", 00:25:38.888 "block_size": 512, 00:25:38.888 "num_blocks": 63488, 00:25:38.888 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:38.888 "assigned_rate_limits": { 00:25:38.888 "rw_ios_per_sec": 0, 00:25:38.888 "rw_mbytes_per_sec": 0, 00:25:38.888 "r_mbytes_per_sec": 0, 00:25:38.888 "w_mbytes_per_sec": 0 00:25:38.888 }, 00:25:38.888 "claimed": false, 00:25:38.888 "zoned": false, 00:25:38.888 "supported_io_types": { 00:25:38.888 "read": true, 00:25:38.888 "write": true, 00:25:38.888 "unmap": false, 00:25:38.888 "write_zeroes": true, 00:25:38.888 "flush": false, 00:25:38.888 "reset": true, 00:25:38.888 "compare": false, 00:25:38.888 "compare_and_write": false, 00:25:38.888 "abort": false, 00:25:38.888 "nvme_admin": false, 00:25:38.888 "nvme_io": false 00:25:38.888 }, 00:25:38.888 "memory_domains": [ 00:25:38.888 { 00:25:38.888 "dma_device_id": "system", 00:25:38.888 "dma_device_type": 1 00:25:38.888 }, 00:25:38.888 { 00:25:38.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.888 "dma_device_type": 2 00:25:38.888 }, 00:25:38.888 { 00:25:38.888 "dma_device_id": "system", 00:25:38.888 "dma_device_type": 1 00:25:38.888 }, 00:25:38.888 { 00:25:38.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.888 "dma_device_type": 2 00:25:38.888 }, 00:25:38.888 { 00:25:38.888 "dma_device_id": "system", 00:25:38.888 "dma_device_type": 1 00:25:38.888 }, 00:25:38.888 { 00:25:38.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.888 "dma_device_type": 2 00:25:38.888 }, 00:25:38.888 { 00:25:38.888 "dma_device_id": "system", 00:25:38.888 "dma_device_type": 1 00:25:38.888 }, 00:25:38.888 { 00:25:38.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:38.888 "dma_device_type": 2 00:25:38.888 } 00:25:38.888 ], 00:25:38.888 "driver_specific": { 00:25:38.888 "raid": { 00:25:38.888 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:38.888 "strip_size_kb": 0, 00:25:38.888 "state": "online", 00:25:38.888 "raid_level": "raid1", 00:25:38.888 "superblock": true, 00:25:38.888 "num_base_bdevs": 4, 00:25:38.888 "num_base_bdevs_discovered": 4, 00:25:38.888 "num_base_bdevs_operational": 4, 00:25:38.888 "base_bdevs_list": [ 00:25:38.888 { 00:25:38.888 "name": "pt1", 00:25:38.888 "uuid": "7f836033-3e7b-5bbd-b53b-e3101fe28122", 00:25:38.888 "is_configured": true, 00:25:38.888 "data_offset": 2048, 00:25:38.889 "data_size": 63488 00:25:38.889 }, 00:25:38.889 { 00:25:38.889 "name": "pt2", 00:25:38.889 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:38.889 "is_configured": true, 00:25:38.889 "data_offset": 2048, 00:25:38.889 "data_size": 63488 00:25:38.889 }, 00:25:38.889 { 
00:25:38.889 "name": "pt3", 00:25:38.889 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:38.889 "is_configured": true, 00:25:38.889 "data_offset": 2048, 00:25:38.889 "data_size": 63488 00:25:38.889 }, 00:25:38.889 { 00:25:38.889 "name": "pt4", 00:25:38.889 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:38.889 "is_configured": true, 00:25:38.889 "data_offset": 2048, 00:25:38.889 "data_size": 63488 00:25:38.889 } 00:25:38.889 ] 00:25:38.889 } 00:25:38.889 } 00:25:38.889 }' 00:25:38.889 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:39.147 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:39.147 pt2 00:25:39.147 pt3 00:25:39.147 pt4' 00:25:39.147 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:39.147 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:39.147 12:07:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:39.405 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:39.405 "name": "pt1", 00:25:39.405 "aliases": [ 00:25:39.405 "7f836033-3e7b-5bbd-b53b-e3101fe28122" 00:25:39.405 ], 00:25:39.405 "product_name": "passthru", 00:25:39.405 "block_size": 512, 00:25:39.405 "num_blocks": 65536, 00:25:39.405 "uuid": "7f836033-3e7b-5bbd-b53b-e3101fe28122", 00:25:39.405 "assigned_rate_limits": { 00:25:39.405 "rw_ios_per_sec": 0, 00:25:39.405 "rw_mbytes_per_sec": 0, 00:25:39.405 "r_mbytes_per_sec": 0, 00:25:39.405 "w_mbytes_per_sec": 0 00:25:39.405 }, 00:25:39.405 "claimed": true, 00:25:39.405 "claim_type": "exclusive_write", 00:25:39.405 "zoned": false, 00:25:39.405 "supported_io_types": { 00:25:39.405 "read": true, 00:25:39.405 "write": true, 00:25:39.405 "unmap": true, 00:25:39.405 "write_zeroes": true, 00:25:39.405 "flush": true, 00:25:39.405 "reset": true, 00:25:39.405 "compare": false, 00:25:39.405 "compare_and_write": false, 00:25:39.405 "abort": true, 00:25:39.405 "nvme_admin": false, 00:25:39.405 "nvme_io": false 00:25:39.405 }, 00:25:39.405 "memory_domains": [ 00:25:39.405 { 00:25:39.405 "dma_device_id": "system", 00:25:39.405 "dma_device_type": 1 00:25:39.405 }, 00:25:39.405 { 00:25:39.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.405 "dma_device_type": 2 00:25:39.405 } 00:25:39.405 ], 00:25:39.405 "driver_specific": { 00:25:39.405 "passthru": { 00:25:39.405 "name": "pt1", 00:25:39.405 "base_bdev_name": "malloc1" 00:25:39.405 } 00:25:39.405 } 00:25:39.405 }' 00:25:39.405 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.405 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.405 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:39.405 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:39.406 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:39.406 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:39.406 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:39.664 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:39.664 12:07:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:39.664 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:39.664 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:39.664 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:39.664 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:39.664 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:39.664 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:39.973 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:39.973 "name": "pt2", 00:25:39.973 "aliases": [ 00:25:39.973 "54b3008f-f170-5eb1-b20f-c1ddb931f0ea" 00:25:39.973 ], 00:25:39.973 "product_name": "passthru", 00:25:39.973 "block_size": 512, 00:25:39.973 "num_blocks": 65536, 00:25:39.973 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:39.973 "assigned_rate_limits": { 00:25:39.973 "rw_ios_per_sec": 0, 00:25:39.973 "rw_mbytes_per_sec": 0, 00:25:39.973 "r_mbytes_per_sec": 0, 00:25:39.973 "w_mbytes_per_sec": 0 00:25:39.973 }, 00:25:39.973 "claimed": true, 00:25:39.973 "claim_type": "exclusive_write", 00:25:39.973 "zoned": false, 00:25:39.973 "supported_io_types": { 00:25:39.973 "read": true, 00:25:39.973 "write": true, 00:25:39.973 "unmap": true, 00:25:39.973 "write_zeroes": true, 00:25:39.973 "flush": true, 00:25:39.973 "reset": true, 00:25:39.973 "compare": false, 00:25:39.973 "compare_and_write": false, 00:25:39.973 "abort": true, 00:25:39.973 "nvme_admin": false, 00:25:39.973 "nvme_io": false 00:25:39.973 }, 00:25:39.973 "memory_domains": [ 00:25:39.973 { 00:25:39.973 "dma_device_id": "system", 00:25:39.973 "dma_device_type": 1 00:25:39.973 }, 00:25:39.973 { 00:25:39.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:39.973 "dma_device_type": 2 00:25:39.973 } 00:25:39.974 ], 00:25:39.974 "driver_specific": { 00:25:39.974 "passthru": { 00:25:39.974 "name": "pt2", 00:25:39.974 "base_bdev_name": "malloc2" 00:25:39.974 } 00:25:39.974 } 00:25:39.974 }' 00:25:39.974 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.974 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:39.974 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:39.974 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:39.974 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.232 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:40.232 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.232 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.232 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:40.232 12:07:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.232 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:40.232 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:40.232 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:25:40.232 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:40.232 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:40.489 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:40.489 "name": "pt3", 00:25:40.489 "aliases": [ 00:25:40.489 "8365022c-02b1-5192-a184-11d231a81a53" 00:25:40.489 ], 00:25:40.489 "product_name": "passthru", 00:25:40.489 "block_size": 512, 00:25:40.489 "num_blocks": 65536, 00:25:40.489 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:40.489 "assigned_rate_limits": { 00:25:40.489 "rw_ios_per_sec": 0, 00:25:40.489 "rw_mbytes_per_sec": 0, 00:25:40.489 "r_mbytes_per_sec": 0, 00:25:40.489 "w_mbytes_per_sec": 0 00:25:40.489 }, 00:25:40.489 "claimed": true, 00:25:40.489 "claim_type": "exclusive_write", 00:25:40.489 "zoned": false, 00:25:40.489 "supported_io_types": { 00:25:40.489 "read": true, 00:25:40.489 "write": true, 00:25:40.489 "unmap": true, 00:25:40.489 "write_zeroes": true, 00:25:40.489 "flush": true, 00:25:40.489 "reset": true, 00:25:40.489 "compare": false, 00:25:40.489 "compare_and_write": false, 00:25:40.489 "abort": true, 00:25:40.489 "nvme_admin": false, 00:25:40.489 "nvme_io": false 00:25:40.489 }, 00:25:40.489 "memory_domains": [ 00:25:40.489 { 00:25:40.489 "dma_device_id": "system", 00:25:40.489 "dma_device_type": 1 00:25:40.489 }, 00:25:40.489 { 00:25:40.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:40.489 "dma_device_type": 2 00:25:40.489 } 00:25:40.489 ], 00:25:40.489 "driver_specific": { 00:25:40.489 "passthru": { 00:25:40.489 "name": "pt3", 00:25:40.489 "base_bdev_name": "malloc3" 00:25:40.489 } 00:25:40.489 } 00:25:40.489 }' 00:25:40.489 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:40.746 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:40.746 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:40.746 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.746 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:40.746 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:40.747 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:40.747 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.004 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:41.004 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.004 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.004 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:41.004 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:41.004 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:41.004 12:07:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:41.262 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:41.262 "name": "pt4", 00:25:41.262 "aliases": [ 
00:25:41.262 "5f2597d0-633b-5fbd-abf7-9e81e4e65421" 00:25:41.262 ], 00:25:41.262 "product_name": "passthru", 00:25:41.262 "block_size": 512, 00:25:41.262 "num_blocks": 65536, 00:25:41.262 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:41.262 "assigned_rate_limits": { 00:25:41.262 "rw_ios_per_sec": 0, 00:25:41.262 "rw_mbytes_per_sec": 0, 00:25:41.262 "r_mbytes_per_sec": 0, 00:25:41.262 "w_mbytes_per_sec": 0 00:25:41.262 }, 00:25:41.262 "claimed": true, 00:25:41.262 "claim_type": "exclusive_write", 00:25:41.262 "zoned": false, 00:25:41.262 "supported_io_types": { 00:25:41.262 "read": true, 00:25:41.262 "write": true, 00:25:41.262 "unmap": true, 00:25:41.262 "write_zeroes": true, 00:25:41.262 "flush": true, 00:25:41.262 "reset": true, 00:25:41.262 "compare": false, 00:25:41.262 "compare_and_write": false, 00:25:41.262 "abort": true, 00:25:41.262 "nvme_admin": false, 00:25:41.262 "nvme_io": false 00:25:41.263 }, 00:25:41.263 "memory_domains": [ 00:25:41.263 { 00:25:41.263 "dma_device_id": "system", 00:25:41.263 "dma_device_type": 1 00:25:41.263 }, 00:25:41.263 { 00:25:41.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:41.263 "dma_device_type": 2 00:25:41.263 } 00:25:41.263 ], 00:25:41.263 "driver_specific": { 00:25:41.263 "passthru": { 00:25:41.263 "name": "pt4", 00:25:41.263 "base_bdev_name": "malloc4" 00:25:41.263 } 00:25:41.263 } 00:25:41.263 }' 00:25:41.263 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.263 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:41.263 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:41.263 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.520 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:41.520 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:41.520 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.520 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:41.520 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:41.520 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.520 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:41.778 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:41.778 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:41.778 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:25:42.036 [2024-07-21 12:07:40.651182] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:42.036 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=98291dd4-90e2-4280-9612-104723ec7803 00:25:42.036 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 98291dd4-90e2-4280-9612-104723ec7803 ']' 00:25:42.036 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:42.294 [2024-07-21 12:07:40.938996] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:42.294 
[2024-07-21 12:07:40.939356] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:42.294 [2024-07-21 12:07:40.939600] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:42.294 [2024-07-21 12:07:40.939843] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:42.294 [2024-07-21 12:07:40.939957] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:25:42.294 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.294 12:07:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:25:42.553 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:25:42.553 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:25:42.553 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:42.553 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:42.811 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:42.811 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:43.070 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:43.070 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:43.328 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:25:43.328 12:07:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:43.328 12:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:25:43.328 12:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:43.587 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:25:43.846 [2024-07-21 12:07:42.671348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:43.846 [2024-07-21 12:07:42.673828] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:43.846 [2024-07-21 12:07:42.674059] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:25:43.846 [2024-07-21 12:07:42.674148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:25:43.846 [2024-07-21 12:07:42.674332] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:25:43.846 [2024-07-21 12:07:42.674542] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:25:43.846 [2024-07-21 12:07:42.674731] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:25:43.846 [2024-07-21 12:07:42.674920] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:25:43.846 [2024-07-21 12:07:42.675076] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:43.846 [2024-07-21 12:07:42.675215] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:25:43.846 request: 00:25:43.846 { 00:25:43.846 "name": "raid_bdev1", 00:25:43.846 "raid_level": "raid1", 00:25:43.846 "base_bdevs": [ 00:25:43.846 "malloc1", 00:25:43.846 "malloc2", 00:25:43.846 "malloc3", 00:25:43.846 "malloc4" 00:25:43.846 ], 00:25:43.846 "superblock": false, 00:25:43.846 "method": "bdev_raid_create", 00:25:43.846 "req_id": 1 00:25:43.846 } 00:25:43.846 Got JSON-RPC error response 00:25:43.846 response: 00:25:43.846 { 00:25:43.846 "code": -17, 00:25:43.846 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:25:43.846 } 00:25:43.846 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:25:43.846 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:43.846 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:43.846 12:07:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:43.846 12:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.846 
12:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:25:44.412 12:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:25:44.412 12:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:25:44.412 12:07:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:44.412 [2024-07-21 12:07:43.191674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:44.412 [2024-07-21 12:07:43.192071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:44.412 [2024-07-21 12:07:43.192237] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:25:44.412 [2024-07-21 12:07:43.192375] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:44.412 [2024-07-21 12:07:43.195072] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:44.412 [2024-07-21 12:07:43.195284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:44.412 [2024-07-21 12:07:43.195503] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:44.412 [2024-07-21 12:07:43.195686] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:44.412 pt1 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.412 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:44.669 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:44.670 "name": "raid_bdev1", 00:25:44.670 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:44.670 "strip_size_kb": 0, 00:25:44.670 "state": "configuring", 00:25:44.670 "raid_level": "raid1", 00:25:44.670 "superblock": true, 00:25:44.670 "num_base_bdevs": 4, 00:25:44.670 "num_base_bdevs_discovered": 1, 00:25:44.670 "num_base_bdevs_operational": 4, 00:25:44.670 "base_bdevs_list": [ 00:25:44.670 { 00:25:44.670 "name": "pt1", 00:25:44.670 "uuid": "7f836033-3e7b-5bbd-b53b-e3101fe28122", 00:25:44.670 "is_configured": true, 
00:25:44.670 "data_offset": 2048, 00:25:44.670 "data_size": 63488 00:25:44.670 }, 00:25:44.670 { 00:25:44.670 "name": null, 00:25:44.670 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:44.670 "is_configured": false, 00:25:44.670 "data_offset": 2048, 00:25:44.670 "data_size": 63488 00:25:44.670 }, 00:25:44.670 { 00:25:44.670 "name": null, 00:25:44.670 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:44.670 "is_configured": false, 00:25:44.670 "data_offset": 2048, 00:25:44.670 "data_size": 63488 00:25:44.670 }, 00:25:44.670 { 00:25:44.670 "name": null, 00:25:44.670 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:44.670 "is_configured": false, 00:25:44.670 "data_offset": 2048, 00:25:44.670 "data_size": 63488 00:25:44.670 } 00:25:44.670 ] 00:25:44.670 }' 00:25:44.670 12:07:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:44.670 12:07:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:45.602 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:25:45.602 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:45.602 [2024-07-21 12:07:44.372320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:45.602 [2024-07-21 12:07:44.372717] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:45.602 [2024-07-21 12:07:44.372810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:25:45.602 [2024-07-21 12:07:44.373070] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:45.602 [2024-07-21 12:07:44.373707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:45.602 [2024-07-21 12:07:44.373904] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:45.602 [2024-07-21 12:07:44.374122] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:45.602 [2024-07-21 12:07:44.374255] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:45.602 pt2 00:25:45.602 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:45.858 [2024-07-21 12:07:44.600443] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs_discovered 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:45.858 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.115 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:46.115 "name": "raid_bdev1", 00:25:46.115 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:46.115 "strip_size_kb": 0, 00:25:46.115 "state": "configuring", 00:25:46.115 "raid_level": "raid1", 00:25:46.115 "superblock": true, 00:25:46.115 "num_base_bdevs": 4, 00:25:46.115 "num_base_bdevs_discovered": 1, 00:25:46.115 "num_base_bdevs_operational": 4, 00:25:46.115 "base_bdevs_list": [ 00:25:46.115 { 00:25:46.115 "name": "pt1", 00:25:46.115 "uuid": "7f836033-3e7b-5bbd-b53b-e3101fe28122", 00:25:46.116 "is_configured": true, 00:25:46.116 "data_offset": 2048, 00:25:46.116 "data_size": 63488 00:25:46.116 }, 00:25:46.116 { 00:25:46.116 "name": null, 00:25:46.116 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:46.116 "is_configured": false, 00:25:46.116 "data_offset": 2048, 00:25:46.116 "data_size": 63488 00:25:46.116 }, 00:25:46.116 { 00:25:46.116 "name": null, 00:25:46.116 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:46.116 "is_configured": false, 00:25:46.116 "data_offset": 2048, 00:25:46.116 "data_size": 63488 00:25:46.116 }, 00:25:46.116 { 00:25:46.116 "name": null, 00:25:46.116 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:46.116 "is_configured": false, 00:25:46.116 "data_offset": 2048, 00:25:46.116 "data_size": 63488 00:25:46.116 } 00:25:46.116 ] 00:25:46.116 }' 00:25:46.116 12:07:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:46.116 12:07:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:46.680 12:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:25:46.680 12:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:46.681 12:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:46.938 [2024-07-21 12:07:45.760704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:46.938 [2024-07-21 12:07:45.761053] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:46.938 [2024-07-21 12:07:45.761145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:25:46.938 [2024-07-21 12:07:45.761397] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:46.938 [2024-07-21 12:07:45.762010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:46.938 [2024-07-21 12:07:45.762197] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:46.938 [2024-07-21 12:07:45.762410] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:46.938 [2024-07-21 12:07:45.762566] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:46.938 pt2 00:25:46.938 12:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ 
)) 00:25:46.938 12:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:46.938 12:07:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:47.196 [2024-07-21 12:07:45.988735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:47.196 [2024-07-21 12:07:45.989103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.196 [2024-07-21 12:07:45.989273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:25:47.196 [2024-07-21 12:07:45.989410] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.196 [2024-07-21 12:07:45.990043] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.196 [2024-07-21 12:07:45.990239] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:47.196 [2024-07-21 12:07:45.990450] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:47.196 [2024-07-21 12:07:45.990599] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:47.196 pt3 00:25:47.196 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:47.196 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:47.196 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:47.454 [2024-07-21 12:07:46.208799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:47.454 [2024-07-21 12:07:46.209212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:47.454 [2024-07-21 12:07:46.209295] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:25:47.454 [2024-07-21 12:07:46.209527] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:47.454 [2024-07-21 12:07:46.210133] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:47.454 [2024-07-21 12:07:46.210345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:47.454 [2024-07-21 12:07:46.210567] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:47.454 [2024-07-21 12:07:46.210721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:47.454 [2024-07-21 12:07:46.211003] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:25:47.454 [2024-07-21 12:07:46.211140] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:47.454 [2024-07-21 12:07:46.211272] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:25:47.454 [2024-07-21 12:07:46.211756] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:25:47.454 [2024-07-21 12:07:46.211876] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:25:47.454 [2024-07-21 12:07:46.212082] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:47.454 pt4 00:25:47.454 12:07:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.454 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:47.712 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:47.712 "name": "raid_bdev1", 00:25:47.712 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:47.712 "strip_size_kb": 0, 00:25:47.712 "state": "online", 00:25:47.712 "raid_level": "raid1", 00:25:47.713 "superblock": true, 00:25:47.713 "num_base_bdevs": 4, 00:25:47.713 "num_base_bdevs_discovered": 4, 00:25:47.713 "num_base_bdevs_operational": 4, 00:25:47.713 "base_bdevs_list": [ 00:25:47.713 { 00:25:47.713 "name": "pt1", 00:25:47.713 "uuid": "7f836033-3e7b-5bbd-b53b-e3101fe28122", 00:25:47.713 "is_configured": true, 00:25:47.713 "data_offset": 2048, 00:25:47.713 "data_size": 63488 00:25:47.713 }, 00:25:47.713 { 00:25:47.713 "name": "pt2", 00:25:47.713 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:47.713 "is_configured": true, 00:25:47.713 "data_offset": 2048, 00:25:47.713 "data_size": 63488 00:25:47.713 }, 00:25:47.713 { 00:25:47.713 "name": "pt3", 00:25:47.713 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:47.713 "is_configured": true, 00:25:47.713 "data_offset": 2048, 00:25:47.713 "data_size": 63488 00:25:47.713 }, 00:25:47.713 { 00:25:47.713 "name": "pt4", 00:25:47.713 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:47.713 "is_configured": true, 00:25:47.713 "data_offset": 2048, 00:25:47.713 "data_size": 63488 00:25:47.713 } 00:25:47.713 ] 00:25:47.713 }' 00:25:47.713 12:07:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:47.713 12:07:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:48.277 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:25:48.277 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:25:48.277 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:25:48.277 12:07:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:25:48.277 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:25:48.277 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:25:48.277 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:48.277 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:25:48.536 [2024-07-21 12:07:47.349338] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:48.536 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:25:48.536 "name": "raid_bdev1", 00:25:48.536 "aliases": [ 00:25:48.536 "98291dd4-90e2-4280-9612-104723ec7803" 00:25:48.536 ], 00:25:48.536 "product_name": "Raid Volume", 00:25:48.536 "block_size": 512, 00:25:48.536 "num_blocks": 63488, 00:25:48.536 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:48.536 "assigned_rate_limits": { 00:25:48.536 "rw_ios_per_sec": 0, 00:25:48.536 "rw_mbytes_per_sec": 0, 00:25:48.536 "r_mbytes_per_sec": 0, 00:25:48.536 "w_mbytes_per_sec": 0 00:25:48.536 }, 00:25:48.536 "claimed": false, 00:25:48.536 "zoned": false, 00:25:48.536 "supported_io_types": { 00:25:48.536 "read": true, 00:25:48.536 "write": true, 00:25:48.536 "unmap": false, 00:25:48.536 "write_zeroes": true, 00:25:48.536 "flush": false, 00:25:48.536 "reset": true, 00:25:48.536 "compare": false, 00:25:48.536 "compare_and_write": false, 00:25:48.536 "abort": false, 00:25:48.536 "nvme_admin": false, 00:25:48.536 "nvme_io": false 00:25:48.536 }, 00:25:48.536 "memory_domains": [ 00:25:48.536 { 00:25:48.536 "dma_device_id": "system", 00:25:48.536 "dma_device_type": 1 00:25:48.536 }, 00:25:48.536 { 00:25:48.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.536 "dma_device_type": 2 00:25:48.536 }, 00:25:48.536 { 00:25:48.536 "dma_device_id": "system", 00:25:48.536 "dma_device_type": 1 00:25:48.536 }, 00:25:48.536 { 00:25:48.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.536 "dma_device_type": 2 00:25:48.536 }, 00:25:48.536 { 00:25:48.536 "dma_device_id": "system", 00:25:48.536 "dma_device_type": 1 00:25:48.536 }, 00:25:48.536 { 00:25:48.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.536 "dma_device_type": 2 00:25:48.536 }, 00:25:48.536 { 00:25:48.536 "dma_device_id": "system", 00:25:48.536 "dma_device_type": 1 00:25:48.536 }, 00:25:48.536 { 00:25:48.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.536 "dma_device_type": 2 00:25:48.536 } 00:25:48.536 ], 00:25:48.536 "driver_specific": { 00:25:48.536 "raid": { 00:25:48.536 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:48.536 "strip_size_kb": 0, 00:25:48.536 "state": "online", 00:25:48.536 "raid_level": "raid1", 00:25:48.536 "superblock": true, 00:25:48.536 "num_base_bdevs": 4, 00:25:48.536 "num_base_bdevs_discovered": 4, 00:25:48.536 "num_base_bdevs_operational": 4, 00:25:48.536 "base_bdevs_list": [ 00:25:48.536 { 00:25:48.536 "name": "pt1", 00:25:48.536 "uuid": "7f836033-3e7b-5bbd-b53b-e3101fe28122", 00:25:48.536 "is_configured": true, 00:25:48.536 "data_offset": 2048, 00:25:48.536 "data_size": 63488 00:25:48.536 }, 00:25:48.536 { 00:25:48.536 "name": "pt2", 00:25:48.536 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:48.536 "is_configured": true, 00:25:48.536 "data_offset": 2048, 00:25:48.536 "data_size": 63488 00:25:48.536 }, 00:25:48.536 { 
00:25:48.536 "name": "pt3", 00:25:48.536 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:48.536 "is_configured": true, 00:25:48.536 "data_offset": 2048, 00:25:48.536 "data_size": 63488 00:25:48.536 }, 00:25:48.536 { 00:25:48.536 "name": "pt4", 00:25:48.536 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:48.536 "is_configured": true, 00:25:48.536 "data_offset": 2048, 00:25:48.536 "data_size": 63488 00:25:48.536 } 00:25:48.536 ] 00:25:48.536 } 00:25:48.536 } 00:25:48.536 }' 00:25:48.536 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:25:48.794 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:25:48.794 pt2 00:25:48.794 pt3 00:25:48.794 pt4' 00:25:48.794 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:48.794 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:25:48.794 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:48.794 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:48.794 "name": "pt1", 00:25:48.794 "aliases": [ 00:25:48.794 "7f836033-3e7b-5bbd-b53b-e3101fe28122" 00:25:48.794 ], 00:25:48.794 "product_name": "passthru", 00:25:48.794 "block_size": 512, 00:25:48.794 "num_blocks": 65536, 00:25:48.794 "uuid": "7f836033-3e7b-5bbd-b53b-e3101fe28122", 00:25:48.794 "assigned_rate_limits": { 00:25:48.794 "rw_ios_per_sec": 0, 00:25:48.794 "rw_mbytes_per_sec": 0, 00:25:48.794 "r_mbytes_per_sec": 0, 00:25:48.794 "w_mbytes_per_sec": 0 00:25:48.794 }, 00:25:48.794 "claimed": true, 00:25:48.794 "claim_type": "exclusive_write", 00:25:48.794 "zoned": false, 00:25:48.794 "supported_io_types": { 00:25:48.794 "read": true, 00:25:48.794 "write": true, 00:25:48.794 "unmap": true, 00:25:48.794 "write_zeroes": true, 00:25:48.794 "flush": true, 00:25:48.794 "reset": true, 00:25:48.794 "compare": false, 00:25:48.794 "compare_and_write": false, 00:25:48.794 "abort": true, 00:25:48.794 "nvme_admin": false, 00:25:48.794 "nvme_io": false 00:25:48.794 }, 00:25:48.794 "memory_domains": [ 00:25:48.794 { 00:25:48.794 "dma_device_id": "system", 00:25:48.794 "dma_device_type": 1 00:25:48.794 }, 00:25:48.794 { 00:25:48.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:48.794 "dma_device_type": 2 00:25:48.794 } 00:25:48.794 ], 00:25:48.794 "driver_specific": { 00:25:48.794 "passthru": { 00:25:48.794 "name": "pt1", 00:25:48.794 "base_bdev_name": "malloc1" 00:25:48.794 } 00:25:48.794 } 00:25:48.794 }' 00:25:48.794 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.052 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.052 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:49.052 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.052 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.052 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:49.052 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.052 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.310 12:07:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.310 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.310 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.310 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.310 12:07:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:49.310 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:49.310 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:25:49.568 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:49.568 "name": "pt2", 00:25:49.568 "aliases": [ 00:25:49.568 "54b3008f-f170-5eb1-b20f-c1ddb931f0ea" 00:25:49.568 ], 00:25:49.568 "product_name": "passthru", 00:25:49.568 "block_size": 512, 00:25:49.568 "num_blocks": 65536, 00:25:49.568 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:49.568 "assigned_rate_limits": { 00:25:49.568 "rw_ios_per_sec": 0, 00:25:49.568 "rw_mbytes_per_sec": 0, 00:25:49.568 "r_mbytes_per_sec": 0, 00:25:49.568 "w_mbytes_per_sec": 0 00:25:49.568 }, 00:25:49.568 "claimed": true, 00:25:49.568 "claim_type": "exclusive_write", 00:25:49.568 "zoned": false, 00:25:49.568 "supported_io_types": { 00:25:49.568 "read": true, 00:25:49.568 "write": true, 00:25:49.568 "unmap": true, 00:25:49.568 "write_zeroes": true, 00:25:49.568 "flush": true, 00:25:49.568 "reset": true, 00:25:49.568 "compare": false, 00:25:49.568 "compare_and_write": false, 00:25:49.568 "abort": true, 00:25:49.568 "nvme_admin": false, 00:25:49.568 "nvme_io": false 00:25:49.568 }, 00:25:49.568 "memory_domains": [ 00:25:49.568 { 00:25:49.568 "dma_device_id": "system", 00:25:49.568 "dma_device_type": 1 00:25:49.568 }, 00:25:49.568 { 00:25:49.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:49.568 "dma_device_type": 2 00:25:49.568 } 00:25:49.568 ], 00:25:49.568 "driver_specific": { 00:25:49.568 "passthru": { 00:25:49.568 "name": "pt2", 00:25:49.568 "base_bdev_name": "malloc2" 00:25:49.568 } 00:25:49.568 } 00:25:49.568 }' 00:25:49.568 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.568 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:49.568 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:49.568 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.568 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:25:49.826 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.084 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.084 "name": "pt3", 00:25:50.084 "aliases": [ 00:25:50.084 "8365022c-02b1-5192-a184-11d231a81a53" 00:25:50.084 ], 00:25:50.084 "product_name": "passthru", 00:25:50.084 "block_size": 512, 00:25:50.084 "num_blocks": 65536, 00:25:50.084 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:50.084 "assigned_rate_limits": { 00:25:50.084 "rw_ios_per_sec": 0, 00:25:50.084 "rw_mbytes_per_sec": 0, 00:25:50.084 "r_mbytes_per_sec": 0, 00:25:50.084 "w_mbytes_per_sec": 0 00:25:50.084 }, 00:25:50.084 "claimed": true, 00:25:50.084 "claim_type": "exclusive_write", 00:25:50.084 "zoned": false, 00:25:50.084 "supported_io_types": { 00:25:50.084 "read": true, 00:25:50.084 "write": true, 00:25:50.084 "unmap": true, 00:25:50.084 "write_zeroes": true, 00:25:50.084 "flush": true, 00:25:50.084 "reset": true, 00:25:50.084 "compare": false, 00:25:50.084 "compare_and_write": false, 00:25:50.084 "abort": true, 00:25:50.084 "nvme_admin": false, 00:25:50.084 "nvme_io": false 00:25:50.084 }, 00:25:50.084 "memory_domains": [ 00:25:50.084 { 00:25:50.084 "dma_device_id": "system", 00:25:50.084 "dma_device_type": 1 00:25:50.084 }, 00:25:50.084 { 00:25:50.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.084 "dma_device_type": 2 00:25:50.084 } 00:25:50.084 ], 00:25:50.084 "driver_specific": { 00:25:50.084 "passthru": { 00:25:50.084 "name": "pt3", 00:25:50.084 "base_bdev_name": "malloc3" 00:25:50.084 } 00:25:50.084 } 00:25:50.084 }' 00:25:50.084 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.342 12:07:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.342 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.342 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.342 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:50.342 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:50.342 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.342 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:50.342 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:50.599 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.599 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:50.599 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:50.599 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:25:50.599 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:25:50.599 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:25:50.885 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:25:50.885 "name": "pt4", 00:25:50.885 "aliases": [ 
00:25:50.885 "5f2597d0-633b-5fbd-abf7-9e81e4e65421" 00:25:50.885 ], 00:25:50.885 "product_name": "passthru", 00:25:50.885 "block_size": 512, 00:25:50.885 "num_blocks": 65536, 00:25:50.885 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:50.885 "assigned_rate_limits": { 00:25:50.885 "rw_ios_per_sec": 0, 00:25:50.885 "rw_mbytes_per_sec": 0, 00:25:50.885 "r_mbytes_per_sec": 0, 00:25:50.885 "w_mbytes_per_sec": 0 00:25:50.885 }, 00:25:50.885 "claimed": true, 00:25:50.885 "claim_type": "exclusive_write", 00:25:50.885 "zoned": false, 00:25:50.885 "supported_io_types": { 00:25:50.885 "read": true, 00:25:50.885 "write": true, 00:25:50.885 "unmap": true, 00:25:50.885 "write_zeroes": true, 00:25:50.885 "flush": true, 00:25:50.885 "reset": true, 00:25:50.885 "compare": false, 00:25:50.885 "compare_and_write": false, 00:25:50.885 "abort": true, 00:25:50.885 "nvme_admin": false, 00:25:50.885 "nvme_io": false 00:25:50.885 }, 00:25:50.885 "memory_domains": [ 00:25:50.885 { 00:25:50.885 "dma_device_id": "system", 00:25:50.885 "dma_device_type": 1 00:25:50.885 }, 00:25:50.885 { 00:25:50.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:25:50.885 "dma_device_type": 2 00:25:50.885 } 00:25:50.885 ], 00:25:50.885 "driver_specific": { 00:25:50.885 "passthru": { 00:25:50.885 "name": "pt4", 00:25:50.885 "base_bdev_name": "malloc4" 00:25:50.885 } 00:25:50.885 } 00:25:50.885 }' 00:25:50.885 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.885 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:25:50.885 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:25:50.885 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:51.152 12:07:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:25:51.409 [2024-07-21 12:07:50.235607] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:51.409 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 98291dd4-90e2-4280-9612-104723ec7803 '!=' 98291dd4-90e2-4280-9612-104723ec7803 ']' 00:25:51.409 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:25:51.409 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:25:51.409 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:25:51.409 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:25:51.665 [2024-07-21 12:07:50.507431] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:25:51.665 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:51.665 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:51.665 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:51.665 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:51.665 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:51.665 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:51.665 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:51.665 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:51.666 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:51.666 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:51.666 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.923 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.923 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:51.923 "name": "raid_bdev1", 00:25:51.923 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:51.923 "strip_size_kb": 0, 00:25:51.923 "state": "online", 00:25:51.923 "raid_level": "raid1", 00:25:51.923 "superblock": true, 00:25:51.923 "num_base_bdevs": 4, 00:25:51.923 "num_base_bdevs_discovered": 3, 00:25:51.923 "num_base_bdevs_operational": 3, 00:25:51.923 "base_bdevs_list": [ 00:25:51.923 { 00:25:51.923 "name": null, 00:25:51.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:51.923 "is_configured": false, 00:25:51.923 "data_offset": 2048, 00:25:51.923 "data_size": 63488 00:25:51.923 }, 00:25:51.923 { 00:25:51.923 "name": "pt2", 00:25:51.923 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:51.923 "is_configured": true, 00:25:51.923 "data_offset": 2048, 00:25:51.923 "data_size": 63488 00:25:51.923 }, 00:25:51.923 { 00:25:51.923 "name": "pt3", 00:25:51.923 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:51.923 "is_configured": true, 00:25:51.923 "data_offset": 2048, 00:25:51.923 "data_size": 63488 00:25:51.923 }, 00:25:51.923 { 00:25:51.923 "name": "pt4", 00:25:51.923 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:51.923 "is_configured": true, 00:25:51.923 "data_offset": 2048, 00:25:51.923 "data_size": 63488 00:25:51.923 } 00:25:51.923 ] 00:25:51.923 }' 00:25:51.923 12:07:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:51.923 12:07:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.854 12:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:52.854 [2024-07-21 12:07:51.587595] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:52.854 [2024-07-21 12:07:51.587813] 
bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:52.854 [2024-07-21 12:07:51.588018] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:52.854 [2024-07-21 12:07:51.588216] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:52.854 [2024-07-21 12:07:51.588336] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:25:52.854 12:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.854 12:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:25:53.111 12:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:25:53.111 12:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:25:53.111 12:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:25:53.111 12:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:53.111 12:07:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:25:53.368 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:53.368 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:53.368 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:25:53.625 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:53.625 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:53.625 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:53.883 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:25:53.883 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:25:53.883 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:25:53.883 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:53.883 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:25:54.140 [2024-07-21 12:07:52.767842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:25:54.140 [2024-07-21 12:07:52.768227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.140 [2024-07-21 12:07:52.768313] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:25:54.140 [2024-07-21 12:07:52.768626] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.140 [2024-07-21 12:07:52.771414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.140 [2024-07-21 12:07:52.771619] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:25:54.140 [2024-07-21 12:07:52.771847] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:25:54.140 [2024-07-21 12:07:52.772016] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:54.140 pt2 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:54.140 12:07:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:54.397 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:54.397 "name": "raid_bdev1", 00:25:54.397 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:54.397 "strip_size_kb": 0, 00:25:54.397 "state": "configuring", 00:25:54.397 "raid_level": "raid1", 00:25:54.397 "superblock": true, 00:25:54.397 "num_base_bdevs": 4, 00:25:54.397 "num_base_bdevs_discovered": 1, 00:25:54.397 "num_base_bdevs_operational": 3, 00:25:54.397 "base_bdevs_list": [ 00:25:54.397 { 00:25:54.397 "name": null, 00:25:54.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:54.397 "is_configured": false, 00:25:54.397 "data_offset": 2048, 00:25:54.397 "data_size": 63488 00:25:54.397 }, 00:25:54.397 { 00:25:54.397 "name": "pt2", 00:25:54.397 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:54.397 "is_configured": true, 00:25:54.397 "data_offset": 2048, 00:25:54.397 "data_size": 63488 00:25:54.397 }, 00:25:54.397 { 00:25:54.397 "name": null, 00:25:54.397 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:54.397 "is_configured": false, 00:25:54.397 "data_offset": 2048, 00:25:54.397 "data_size": 63488 00:25:54.397 }, 00:25:54.397 { 00:25:54.397 "name": null, 00:25:54.397 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:54.397 "is_configured": false, 00:25:54.397 "data_offset": 2048, 00:25:54.397 "data_size": 63488 00:25:54.397 } 00:25:54.397 ] 00:25:54.397 }' 00:25:54.397 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:54.397 12:07:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.961 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:25:54.961 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:54.961 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:25:55.217 [2024-07-21 12:07:53.839475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:25:55.217 [2024-07-21 12:07:53.840154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:55.217 [2024-07-21 12:07:53.840496] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:25:55.217 [2024-07-21 12:07:53.840779] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:55.218 [2024-07-21 12:07:53.841584] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:55.218 [2024-07-21 12:07:53.841898] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:25:55.218 [2024-07-21 12:07:53.842245] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:25:55.218 [2024-07-21 12:07:53.842428] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:55.218 pt3 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:55.218 12:07:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:55.475 12:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:55.475 "name": "raid_bdev1", 00:25:55.475 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:55.475 "strip_size_kb": 0, 00:25:55.475 "state": "configuring", 00:25:55.475 "raid_level": "raid1", 00:25:55.475 "superblock": true, 00:25:55.475 "num_base_bdevs": 4, 00:25:55.475 "num_base_bdevs_discovered": 2, 00:25:55.475 "num_base_bdevs_operational": 3, 00:25:55.475 "base_bdevs_list": [ 00:25:55.475 { 00:25:55.475 "name": null, 00:25:55.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:55.475 "is_configured": false, 00:25:55.475 "data_offset": 2048, 00:25:55.475 "data_size": 63488 00:25:55.475 }, 00:25:55.475 { 00:25:55.475 "name": "pt2", 00:25:55.475 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:55.475 "is_configured": true, 00:25:55.475 "data_offset": 2048, 00:25:55.475 "data_size": 63488 00:25:55.475 }, 00:25:55.475 { 00:25:55.475 "name": "pt3", 00:25:55.475 
"uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:55.475 "is_configured": true, 00:25:55.475 "data_offset": 2048, 00:25:55.475 "data_size": 63488 00:25:55.475 }, 00:25:55.475 { 00:25:55.475 "name": null, 00:25:55.475 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:55.475 "is_configured": false, 00:25:55.475 "data_offset": 2048, 00:25:55.475 "data_size": 63488 00:25:55.475 } 00:25:55.475 ] 00:25:55.475 }' 00:25:55.475 12:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:55.475 12:07:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:56.039 12:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:25:56.039 12:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:25:56.039 12:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:25:56.039 12:07:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:56.296 [2024-07-21 12:07:54.995749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:56.296 [2024-07-21 12:07:54.996640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:56.296 [2024-07-21 12:07:54.996969] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:56.296 [2024-07-21 12:07:54.997246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:56.296 [2024-07-21 12:07:54.998010] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:56.296 [2024-07-21 12:07:54.998296] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:56.296 [2024-07-21 12:07:54.998652] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:56.296 [2024-07-21 12:07:54.998815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:56.296 [2024-07-21 12:07:54.999080] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:25:56.296 [2024-07-21 12:07:54.999207] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:56.296 [2024-07-21 12:07:54.999335] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:25:56.296 [2024-07-21 12:07:54.999810] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:25:56.297 [2024-07-21 12:07:54.999951] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:25:56.297 [2024-07-21 12:07:55.000222] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:56.297 pt4 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # 
local num_base_bdevs_operational=3 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:56.297 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:56.553 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:56.553 "name": "raid_bdev1", 00:25:56.553 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:56.553 "strip_size_kb": 0, 00:25:56.553 "state": "online", 00:25:56.553 "raid_level": "raid1", 00:25:56.553 "superblock": true, 00:25:56.553 "num_base_bdevs": 4, 00:25:56.553 "num_base_bdevs_discovered": 3, 00:25:56.553 "num_base_bdevs_operational": 3, 00:25:56.553 "base_bdevs_list": [ 00:25:56.553 { 00:25:56.553 "name": null, 00:25:56.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:56.553 "is_configured": false, 00:25:56.553 "data_offset": 2048, 00:25:56.553 "data_size": 63488 00:25:56.553 }, 00:25:56.553 { 00:25:56.553 "name": "pt2", 00:25:56.553 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:56.553 "is_configured": true, 00:25:56.553 "data_offset": 2048, 00:25:56.553 "data_size": 63488 00:25:56.553 }, 00:25:56.553 { 00:25:56.553 "name": "pt3", 00:25:56.553 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:56.553 "is_configured": true, 00:25:56.553 "data_offset": 2048, 00:25:56.553 "data_size": 63488 00:25:56.553 }, 00:25:56.553 { 00:25:56.553 "name": "pt4", 00:25:56.553 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:56.553 "is_configured": true, 00:25:56.553 "data_offset": 2048, 00:25:56.553 "data_size": 63488 00:25:56.553 } 00:25:56.553 ] 00:25:56.553 }' 00:25:56.553 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:56.553 12:07:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.117 12:07:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:57.374 [2024-07-21 12:07:56.168435] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:57.374 [2024-07-21 12:07:56.168782] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:57.374 [2024-07-21 12:07:56.168983] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:57.374 [2024-07-21 12:07:56.169198] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:57.374 [2024-07-21 12:07:56.169315] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:25:57.374 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:57.374 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:25:57.631 12:07:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@526 -- # raid_bdev= 00:25:57.631 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:25:57.631 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:25:57.631 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:25:57.631 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:25:57.888 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:25:58.144 [2024-07-21 12:07:56.904566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:25:58.144 [2024-07-21 12:07:56.905247] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:58.144 [2024-07-21 12:07:56.905658] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:25:58.144 [2024-07-21 12:07:56.905926] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:58.144 [2024-07-21 12:07:56.908769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:58.144 [2024-07-21 12:07:56.909095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:25:58.144 [2024-07-21 12:07:56.909437] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:25:58.144 [2024-07-21 12:07:56.909599] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:25:58.144 [2024-07-21 12:07:56.909957] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:25:58.144 [2024-07-21 12:07:56.910090] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:58.144 [2024-07-21 12:07:56.910164] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 00:25:58.144 [2024-07-21 12:07:56.910388] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:25:58.144 pt1 00:25:58.144 [2024-07-21 12:07:56.910722] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:25:58.144 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:25:58.144 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:25:58.144 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:58.144 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:25:58.144 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:58.144 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:58.144 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:58.144 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:58.144 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:58.145 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:58.145 12:07:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:58.145 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:58.145 12:07:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:58.401 12:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:58.401 "name": "raid_bdev1", 00:25:58.401 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:58.401 "strip_size_kb": 0, 00:25:58.401 "state": "configuring", 00:25:58.401 "raid_level": "raid1", 00:25:58.401 "superblock": true, 00:25:58.401 "num_base_bdevs": 4, 00:25:58.401 "num_base_bdevs_discovered": 2, 00:25:58.401 "num_base_bdevs_operational": 3, 00:25:58.401 "base_bdevs_list": [ 00:25:58.401 { 00:25:58.401 "name": null, 00:25:58.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:58.401 "is_configured": false, 00:25:58.401 "data_offset": 2048, 00:25:58.401 "data_size": 63488 00:25:58.401 }, 00:25:58.401 { 00:25:58.401 "name": "pt2", 00:25:58.401 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:58.401 "is_configured": true, 00:25:58.401 "data_offset": 2048, 00:25:58.401 "data_size": 63488 00:25:58.401 }, 00:25:58.401 { 00:25:58.401 "name": "pt3", 00:25:58.401 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:58.401 "is_configured": true, 00:25:58.401 "data_offset": 2048, 00:25:58.401 "data_size": 63488 00:25:58.401 }, 00:25:58.401 { 00:25:58.401 "name": null, 00:25:58.401 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:58.401 "is_configured": false, 00:25:58.401 "data_offset": 2048, 00:25:58.401 "data_size": 63488 00:25:58.401 } 00:25:58.401 ] 00:25:58.401 }' 00:25:58.401 12:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:58.401 12:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:58.964 12:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:25:58.964 12:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:25:59.220 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:25:59.220 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:25:59.478 [2024-07-21 12:07:58.257853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:25:59.478 [2024-07-21 12:07:58.258744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:59.478 [2024-07-21 12:07:58.259040] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:25:59.478 [2024-07-21 12:07:58.259313] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:59.478 [2024-07-21 12:07:58.260050] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:59.478 [2024-07-21 12:07:58.260341] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:25:59.478 [2024-07-21 12:07:58.260697] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:25:59.478 [2024-07-21 12:07:58.260847] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:25:59.478 [2024-07-21 12:07:58.261108] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cc80 00:25:59.478 [2024-07-21 12:07:58.261235] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:25:59.478 [2024-07-21 12:07:58.261386] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:25:59.478 [2024-07-21 12:07:58.261864] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cc80 00:25:59.478 [2024-07-21 12:07:58.261992] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cc80 00:25:59.478 [2024-07-21 12:07:58.262260] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.478 pt4 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:59.478 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:59.735 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:25:59.736 "name": "raid_bdev1", 00:25:59.736 "uuid": "98291dd4-90e2-4280-9612-104723ec7803", 00:25:59.736 "strip_size_kb": 0, 00:25:59.736 "state": "online", 00:25:59.736 "raid_level": "raid1", 00:25:59.736 "superblock": true, 00:25:59.736 "num_base_bdevs": 4, 00:25:59.736 "num_base_bdevs_discovered": 3, 00:25:59.736 "num_base_bdevs_operational": 3, 00:25:59.736 "base_bdevs_list": [ 00:25:59.736 { 00:25:59.736 "name": null, 00:25:59.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:59.736 "is_configured": false, 00:25:59.736 "data_offset": 2048, 00:25:59.736 "data_size": 63488 00:25:59.736 }, 00:25:59.736 { 00:25:59.736 "name": "pt2", 00:25:59.736 "uuid": "54b3008f-f170-5eb1-b20f-c1ddb931f0ea", 00:25:59.736 "is_configured": true, 00:25:59.736 "data_offset": 2048, 00:25:59.736 "data_size": 63488 00:25:59.736 }, 00:25:59.736 { 00:25:59.736 "name": "pt3", 00:25:59.736 "uuid": "8365022c-02b1-5192-a184-11d231a81a53", 00:25:59.736 "is_configured": true, 00:25:59.736 "data_offset": 2048, 00:25:59.736 "data_size": 63488 00:25:59.736 }, 00:25:59.736 { 00:25:59.736 "name": "pt4", 00:25:59.736 "uuid": "5f2597d0-633b-5fbd-abf7-9e81e4e65421", 00:25:59.736 
"is_configured": true, 00:25:59.736 "data_offset": 2048, 00:25:59.736 "data_size": 63488 00:25:59.736 } 00:25:59.736 ] 00:25:59.736 }' 00:25:59.736 12:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:25:59.736 12:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:00.312 12:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:26:00.312 12:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:26:00.570 12:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:26:00.570 12:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:26:00.570 12:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:00.827 [2024-07-21 12:07:59.630822] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 98291dd4-90e2-4280-9612-104723ec7803 '!=' 98291dd4-90e2-4280-9612-104723ec7803 ']' 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 153169 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 153169 ']' 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # kill -0 153169 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # uname 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 153169 00:26:00.827 killing process with pid 153169 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 153169' 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@965 -- # kill 153169 00:26:00.827 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # wait 153169 00:26:00.827 [2024-07-21 12:07:59.678052] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:00.827 [2024-07-21 12:07:59.678135] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:00.827 [2024-07-21 12:07:59.678249] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:00.827 [2024-07-21 12:07:59.678262] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state offline 00:26:01.084 [2024-07-21 12:07:59.722417] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:01.342 12:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:26:01.342 00:26:01.342 real 0m26.804s 00:26:01.342 user 0m50.862s 00:26:01.342 sys 0m3.223s 00:26:01.342 12:07:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:01.342 12:07:59 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.342 ************************************ 00:26:01.342 END TEST raid_superblock_test 00:26:01.342 ************************************ 00:26:01.342 12:08:00 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:26:01.342 12:08:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:26:01.342 12:08:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:01.342 12:08:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.342 ************************************ 00:26:01.342 START TEST raid_read_error_test 00:26:01.342 ************************************ 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 4 read 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:01.342 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 
00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.rKogu2Z5Nm 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=154026 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 154026 /var/tmp/spdk-raid.sock 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@827 -- # '[' -z 154026 ']' 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:01.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:01.343 12:08:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:01.343 [2024-07-21 12:08:00.095520] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:01.343 [2024-07-21 12:08:00.096012] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154026 ] 00:26:01.601 [2024-07-21 12:08:00.263981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.601 [2024-07-21 12:08:00.364797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.601 [2024-07-21 12:08:00.424337] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.531 12:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:02.531 12:08:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # return 0 00:26:02.531 12:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:02.531 12:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:02.531 BaseBdev1_malloc 00:26:02.531 12:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:02.789 true 00:26:02.789 12:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:03.046 [2024-07-21 12:08:01.826937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:03.046 [2024-07-21 12:08:01.827405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.046 [2024-07-21 12:08:01.827630] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:03.046 [2024-07-21 12:08:01.827809] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.046 [2024-07-21 12:08:01.830813] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.046 [2024-07-21 12:08:01.831009] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:03.046 BaseBdev1 00:26:03.046 12:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:03.046 12:08:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:03.304 BaseBdev2_malloc 00:26:03.304 12:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:03.562 true 00:26:03.562 12:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:03.819 [2024-07-21 12:08:02.579070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:03.819 [2024-07-21 12:08:02.579424] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:03.819 [2024-07-21 12:08:02.579538] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:26:03.819 [2024-07-21 12:08:02.579792] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:03.819 [2024-07-21 12:08:02.582533] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:03.819 [2024-07-21 12:08:02.582762] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:03.819 BaseBdev2 00:26:03.819 12:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:03.819 12:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:04.078 BaseBdev3_malloc 00:26:04.078 12:08:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:04.335 true 00:26:04.335 12:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:04.591 [2024-07-21 12:08:03.339878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:04.591 [2024-07-21 12:08:03.341251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:04.591 [2024-07-21 12:08:03.341355] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:04.591 [2024-07-21 12:08:03.341568] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:04.591 [2024-07-21 12:08:03.344283] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:04.591 [2024-07-21 12:08:03.344493] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:04.591 BaseBdev3 00:26:04.591 12:08:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:04.591 12:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:04.847 BaseBdev4_malloc 00:26:04.847 12:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:05.104 true 00:26:05.104 12:08:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:05.360 [2024-07-21 12:08:04.080155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:05.360 [2024-07-21 12:08:04.080526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:05.360 [2024-07-21 12:08:04.080699] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:26:05.360 [2024-07-21 12:08:04.080871] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:05.360 [2024-07-21 12:08:04.083725] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:05.360 [2024-07-21 12:08:04.083918] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:05.360 BaseBdev4 00:26:05.360 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:05.617 [2024-07-21 12:08:04.316421] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:05.617 [2024-07-21 12:08:04.319137] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:05.617 [2024-07-21 12:08:04.319373] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:05.617 [2024-07-21 12:08:04.319617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:05.617 [2024-07-21 12:08:04.320104] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:26:05.617 [2024-07-21 12:08:04.320239] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:05.617 [2024-07-21 12:08:04.320472] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:26:05.617 [2024-07-21 12:08:04.321085] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:26:05.617 [2024-07-21 12:08:04.321218] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:26:05.617 [2024-07-21 12:08:04.321550] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:05.617 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:05.874 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:05.875 "name": "raid_bdev1", 00:26:05.875 "uuid": "a1344b8c-33c7-4eba-9b15-3c76d86a7744", 00:26:05.875 "strip_size_kb": 0, 00:26:05.875 "state": "online", 00:26:05.875 "raid_level": "raid1", 00:26:05.875 "superblock": true, 00:26:05.875 "num_base_bdevs": 4, 00:26:05.875 "num_base_bdevs_discovered": 4, 00:26:05.875 "num_base_bdevs_operational": 4, 00:26:05.875 "base_bdevs_list": [ 00:26:05.875 { 00:26:05.875 "name": "BaseBdev1", 00:26:05.875 "uuid": "61fd4dca-cea4-5634-8f7b-11752f8cbfc5", 00:26:05.875 "is_configured": true, 00:26:05.875 "data_offset": 2048, 00:26:05.875 "data_size": 63488 00:26:05.875 }, 00:26:05.875 { 00:26:05.875 "name": "BaseBdev2", 00:26:05.875 "uuid": "c8e2590b-07a3-54b3-be24-d90d685a7054", 00:26:05.875 "is_configured": true, 00:26:05.875 "data_offset": 2048, 00:26:05.875 "data_size": 63488 00:26:05.875 }, 00:26:05.875 { 00:26:05.875 "name": "BaseBdev3", 00:26:05.875 "uuid": "c5e91a27-e431-5db0-ab91-0f24a53de1a8", 00:26:05.875 "is_configured": true, 00:26:05.875 "data_offset": 2048, 00:26:05.875 "data_size": 63488 00:26:05.875 }, 00:26:05.875 { 00:26:05.875 "name": "BaseBdev4", 00:26:05.875 "uuid": "e766cc66-e3f7-5fe5-88ee-2bfdb4d82a11", 00:26:05.875 "is_configured": true, 00:26:05.875 "data_offset": 2048, 00:26:05.875 "data_size": 63488 00:26:05.875 } 00:26:05.875 ] 00:26:05.875 }' 00:26:05.875 12:08:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:05.875 12:08:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.441 12:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:06.441 12:08:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:06.711 [2024-07-21 12:08:05.318256] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # 
expected_num_base_bdevs=4 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:07.657 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:07.915 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:07.915 "name": "raid_bdev1", 00:26:07.915 "uuid": "a1344b8c-33c7-4eba-9b15-3c76d86a7744", 00:26:07.915 "strip_size_kb": 0, 00:26:07.915 "state": "online", 00:26:07.915 "raid_level": "raid1", 00:26:07.915 "superblock": true, 00:26:07.915 "num_base_bdevs": 4, 00:26:07.915 "num_base_bdevs_discovered": 4, 00:26:07.915 "num_base_bdevs_operational": 4, 00:26:07.915 "base_bdevs_list": [ 00:26:07.915 { 00:26:07.915 "name": "BaseBdev1", 00:26:07.915 "uuid": "61fd4dca-cea4-5634-8f7b-11752f8cbfc5", 00:26:07.915 "is_configured": true, 00:26:07.915 "data_offset": 2048, 00:26:07.915 "data_size": 63488 00:26:07.915 }, 00:26:07.915 { 00:26:07.915 "name": "BaseBdev2", 00:26:07.915 "uuid": "c8e2590b-07a3-54b3-be24-d90d685a7054", 00:26:07.915 "is_configured": true, 00:26:07.915 "data_offset": 2048, 00:26:07.915 "data_size": 63488 00:26:07.915 }, 00:26:07.915 { 00:26:07.915 "name": "BaseBdev3", 00:26:07.915 "uuid": "c5e91a27-e431-5db0-ab91-0f24a53de1a8", 00:26:07.915 "is_configured": true, 00:26:07.915 "data_offset": 2048, 00:26:07.915 "data_size": 63488 00:26:07.915 }, 00:26:07.915 { 00:26:07.915 "name": "BaseBdev4", 00:26:07.915 "uuid": "e766cc66-e3f7-5fe5-88ee-2bfdb4d82a11", 00:26:07.915 "is_configured": true, 00:26:07.915 "data_offset": 2048, 00:26:07.915 "data_size": 63488 00:26:07.915 } 00:26:07.915 ] 00:26:07.915 }' 00:26:07.915 12:08:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:07.915 12:08:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.848 12:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:08.848 [2024-07-21 12:08:07.678543] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:08.848 [2024-07-21 12:08:07.678941] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:08.848 [2024-07-21 12:08:07.681944] bdev_raid.c: 
474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:08.848 [2024-07-21 12:08:07.682175] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:08.848 [2024-07-21 12:08:07.682357] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:08.848 [2024-07-21 12:08:07.682501] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:26:08.848 0 00:26:08.848 12:08:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 154026 00:26:08.848 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@946 -- # '[' -z 154026 ']' 00:26:08.848 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # kill -0 154026 00:26:08.848 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # uname 00:26:08.848 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:08.848 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 154026 00:26:09.106 killing process with pid 154026 00:26:09.106 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:09.106 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:09.106 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 154026' 00:26:09.106 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@965 -- # kill 154026 00:26:09.106 12:08:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # wait 154026 00:26:09.106 [2024-07-21 12:08:07.723315] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:09.106 [2024-07-21 12:08:07.760686] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.rKogu2Z5Nm 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:09.364 00:26:09.364 real 0m8.002s 00:26:09.364 user 0m13.216s 00:26:09.364 sys 0m1.005s 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:09.364 12:08:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.364 ************************************ 00:26:09.364 END TEST raid_read_error_test 00:26:09.364 ************************************ 00:26:09.364 12:08:08 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:26:09.364 12:08:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:26:09.364 12:08:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:09.364 12:08:08 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:26:09.364 ************************************ 00:26:09.364 START TEST raid_write_error_test 00:26:09.364 ************************************ 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1121 -- # raid_io_error_test raid1 4 write 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev1 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev2 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev3 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # echo BaseBdev4 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.iBGeXsgCSO 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=154231 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 
-o 128k -q 1 -z -f -L bdev_raid 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 154231 /var/tmp/spdk-raid.sock 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@827 -- # '[' -z 154231 ']' 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:09.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:09.364 12:08:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.364 [2024-07-21 12:08:08.160250] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:09.364 [2024-07-21 12:08:08.161318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154231 ] 00:26:09.622 [2024-07-21 12:08:08.331372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.622 [2024-07-21 12:08:08.432566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.879 [2024-07-21 12:08:08.492229] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:10.444 12:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:10.444 12:08:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # return 0 00:26:10.444 12:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:10.444 12:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:10.702 BaseBdev1_malloc 00:26:10.702 12:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:26:10.959 true 00:26:10.959 12:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:11.217 [2024-07-21 12:08:09.973709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:11.217 [2024-07-21 12:08:09.974051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.217 [2024-07-21 12:08:09.974281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:26:11.217 [2024-07-21 12:08:09.974463] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.217 [2024-07-21 12:08:09.977449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.217 [2024-07-21 12:08:09.977651] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:11.217 BaseBdev1 00:26:11.217 12:08:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:11.217 12:08:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:11.474 BaseBdev2_malloc 00:26:11.474 12:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:26:11.732 true 00:26:11.732 12:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:11.988 [2024-07-21 12:08:10.697728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:11.988 [2024-07-21 12:08:10.698152] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:11.988 [2024-07-21 12:08:10.698355] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:26:11.988 [2024-07-21 12:08:10.698517] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:11.988 [2024-07-21 12:08:10.701286] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:11.988 [2024-07-21 12:08:10.701501] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:11.988 BaseBdev2 00:26:11.988 12:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:11.988 12:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:26:12.245 BaseBdev3_malloc 00:26:12.245 12:08:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:26:12.501 true 00:26:12.501 12:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:26:12.758 [2024-07-21 12:08:11.459210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:26:12.758 [2024-07-21 12:08:11.459539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:12.758 [2024-07-21 12:08:11.459717] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:12.758 [2024-07-21 12:08:11.459908] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:12.758 [2024-07-21 12:08:11.462711] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:12.758 [2024-07-21 12:08:11.462905] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:26:12.758 BaseBdev3 00:26:12.758 12:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:26:12.758 12:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:26:13.014 BaseBdev4_malloc 00:26:13.014 12:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:26:13.270 
true 00:26:13.270 12:08:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:26:13.527 [2024-07-21 12:08:12.174550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:26:13.527 [2024-07-21 12:08:12.174977] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:13.527 [2024-07-21 12:08:12.175070] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:26:13.527 [2024-07-21 12:08:12.175358] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:13.527 [2024-07-21 12:08:12.178181] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:13.527 [2024-07-21 12:08:12.178373] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:26:13.527 BaseBdev4 00:26:13.527 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:26:13.784 [2024-07-21 12:08:12.474937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:13.784 [2024-07-21 12:08:12.477521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:13.784 [2024-07-21 12:08:12.477768] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:26:13.784 [2024-07-21 12:08:12.477981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:26:13.784 [2024-07-21 12:08:12.478411] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:26:13.784 [2024-07-21 12:08:12.478548] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:13.784 [2024-07-21 12:08:12.478809] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:26:13.784 [2024-07-21 12:08:12.479404] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:26:13.784 [2024-07-21 12:08:12.479539] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:26:13.784 [2024-07-21 12:08:12.479903] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:13.784 12:08:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:13.784 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:14.041 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:14.041 "name": "raid_bdev1", 00:26:14.041 "uuid": "69d912a6-3b72-463e-b748-df6fe6e5ea15", 00:26:14.041 "strip_size_kb": 0, 00:26:14.041 "state": "online", 00:26:14.041 "raid_level": "raid1", 00:26:14.041 "superblock": true, 00:26:14.041 "num_base_bdevs": 4, 00:26:14.041 "num_base_bdevs_discovered": 4, 00:26:14.041 "num_base_bdevs_operational": 4, 00:26:14.041 "base_bdevs_list": [ 00:26:14.041 { 00:26:14.041 "name": "BaseBdev1", 00:26:14.041 "uuid": "27739c30-38ce-5775-bef3-10b4d8997a4b", 00:26:14.041 "is_configured": true, 00:26:14.041 "data_offset": 2048, 00:26:14.041 "data_size": 63488 00:26:14.041 }, 00:26:14.041 { 00:26:14.041 "name": "BaseBdev2", 00:26:14.041 "uuid": "b811bf18-cd7b-502c-a5bb-d6b4e1ddda27", 00:26:14.041 "is_configured": true, 00:26:14.042 "data_offset": 2048, 00:26:14.042 "data_size": 63488 00:26:14.042 }, 00:26:14.042 { 00:26:14.042 "name": "BaseBdev3", 00:26:14.042 "uuid": "826319c8-3e3e-5889-b3d2-a7becdb0b99d", 00:26:14.042 "is_configured": true, 00:26:14.042 "data_offset": 2048, 00:26:14.042 "data_size": 63488 00:26:14.042 }, 00:26:14.042 { 00:26:14.042 "name": "BaseBdev4", 00:26:14.042 "uuid": "55eb4442-3b60-5bd3-8a40-b84a45a39050", 00:26:14.042 "is_configured": true, 00:26:14.042 "data_offset": 2048, 00:26:14.042 "data_size": 63488 00:26:14.042 } 00:26:14.042 ] 00:26:14.042 }' 00:26:14.042 12:08:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:14.042 12:08:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.613 12:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:26:14.613 12:08:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:26:14.613 [2024-07-21 12:08:13.464567] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:26:15.544 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:15.801 [2024-07-21 12:08:14.637561] bdev_raid.c:2221:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:26:15.801 [2024-07-21 12:08:14.637920] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:15.801 [2024-07-21 12:08:14.638340] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:15.801 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:16.365 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:16.365 "name": "raid_bdev1", 00:26:16.365 "uuid": "69d912a6-3b72-463e-b748-df6fe6e5ea15", 00:26:16.365 "strip_size_kb": 0, 00:26:16.365 "state": "online", 00:26:16.365 "raid_level": "raid1", 00:26:16.365 "superblock": true, 00:26:16.365 "num_base_bdevs": 4, 00:26:16.365 "num_base_bdevs_discovered": 3, 00:26:16.365 "num_base_bdevs_operational": 3, 00:26:16.365 "base_bdevs_list": [ 00:26:16.365 { 00:26:16.365 "name": null, 00:26:16.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.365 "is_configured": false, 00:26:16.365 "data_offset": 2048, 00:26:16.365 "data_size": 63488 00:26:16.365 }, 00:26:16.365 { 00:26:16.365 "name": "BaseBdev2", 00:26:16.365 "uuid": "b811bf18-cd7b-502c-a5bb-d6b4e1ddda27", 00:26:16.365 "is_configured": true, 00:26:16.365 "data_offset": 2048, 00:26:16.365 "data_size": 63488 00:26:16.365 }, 00:26:16.365 { 00:26:16.365 "name": "BaseBdev3", 00:26:16.365 "uuid": "826319c8-3e3e-5889-b3d2-a7becdb0b99d", 00:26:16.365 "is_configured": true, 00:26:16.365 "data_offset": 2048, 00:26:16.365 "data_size": 63488 00:26:16.365 }, 00:26:16.365 { 00:26:16.365 "name": "BaseBdev4", 00:26:16.365 "uuid": "55eb4442-3b60-5bd3-8a40-b84a45a39050", 00:26:16.365 "is_configured": true, 00:26:16.365 "data_offset": 2048, 00:26:16.365 "data_size": 63488 00:26:16.365 } 00:26:16.365 ] 00:26:16.365 }' 00:26:16.365 12:08:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:16.365 12:08:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:16.929 12:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:17.187 [2024-07-21 12:08:15.814933] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:17.187 [2024-07-21 12:08:15.815192] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:17.187 [2024-07-21 12:08:15.818345] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:17.187 
[2024-07-21 12:08:15.818544] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.187 [2024-07-21 12:08:15.818811] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:17.187 [2024-07-21 12:08:15.818954] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:26:17.187 0 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 154231 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@946 -- # '[' -z 154231 ']' 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # kill -0 154231 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # uname 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 154231 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 154231' 00:26:17.187 killing process with pid 154231 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@965 -- # kill 154231 00:26:17.187 [2024-07-21 12:08:15.862173] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:17.187 12:08:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # wait 154231 00:26:17.187 [2024-07-21 12:08:15.902720] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.iBGeXsgCSO 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:26:17.445 00:26:17.445 real 0m8.092s 00:26:17.445 user 0m13.232s 00:26:17.445 sys 0m1.032s 00:26:17.445 ************************************ 00:26:17.445 END TEST raid_write_error_test 00:26:17.445 ************************************ 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:17.445 12:08:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.445 12:08:16 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' true = true ']' 00:26:17.445 12:08:16 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:26:17.445 12:08:16 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:26:17.445 12:08:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 
00:26:17.445 12:08:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:17.445 12:08:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:17.445 ************************************ 00:26:17.445 START TEST raid_rebuild_test 00:26:17.445 ************************************ 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false false true 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=154440 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 154440 /var/tmp/spdk-raid.sock 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 154440 ']' 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:17.445 
12:08:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:17.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:17.445 12:08:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:17.445 [2024-07-21 12:08:16.295164] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:17.445 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:17.445 Zero copy mechanism will not be used. 00:26:17.445 [2024-07-21 12:08:16.295367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154440 ] 00:26:17.702 [2024-07-21 12:08:16.454999] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.702 [2024-07-21 12:08:16.551382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.959 [2024-07-21 12:08:16.607210] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:18.524 12:08:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:18.524 12:08:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:26:18.524 12:08:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:18.524 12:08:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:18.781 BaseBdev1_malloc 00:26:18.781 12:08:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:19.039 [2024-07-21 12:08:17.758446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:19.039 [2024-07-21 12:08:17.758619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.039 [2024-07-21 12:08:17.758680] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:26:19.039 [2024-07-21 12:08:17.758755] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.039 [2024-07-21 12:08:17.761608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.039 [2024-07-21 12:08:17.761691] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:19.039 BaseBdev1 00:26:19.039 12:08:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:19.039 12:08:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:19.296 BaseBdev2_malloc 00:26:19.296 12:08:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:19.553 [2024-07-21 12:08:18.337754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:26:19.553 [2024-07-21 12:08:18.337870] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:19.553 [2024-07-21 12:08:18.337959] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:19.553 [2024-07-21 12:08:18.338004] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:19.553 [2024-07-21 12:08:18.340674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:19.553 [2024-07-21 12:08:18.340733] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:19.553 BaseBdev2 00:26:19.553 12:08:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:19.811 spare_malloc 00:26:19.811 12:08:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:20.377 spare_delay 00:26:20.377 12:08:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:20.377 [2024-07-21 12:08:19.167237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:20.377 [2024-07-21 12:08:19.167382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:20.377 [2024-07-21 12:08:19.167441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:26:20.377 [2024-07-21 12:08:19.167502] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:20.377 [2024-07-21 12:08:19.170150] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:20.377 [2024-07-21 12:08:19.170232] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:20.377 spare 00:26:20.377 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:20.636 [2024-07-21 12:08:19.431388] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:20.636 [2024-07-21 12:08:19.433729] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:20.636 [2024-07-21 12:08:19.433875] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:26:20.636 [2024-07-21 12:08:19.433889] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:20.636 [2024-07-21 12:08:19.434094] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:20.636 [2024-07-21 12:08:19.434570] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:26:20.636 [2024-07-21 12:08:19.434614] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:26:20.636 [2024-07-21 12:08:19.434822] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:20.636 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:20.894 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:20.894 "name": "raid_bdev1", 00:26:20.894 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:20.894 "strip_size_kb": 0, 00:26:20.894 "state": "online", 00:26:20.894 "raid_level": "raid1", 00:26:20.894 "superblock": false, 00:26:20.894 "num_base_bdevs": 2, 00:26:20.894 "num_base_bdevs_discovered": 2, 00:26:20.894 "num_base_bdevs_operational": 2, 00:26:20.894 "base_bdevs_list": [ 00:26:20.894 { 00:26:20.894 "name": "BaseBdev1", 00:26:20.894 "uuid": "57b473ac-b4a9-5f09-bce8-c14488beb4cd", 00:26:20.894 "is_configured": true, 00:26:20.895 "data_offset": 0, 00:26:20.895 "data_size": 65536 00:26:20.895 }, 00:26:20.895 { 00:26:20.895 "name": "BaseBdev2", 00:26:20.895 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:20.895 "is_configured": true, 00:26:20.895 "data_offset": 0, 00:26:20.895 "data_size": 65536 00:26:20.895 } 00:26:20.895 ] 00:26:20.895 }' 00:26:20.895 12:08:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:20.895 12:08:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.827 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:21.827 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:21.827 [2024-07-21 12:08:20.571845] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:21.827 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:26:21.827 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:21.827 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock 
raid_bdev1 /dev/nbd0 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:22.085 12:08:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:22.341 [2024-07-21 12:08:21.059764] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:26:22.341 /dev/nbd0 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:22.341 1+0 records in 00:26:22.341 1+0 records out 00:26:22.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343247 s, 11.9 MB/s 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:26:22.341 12:08:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:26:28.918 65536+0 records in 00:26:28.918 65536+0 records out 00:26:28.918 33554432 bytes (34 MB, 32 MiB) copied, 5.81884 s, 5.8 MB/s 00:26:28.918 12:08:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:28.918 12:08:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:28.918 12:08:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:28.918 12:08:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:28.918 12:08:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:28.918 12:08:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:28.918 12:08:26 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:28.918 [2024-07-21 12:08:27.234823] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:28.918 [2024-07-21 12:08:27.530536] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:28.918 "name": "raid_bdev1", 00:26:28.918 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:28.918 "strip_size_kb": 0, 00:26:28.918 "state": "online", 00:26:28.918 "raid_level": "raid1", 00:26:28.918 "superblock": false, 00:26:28.918 "num_base_bdevs": 2, 00:26:28.918 "num_base_bdevs_discovered": 1, 00:26:28.918 "num_base_bdevs_operational": 1, 00:26:28.918 "base_bdevs_list": [ 00:26:28.918 { 00:26:28.918 "name": null, 00:26:28.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:28.918 "is_configured": false, 00:26:28.918 "data_offset": 0, 00:26:28.918 "data_size": 65536 00:26:28.918 }, 00:26:28.918 { 00:26:28.918 "name": "BaseBdev2", 00:26:28.918 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:28.918 "is_configured": true, 00:26:28.918 "data_offset": 0, 00:26:28.918 "data_size": 65536 00:26:28.918 } 00:26:28.918 ] 00:26:28.918 }' 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:28.918 12:08:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.848 12:08:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:29.848 [2024-07-21 12:08:28.606792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:29.848 [2024-07-21 12:08:28.612468] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d096f0 00:26:29.848 [2024-07-21 12:08:28.614824] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:29.848 12:08:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:30.777 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:30.777 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:30.777 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:30.777 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:30.777 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:30.777 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:30.777 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.339 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:31.339 "name": "raid_bdev1", 00:26:31.339 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:31.339 "strip_size_kb": 0, 00:26:31.339 "state": "online", 00:26:31.339 "raid_level": "raid1", 00:26:31.339 "superblock": false, 00:26:31.339 "num_base_bdevs": 2, 00:26:31.339 "num_base_bdevs_discovered": 2, 00:26:31.339 "num_base_bdevs_operational": 2, 00:26:31.339 "process": { 00:26:31.339 "type": "rebuild", 00:26:31.339 "target": "spare", 00:26:31.339 "progress": { 00:26:31.339 "blocks": 24576, 00:26:31.339 "percent": 37 00:26:31.339 } 00:26:31.339 }, 00:26:31.339 "base_bdevs_list": [ 00:26:31.339 { 00:26:31.339 "name": "spare", 00:26:31.339 "uuid": "b6605057-a97d-5d63-8836-08640be2633b", 00:26:31.339 "is_configured": true, 00:26:31.339 "data_offset": 0, 00:26:31.339 
"data_size": 65536 00:26:31.339 }, 00:26:31.339 { 00:26:31.339 "name": "BaseBdev2", 00:26:31.339 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:31.339 "is_configured": true, 00:26:31.339 "data_offset": 0, 00:26:31.339 "data_size": 65536 00:26:31.339 } 00:26:31.339 ] 00:26:31.339 }' 00:26:31.339 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:31.339 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:31.339 12:08:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:31.339 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:31.339 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:31.596 [2024-07-21 12:08:30.261019] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:31.596 [2024-07-21 12:08:30.327792] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:31.596 [2024-07-21 12:08:30.327953] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:31.596 [2024-07-21 12:08:30.327977] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:31.596 [2024-07-21 12:08:30.327987] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:31.596 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:31.854 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:31.854 "name": "raid_bdev1", 00:26:31.854 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:31.854 "strip_size_kb": 0, 00:26:31.854 "state": "online", 00:26:31.854 "raid_level": "raid1", 00:26:31.854 "superblock": false, 00:26:31.854 "num_base_bdevs": 2, 00:26:31.854 "num_base_bdevs_discovered": 1, 00:26:31.854 "num_base_bdevs_operational": 1, 00:26:31.854 "base_bdevs_list": [ 00:26:31.854 { 00:26:31.854 "name": null, 00:26:31.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:31.854 "is_configured": false, 
00:26:31.854 "data_offset": 0, 00:26:31.854 "data_size": 65536 00:26:31.854 }, 00:26:31.854 { 00:26:31.854 "name": "BaseBdev2", 00:26:31.854 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:31.854 "is_configured": true, 00:26:31.854 "data_offset": 0, 00:26:31.854 "data_size": 65536 00:26:31.854 } 00:26:31.854 ] 00:26:31.854 }' 00:26:31.854 12:08:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:31.854 12:08:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.420 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:32.420 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:32.420 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:32.420 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:32.420 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:32.420 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:32.420 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.678 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:32.678 "name": "raid_bdev1", 00:26:32.678 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:32.678 "strip_size_kb": 0, 00:26:32.678 "state": "online", 00:26:32.678 "raid_level": "raid1", 00:26:32.678 "superblock": false, 00:26:32.678 "num_base_bdevs": 2, 00:26:32.678 "num_base_bdevs_discovered": 1, 00:26:32.678 "num_base_bdevs_operational": 1, 00:26:32.678 "base_bdevs_list": [ 00:26:32.678 { 00:26:32.678 "name": null, 00:26:32.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:32.678 "is_configured": false, 00:26:32.678 "data_offset": 0, 00:26:32.678 "data_size": 65536 00:26:32.678 }, 00:26:32.678 { 00:26:32.678 "name": "BaseBdev2", 00:26:32.678 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:32.678 "is_configured": true, 00:26:32.678 "data_offset": 0, 00:26:32.678 "data_size": 65536 00:26:32.678 } 00:26:32.678 ] 00:26:32.678 }' 00:26:32.678 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:32.935 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:32.935 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:32.935 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:32.936 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:33.194 [2024-07-21 12:08:31.878245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:33.194 [2024-07-21 12:08:31.883732] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09890 00:26:33.194 [2024-07-21 12:08:31.885982] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:33.194 12:08:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:34.128 12:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:26:34.128 12:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:34.128 12:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:34.128 12:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:34.128 12:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:34.128 12:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.128 12:08:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.385 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.385 "name": "raid_bdev1", 00:26:34.385 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:34.385 "strip_size_kb": 0, 00:26:34.385 "state": "online", 00:26:34.385 "raid_level": "raid1", 00:26:34.385 "superblock": false, 00:26:34.385 "num_base_bdevs": 2, 00:26:34.385 "num_base_bdevs_discovered": 2, 00:26:34.385 "num_base_bdevs_operational": 2, 00:26:34.385 "process": { 00:26:34.385 "type": "rebuild", 00:26:34.385 "target": "spare", 00:26:34.385 "progress": { 00:26:34.385 "blocks": 24576, 00:26:34.385 "percent": 37 00:26:34.385 } 00:26:34.385 }, 00:26:34.385 "base_bdevs_list": [ 00:26:34.385 { 00:26:34.385 "name": "spare", 00:26:34.385 "uuid": "b6605057-a97d-5d63-8836-08640be2633b", 00:26:34.385 "is_configured": true, 00:26:34.385 "data_offset": 0, 00:26:34.385 "data_size": 65536 00:26:34.385 }, 00:26:34.385 { 00:26:34.385 "name": "BaseBdev2", 00:26:34.385 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:34.385 "is_configured": true, 00:26:34.385 "data_offset": 0, 00:26:34.385 "data_size": 65536 00:26:34.385 } 00:26:34.385 ] 00:26:34.385 }' 00:26:34.385 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:34.385 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:34.385 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=798 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:34.642 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:34.900 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:34.900 "name": "raid_bdev1", 00:26:34.900 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:34.900 "strip_size_kb": 0, 00:26:34.900 "state": "online", 00:26:34.900 "raid_level": "raid1", 00:26:34.900 "superblock": false, 00:26:34.900 "num_base_bdevs": 2, 00:26:34.900 "num_base_bdevs_discovered": 2, 00:26:34.900 "num_base_bdevs_operational": 2, 00:26:34.900 "process": { 00:26:34.900 "type": "rebuild", 00:26:34.900 "target": "spare", 00:26:34.900 "progress": { 00:26:34.900 "blocks": 32768, 00:26:34.900 "percent": 50 00:26:34.900 } 00:26:34.900 }, 00:26:34.900 "base_bdevs_list": [ 00:26:34.900 { 00:26:34.900 "name": "spare", 00:26:34.900 "uuid": "b6605057-a97d-5d63-8836-08640be2633b", 00:26:34.900 "is_configured": true, 00:26:34.900 "data_offset": 0, 00:26:34.900 "data_size": 65536 00:26:34.900 }, 00:26:34.900 { 00:26:34.900 "name": "BaseBdev2", 00:26:34.900 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:34.900 "is_configured": true, 00:26:34.900 "data_offset": 0, 00:26:34.900 "data_size": 65536 00:26:34.900 } 00:26:34.900 ] 00:26:34.900 }' 00:26:34.900 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:34.900 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:34.900 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:34.900 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:34.900 12:08:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:35.832 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:35.832 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:35.832 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:35.832 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:35.832 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:35.832 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:35.832 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:35.832 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:36.089 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:36.089 "name": "raid_bdev1", 00:26:36.089 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:36.089 "strip_size_kb": 0, 00:26:36.089 "state": "online", 00:26:36.089 "raid_level": "raid1", 00:26:36.089 "superblock": false, 00:26:36.089 "num_base_bdevs": 2, 00:26:36.089 "num_base_bdevs_discovered": 2, 00:26:36.089 "num_base_bdevs_operational": 2, 00:26:36.089 "process": { 00:26:36.089 "type": "rebuild", 00:26:36.089 "target": "spare", 00:26:36.089 "progress": { 00:26:36.089 "blocks": 61440, 00:26:36.089 "percent": 93 00:26:36.089 } 00:26:36.089 
}, 00:26:36.089 "base_bdevs_list": [ 00:26:36.089 { 00:26:36.089 "name": "spare", 00:26:36.089 "uuid": "b6605057-a97d-5d63-8836-08640be2633b", 00:26:36.089 "is_configured": true, 00:26:36.089 "data_offset": 0, 00:26:36.089 "data_size": 65536 00:26:36.089 }, 00:26:36.089 { 00:26:36.089 "name": "BaseBdev2", 00:26:36.089 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:36.089 "is_configured": true, 00:26:36.089 "data_offset": 0, 00:26:36.089 "data_size": 65536 00:26:36.089 } 00:26:36.089 ] 00:26:36.089 }' 00:26:36.089 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:36.347 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:36.347 12:08:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:36.347 12:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:36.347 12:08:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:36.347 [2024-07-21 12:08:35.108306] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:36.347 [2024-07-21 12:08:35.108411] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:36.347 [2024-07-21 12:08:35.108537] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:37.279 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:37.279 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:37.279 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:37.279 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:37.279 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:37.279 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:37.279 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.279 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:37.538 "name": "raid_bdev1", 00:26:37.538 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:37.538 "strip_size_kb": 0, 00:26:37.538 "state": "online", 00:26:37.538 "raid_level": "raid1", 00:26:37.538 "superblock": false, 00:26:37.538 "num_base_bdevs": 2, 00:26:37.538 "num_base_bdevs_discovered": 2, 00:26:37.538 "num_base_bdevs_operational": 2, 00:26:37.538 "base_bdevs_list": [ 00:26:37.538 { 00:26:37.538 "name": "spare", 00:26:37.538 "uuid": "b6605057-a97d-5d63-8836-08640be2633b", 00:26:37.538 "is_configured": true, 00:26:37.538 "data_offset": 0, 00:26:37.538 "data_size": 65536 00:26:37.538 }, 00:26:37.538 { 00:26:37.538 "name": "BaseBdev2", 00:26:37.538 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:37.538 "is_configured": true, 00:26:37.538 "data_offset": 0, 00:26:37.538 "data_size": 65536 00:26:37.538 } 00:26:37.538 ] 00:26:37.538 }' 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == 
\r\e\b\u\i\l\d ]] 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:37.538 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:37.796 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:37.796 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:38.053 "name": "raid_bdev1", 00:26:38.053 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:38.053 "strip_size_kb": 0, 00:26:38.053 "state": "online", 00:26:38.053 "raid_level": "raid1", 00:26:38.053 "superblock": false, 00:26:38.053 "num_base_bdevs": 2, 00:26:38.053 "num_base_bdevs_discovered": 2, 00:26:38.053 "num_base_bdevs_operational": 2, 00:26:38.053 "base_bdevs_list": [ 00:26:38.053 { 00:26:38.053 "name": "spare", 00:26:38.053 "uuid": "b6605057-a97d-5d63-8836-08640be2633b", 00:26:38.053 "is_configured": true, 00:26:38.053 "data_offset": 0, 00:26:38.053 "data_size": 65536 00:26:38.053 }, 00:26:38.053 { 00:26:38.053 "name": "BaseBdev2", 00:26:38.053 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:38.053 "is_configured": true, 00:26:38.053 "data_offset": 0, 00:26:38.053 "data_size": 65536 00:26:38.053 } 00:26:38.053 ] 00:26:38.053 }' 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:38.053 
12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:38.053 12:08:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:38.310 12:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:38.310 "name": "raid_bdev1", 00:26:38.310 "uuid": "cee03957-797f-466f-a320-3b79c0e44849", 00:26:38.310 "strip_size_kb": 0, 00:26:38.310 "state": "online", 00:26:38.310 "raid_level": "raid1", 00:26:38.310 "superblock": false, 00:26:38.310 "num_base_bdevs": 2, 00:26:38.310 "num_base_bdevs_discovered": 2, 00:26:38.310 "num_base_bdevs_operational": 2, 00:26:38.310 "base_bdevs_list": [ 00:26:38.310 { 00:26:38.310 "name": "spare", 00:26:38.310 "uuid": "b6605057-a97d-5d63-8836-08640be2633b", 00:26:38.310 "is_configured": true, 00:26:38.310 "data_offset": 0, 00:26:38.310 "data_size": 65536 00:26:38.310 }, 00:26:38.310 { 00:26:38.310 "name": "BaseBdev2", 00:26:38.310 "uuid": "b6814e2f-6876-5138-876a-0a68586a460f", 00:26:38.310 "is_configured": true, 00:26:38.310 "data_offset": 0, 00:26:38.310 "data_size": 65536 00:26:38.310 } 00:26:38.310 ] 00:26:38.310 }' 00:26:38.310 12:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:38.310 12:08:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.874 12:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:26:39.130 [2024-07-21 12:08:37.866816] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:39.130 [2024-07-21 12:08:37.866867] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:39.130 [2024-07-21 12:08:37.866984] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:39.131 [2024-07-21 12:08:37.867095] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:39.131 [2024-07-21 12:08:37.867110] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:26:39.131 12:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:26:39.131 12:08:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:39.388 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:26:39.645 /dev/nbd0 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:39.645 1+0 records in 00:26:39.645 1+0 records out 00:26:39.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570285 s, 7.2 MB/s 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:39.645 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:26:39.901 /dev/nbd1 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:39.901 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:39.901 1+0 records in 00:26:39.901 1+0 records out 00:26:40.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00066224 s, 6.2 MB/s 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:40.157 12:08:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:40.414 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
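The integrity check in this stretch exports the first base bdev and the rebuilt spare as kernel NBD devices, byte-compares them, and tears the devices down again. Stripped of the helper functions it is roughly (sketch; the device nodes, the zero offset for cmp, and the /proc/partitions probe are taken from the trace):

  sock=/var/tmp/spdk-raid.sock
  ./scripts/rpc.py -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
  ./scripts/rpc.py -s "$sock" nbd_start_disk spare /dev/nbd1
  until grep -q -w nbd0 /proc/partitions && grep -q -w nbd1 /proc/partitions; do sleep 0.1; done
  cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "base bdev and rebuilt spare are identical"
  ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd0
  ./scripts/rpc.py -s "$sock" nbd_stop_disk /dev/nbd1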
00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 154440 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 154440 ']' 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 154440 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 154440 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 154440' 00:26:40.671 killing process with pid 154440 00:26:40.671 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 154440 00:26:40.671 Received shutdown signal, test time was about 60.000000 seconds 00:26:40.671 00:26:40.671 Latency(us) 00:26:40.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.671 =================================================================================================================== 00:26:40.671 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:40.672 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 154440 00:26:40.672 [2024-07-21 12:08:39.482353] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:40.672 [2024-07-21 12:08:39.514553] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:26:41.236 00:26:41.236 real 0m23.569s 00:26:41.236 user 0m33.303s 00:26:41.236 sys 0m4.102s 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:26:41.236 ************************************ 00:26:41.236 END TEST raid_rebuild_test 00:26:41.236 ************************************ 00:26:41.236 12:08:39 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:26:41.236 12:08:39 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:26:41.236 12:08:39 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:41.236 12:08:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:41.236 
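The raid_rebuild_test_sb run that starts below repeats the same rebuild flow but with an on-disk superblock: the extra true argument becomes the -s flag of bdev_raid_create, which is why the base bdevs later report data_offset 2048 and data_size 63488 instead of 0 and 65536. The setup traced in the following lines corresponds roughly to (sketch; sizes, names and flags are the ones the script uses):

  sock=/var/tmp/spdk-raid.sock
  for b in BaseBdev1 BaseBdev2; do
      ./scripts/rpc.py -s "$sock" bdev_malloc_create 32 512 -b "${b}_malloc"      # 32 MiB backing store, 512-byte blocks
      ./scripts/rpc.py -s "$sock" bdev_passthru_create -b "${b}_malloc" -p "$b"   # passthru wrapper used as the raid member
  done
  ./scripts/rpc.py -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1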
************************************ 00:26:41.236 START TEST raid_rebuild_test_sb 00:26:41.236 ************************************ 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=154992 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 154992 /var/tmp/spdk-raid.sock 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 154992 ']' 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 
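The bdevperf instance that hosts these bdevs is launched in wait mode (-z), so it does nothing until it is configured over its RPC socket; waitforlisten then blocks until that socket answers before any bdevs are created. A simplified sketch of the launch/wait pattern (the flags are the ones in the trace; the readiness probe here just waits for the UNIX socket to appear instead of reimplementing waitforlisten):

  sock=/var/tmp/spdk-raid.sock
  ./build/examples/bdevperf -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  until [ -S "$sock" ]; do sleep 0.1; done   # crude stand-in for waitforlisten "$raid_pid"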
00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:26:41.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:41.236 12:08:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.236 [2024-07-21 12:08:39.938220] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:26:41.236 [2024-07-21 12:08:39.938760] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154992 ] 00:26:41.236 I/O size of 3145728 is greater than zero copy threshold (65536). 00:26:41.236 Zero copy mechanism will not be used. 00:26:41.236 [2024-07-21 12:08:40.100198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.500 [2024-07-21 12:08:40.189963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.500 [2024-07-21 12:08:40.244860] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:42.077 12:08:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:42.077 12:08:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:26:42.077 12:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:42.077 12:08:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:42.335 BaseBdev1_malloc 00:26:42.335 12:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:26:42.592 [2024-07-21 12:08:41.428747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:26:42.592 [2024-07-21 12:08:41.429745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:42.592 [2024-07-21 12:08:41.430104] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:26:42.592 [2024-07-21 12:08:41.430452] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:42.592 [2024-07-21 12:08:41.433458] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:42.592 [2024-07-21 12:08:41.433789] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:42.592 BaseBdev1 00:26:42.592 12:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:26:42.592 12:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:42.850 BaseBdev2_malloc 00:26:42.850 12:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:26:43.108 [2024-07-21 12:08:41.901706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:26:43.108 [2024-07-21 12:08:41.902358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.108 [2024-07-21 12:08:41.902743] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:43.108 [2024-07-21 12:08:41.903045] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.108 [2024-07-21 12:08:41.905833] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.108 [2024-07-21 12:08:41.906175] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:43.108 BaseBdev2 00:26:43.108 12:08:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:26:43.366 spare_malloc 00:26:43.366 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:26:43.624 spare_delay 00:26:43.624 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:26:43.882 [2024-07-21 12:08:42.635536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:26:43.882 [2024-07-21 12:08:42.636407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:43.882 [2024-07-21 12:08:42.636776] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:26:43.882 [2024-07-21 12:08:42.637111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:43.882 [2024-07-21 12:08:42.640002] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:43.882 [2024-07-21 12:08:42.640337] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:26:43.882 spare 00:26:43.882 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:26:44.140 [2024-07-21 12:08:42.905017] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:44.140 [2024-07-21 12:08:42.907598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:44.140 [2024-07-21 12:08:42.908010] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:26:44.140 [2024-07-21 12:08:42.908150] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:26:44.140 [2024-07-21 12:08:42.908383] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:26:44.140 [2024-07-21 12:08:42.909017] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:26:44.140 [2024-07-21 12:08:42.909171] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:26:44.140 [2024-07-21 12:08:42.909522] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=raid_bdev1 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:44.140 12:08:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:44.398 12:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:44.398 "name": "raid_bdev1", 00:26:44.398 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:26:44.398 "strip_size_kb": 0, 00:26:44.398 "state": "online", 00:26:44.398 "raid_level": "raid1", 00:26:44.398 "superblock": true, 00:26:44.398 "num_base_bdevs": 2, 00:26:44.398 "num_base_bdevs_discovered": 2, 00:26:44.398 "num_base_bdevs_operational": 2, 00:26:44.398 "base_bdevs_list": [ 00:26:44.398 { 00:26:44.398 "name": "BaseBdev1", 00:26:44.398 "uuid": "e8b7ae0a-1cb3-5e53-9b82-9818d851e421", 00:26:44.398 "is_configured": true, 00:26:44.398 "data_offset": 2048, 00:26:44.398 "data_size": 63488 00:26:44.398 }, 00:26:44.398 { 00:26:44.398 "name": "BaseBdev2", 00:26:44.398 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:26:44.398 "is_configured": true, 00:26:44.398 "data_offset": 2048, 00:26:44.398 "data_size": 63488 00:26:44.398 } 00:26:44.398 ] 00:26:44.398 }' 00:26:44.398 12:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:44.398 12:08:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:45.329 12:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:26:45.329 12:08:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:26:45.329 [2024-07-21 12:08:44.081977] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:45.329 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:26:45.329 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:45.329 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@624 -- # local write_unit_size 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:45.587 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:26:45.844 [2024-07-21 12:08:44.573923] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:26:45.844 /dev/nbd0 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:45.844 1+0 records in 00:26:45.844 1+0 records out 00:26:45.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000864457 s, 4.7 MB/s 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:45.844 12:08:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:26:45.844 12:08:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:26:52.394 63488+0 records in 00:26:52.394 63488+0 records out 00:26:52.394 32505856 bytes (33 MB, 31 MiB) copied, 5.61786 s, 5.8 MB/s 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:52.394 [2024-07-21 12:08:50.529787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:26:52.394 [2024-07-21 12:08:50.733231] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:52.394 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:52.395 12:08:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:52.395 "name": "raid_bdev1", 00:26:52.395 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:26:52.395 "strip_size_kb": 0, 00:26:52.395 "state": "online", 00:26:52.395 "raid_level": "raid1", 00:26:52.395 "superblock": true, 00:26:52.395 "num_base_bdevs": 2, 00:26:52.395 "num_base_bdevs_discovered": 1, 00:26:52.395 "num_base_bdevs_operational": 1, 00:26:52.395 "base_bdevs_list": [ 00:26:52.395 { 00:26:52.395 "name": null, 00:26:52.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:52.395 "is_configured": false, 00:26:52.395 "data_offset": 2048, 00:26:52.395 "data_size": 63488 00:26:52.395 }, 00:26:52.395 { 00:26:52.395 "name": "BaseBdev2", 00:26:52.395 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:26:52.395 "is_configured": true, 00:26:52.395 "data_offset": 2048, 00:26:52.395 "data_size": 63488 00:26:52.395 } 00:26:52.395 ] 00:26:52.395 }' 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:52.395 12:08:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:52.962 12:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:53.219 [2024-07-21 12:08:51.885473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:53.219 [2024-07-21 12:08:51.893529] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca2e80 00:26:53.219 [2024-07-21 12:08:51.896253] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:53.219 12:08:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:26:54.151 12:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:54.151 12:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:54.151 12:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:54.151 12:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:54.151 12:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:54.151 12:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:54.151 12:08:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.409 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:54.409 "name": "raid_bdev1", 00:26:54.409 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:26:54.409 "strip_size_kb": 0, 00:26:54.409 "state": "online", 00:26:54.409 "raid_level": "raid1", 00:26:54.409 "superblock": true, 00:26:54.409 "num_base_bdevs": 2, 00:26:54.409 "num_base_bdevs_discovered": 2, 00:26:54.409 "num_base_bdevs_operational": 2, 00:26:54.409 
"process": { 00:26:54.409 "type": "rebuild", 00:26:54.409 "target": "spare", 00:26:54.409 "progress": { 00:26:54.409 "blocks": 24576, 00:26:54.409 "percent": 38 00:26:54.409 } 00:26:54.409 }, 00:26:54.409 "base_bdevs_list": [ 00:26:54.409 { 00:26:54.409 "name": "spare", 00:26:54.409 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:26:54.409 "is_configured": true, 00:26:54.409 "data_offset": 2048, 00:26:54.409 "data_size": 63488 00:26:54.409 }, 00:26:54.409 { 00:26:54.409 "name": "BaseBdev2", 00:26:54.409 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:26:54.409 "is_configured": true, 00:26:54.409 "data_offset": 2048, 00:26:54.409 "data_size": 63488 00:26:54.409 } 00:26:54.409 ] 00:26:54.409 }' 00:26:54.409 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:54.409 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:54.409 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:54.409 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:54.409 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:26:54.666 [2024-07-21 12:08:53.515494] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:54.923 [2024-07-21 12:08:53.612963] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:26:54.923 [2024-07-21 12:08:53.613974] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.923 [2024-07-21 12:08:53.614128] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:26:54.923 [2024-07-21 12:08:53.614179] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.923 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.181 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:26:55.181 "name": "raid_bdev1", 00:26:55.181 "uuid": 
"857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:26:55.181 "strip_size_kb": 0, 00:26:55.181 "state": "online", 00:26:55.181 "raid_level": "raid1", 00:26:55.181 "superblock": true, 00:26:55.181 "num_base_bdevs": 2, 00:26:55.181 "num_base_bdevs_discovered": 1, 00:26:55.181 "num_base_bdevs_operational": 1, 00:26:55.181 "base_bdevs_list": [ 00:26:55.181 { 00:26:55.181 "name": null, 00:26:55.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:55.181 "is_configured": false, 00:26:55.181 "data_offset": 2048, 00:26:55.181 "data_size": 63488 00:26:55.181 }, 00:26:55.181 { 00:26:55.181 "name": "BaseBdev2", 00:26:55.181 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:26:55.181 "is_configured": true, 00:26:55.181 "data_offset": 2048, 00:26:55.181 "data_size": 63488 00:26:55.181 } 00:26:55.181 ] 00:26:55.181 }' 00:26:55.181 12:08:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:26:55.181 12:08:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:55.745 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:26:55.745 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:55.745 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:26:55.745 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:26:55.745 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:55.745 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:55.745 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.003 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:56.003 "name": "raid_bdev1", 00:26:56.003 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:26:56.003 "strip_size_kb": 0, 00:26:56.003 "state": "online", 00:26:56.003 "raid_level": "raid1", 00:26:56.003 "superblock": true, 00:26:56.003 "num_base_bdevs": 2, 00:26:56.003 "num_base_bdevs_discovered": 1, 00:26:56.003 "num_base_bdevs_operational": 1, 00:26:56.003 "base_bdevs_list": [ 00:26:56.004 { 00:26:56.004 "name": null, 00:26:56.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:56.004 "is_configured": false, 00:26:56.004 "data_offset": 2048, 00:26:56.004 "data_size": 63488 00:26:56.004 }, 00:26:56.004 { 00:26:56.004 "name": "BaseBdev2", 00:26:56.004 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:26:56.004 "is_configured": true, 00:26:56.004 "data_offset": 2048, 00:26:56.004 "data_size": 63488 00:26:56.004 } 00:26:56.004 ] 00:26:56.004 }' 00:26:56.004 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:56.004 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:26:56.004 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:56.261 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:26:56.261 12:08:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:26:56.518 [2024-07-21 12:08:55.151215] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:26:56.518 [2024-07-21 12:08:55.159418] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3020 00:26:56.518 [2024-07-21 12:08:55.162144] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:26:56.518 12:08:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:26:57.450 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:57.450 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:57.450 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:57.450 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:57.450 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:57.450 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.450 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:57.707 "name": "raid_bdev1", 00:26:57.707 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:26:57.707 "strip_size_kb": 0, 00:26:57.707 "state": "online", 00:26:57.707 "raid_level": "raid1", 00:26:57.707 "superblock": true, 00:26:57.707 "num_base_bdevs": 2, 00:26:57.707 "num_base_bdevs_discovered": 2, 00:26:57.707 "num_base_bdevs_operational": 2, 00:26:57.707 "process": { 00:26:57.707 "type": "rebuild", 00:26:57.707 "target": "spare", 00:26:57.707 "progress": { 00:26:57.707 "blocks": 24576, 00:26:57.707 "percent": 38 00:26:57.707 } 00:26:57.707 }, 00:26:57.707 "base_bdevs_list": [ 00:26:57.707 { 00:26:57.707 "name": "spare", 00:26:57.707 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:26:57.707 "is_configured": true, 00:26:57.707 "data_offset": 2048, 00:26:57.707 "data_size": 63488 00:26:57.707 }, 00:26:57.707 { 00:26:57.707 "name": "BaseBdev2", 00:26:57.707 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:26:57.707 "is_configured": true, 00:26:57.707 "data_offset": 2048, 00:26:57.707 "data_size": 63488 00:26:57.707 } 00:26:57.707 ] 00:26:57.707 }' 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:26:57.707 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 
2 -gt 2 ']' 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=821 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:57.707 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:57.965 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:57.965 "name": "raid_bdev1", 00:26:57.965 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:26:57.965 "strip_size_kb": 0, 00:26:57.965 "state": "online", 00:26:57.965 "raid_level": "raid1", 00:26:57.965 "superblock": true, 00:26:57.965 "num_base_bdevs": 2, 00:26:57.965 "num_base_bdevs_discovered": 2, 00:26:57.965 "num_base_bdevs_operational": 2, 00:26:57.965 "process": { 00:26:57.965 "type": "rebuild", 00:26:57.965 "target": "spare", 00:26:57.965 "progress": { 00:26:57.965 "blocks": 32768, 00:26:57.965 "percent": 51 00:26:57.965 } 00:26:57.965 }, 00:26:57.965 "base_bdevs_list": [ 00:26:57.965 { 00:26:57.965 "name": "spare", 00:26:57.965 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:26:57.965 "is_configured": true, 00:26:57.965 "data_offset": 2048, 00:26:57.965 "data_size": 63488 00:26:57.965 }, 00:26:57.965 { 00:26:57.965 "name": "BaseBdev2", 00:26:57.965 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:26:57.965 "is_configured": true, 00:26:57.965 "data_offset": 2048, 00:26:57.965 "data_size": 63488 00:26:57.965 } 00:26:57.965 ] 00:26:57.965 }' 00:26:57.965 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:58.223 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:58.223 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:58.223 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:58.223 12:08:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:26:59.156 12:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:26:59.156 12:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:26:59.156 12:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:26:59.156 12:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:26:59.156 12:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:26:59.156 12:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:26:59.156 12:08:57 bdev_raid.raid_rebuild_test_sb -- 
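After spare is re-added the script sits in its wait-for-rebuild loop: timeout=821 is a deadline against bash's SECONDS counter, and while SECONDS stays below it the loop re-reads bdev_raid_get_bdevs once per second and checks that process.type is still "rebuild" and process.target is still "spare" (the progress above moves from 24576 to 32768 blocks, 38 % to 51 %). The "[: =: unary operator expected" message from bdev_raid.sh line 665 a few lines earlier is bash complaining about an empty, unquoted operand in a '[ ... = false ]' test; the test simply evaluates false and, as the rest of the trace shows, the run carries on. An earlier pass (12:08:53) had already exercised the error path for this loop's target by removing spare mid-rebuild, which produced the "Failed to remove target bdev: No such device" message. A condensed sketch of the polling loop (commands and jq filters as recorded; the 60-second budget is illustrative, the trace's recorded deadline was 821):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=$((SECONDS + 60))   # illustrative budget; the trace shows a deadline of 821
    while (( SECONDS < timeout )); do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
        [[ $(jq -r '.process.target // "none"' <<<"$info") == spare ]] || break
        sleep 1
    done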
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:26:59.156 12:08:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:59.415 12:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:59.415 "name": "raid_bdev1", 00:26:59.415 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:26:59.415 "strip_size_kb": 0, 00:26:59.415 "state": "online", 00:26:59.415 "raid_level": "raid1", 00:26:59.415 "superblock": true, 00:26:59.415 "num_base_bdevs": 2, 00:26:59.415 "num_base_bdevs_discovered": 2, 00:26:59.415 "num_base_bdevs_operational": 2, 00:26:59.415 "process": { 00:26:59.415 "type": "rebuild", 00:26:59.415 "target": "spare", 00:26:59.415 "progress": { 00:26:59.415 "blocks": 59392, 00:26:59.415 "percent": 93 00:26:59.415 } 00:26:59.415 }, 00:26:59.415 "base_bdevs_list": [ 00:26:59.415 { 00:26:59.415 "name": "spare", 00:26:59.415 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:26:59.415 "is_configured": true, 00:26:59.415 "data_offset": 2048, 00:26:59.415 "data_size": 63488 00:26:59.415 }, 00:26:59.415 { 00:26:59.415 "name": "BaseBdev2", 00:26:59.415 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:26:59.415 "is_configured": true, 00:26:59.415 "data_offset": 2048, 00:26:59.415 "data_size": 63488 00:26:59.415 } 00:26:59.415 ] 00:26:59.415 }' 00:26:59.415 12:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:26:59.415 12:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:26:59.415 12:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:26:59.672 [2024-07-21 12:08:58.283577] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:26:59.672 [2024-07-21 12:08:58.283910] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:26:59.672 [2024-07-21 12:08:58.284796] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:59.672 12:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:26:59.672 12:08:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:00.607 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:00.607 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:00.607 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:00.607 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:00.607 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:00.607 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:00.607 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.607 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:00.868 "name": "raid_bdev1", 00:27:00.868 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:00.868 
"strip_size_kb": 0, 00:27:00.868 "state": "online", 00:27:00.868 "raid_level": "raid1", 00:27:00.868 "superblock": true, 00:27:00.868 "num_base_bdevs": 2, 00:27:00.868 "num_base_bdevs_discovered": 2, 00:27:00.868 "num_base_bdevs_operational": 2, 00:27:00.868 "base_bdevs_list": [ 00:27:00.868 { 00:27:00.868 "name": "spare", 00:27:00.868 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:27:00.868 "is_configured": true, 00:27:00.868 "data_offset": 2048, 00:27:00.868 "data_size": 63488 00:27:00.868 }, 00:27:00.868 { 00:27:00.868 "name": "BaseBdev2", 00:27:00.868 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:00.868 "is_configured": true, 00:27:00.868 "data_offset": 2048, 00:27:00.868 "data_size": 63488 00:27:00.868 } 00:27:00.868 ] 00:27:00.868 }' 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:00.868 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.145 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:01.145 "name": "raid_bdev1", 00:27:01.145 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:01.145 "strip_size_kb": 0, 00:27:01.145 "state": "online", 00:27:01.145 "raid_level": "raid1", 00:27:01.145 "superblock": true, 00:27:01.145 "num_base_bdevs": 2, 00:27:01.145 "num_base_bdevs_discovered": 2, 00:27:01.145 "num_base_bdevs_operational": 2, 00:27:01.145 "base_bdevs_list": [ 00:27:01.145 { 00:27:01.145 "name": "spare", 00:27:01.145 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:27:01.145 "is_configured": true, 00:27:01.145 "data_offset": 2048, 00:27:01.145 "data_size": 63488 00:27:01.145 }, 00:27:01.145 { 00:27:01.145 "name": "BaseBdev2", 00:27:01.145 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:01.145 "is_configured": true, 00:27:01.145 "data_offset": 2048, 00:27:01.145 "data_size": 63488 00:27:01.145 } 00:27:01.145 ] 00:27:01.145 }' 00:27:01.145 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:01.145 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:01.145 12:08:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- 
# [[ none == \n\o\n\e ]] 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:01.418 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:01.676 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:01.676 "name": "raid_bdev1", 00:27:01.676 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:01.676 "strip_size_kb": 0, 00:27:01.676 "state": "online", 00:27:01.676 "raid_level": "raid1", 00:27:01.677 "superblock": true, 00:27:01.677 "num_base_bdevs": 2, 00:27:01.677 "num_base_bdevs_discovered": 2, 00:27:01.677 "num_base_bdevs_operational": 2, 00:27:01.677 "base_bdevs_list": [ 00:27:01.677 { 00:27:01.677 "name": "spare", 00:27:01.677 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:27:01.677 "is_configured": true, 00:27:01.677 "data_offset": 2048, 00:27:01.677 "data_size": 63488 00:27:01.677 }, 00:27:01.677 { 00:27:01.677 "name": "BaseBdev2", 00:27:01.677 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:01.677 "is_configured": true, 00:27:01.677 "data_offset": 2048, 00:27:01.677 "data_size": 63488 00:27:01.677 } 00:27:01.677 ] 00:27:01.677 }' 00:27:01.677 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:01.677 12:09:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:02.244 12:09:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:02.503 [2024-07-21 12:09:01.224291] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:02.503 [2024-07-21 12:09:01.224464] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.503 [2024-07-21 12:09:01.224730] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.503 [2024-07-21 12:09:01.224975] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.503 [2024-07-21 12:09:01.225114] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:27:02.503 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:02.503 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:02.762 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:27:03.021 /dev/nbd0 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:03.021 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:03.021 1+0 records in 00:27:03.022 1+0 records out 00:27:03.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378628 s, 10.8 MB/s 00:27:03.022 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.022 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:27:03.022 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.022 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 
4096 '!=' 0 ']' 00:27:03.022 12:09:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:27:03.022 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.022 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:03.022 12:09:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:27:03.281 /dev/nbd1 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:03.281 1+0 records in 00:27:03.281 1+0 records out 00:27:03.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102142 s, 4.0 MB/s 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:03.281 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:27:03.543 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:27:03.543 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:03.543 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:03.543 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:03.543 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:27:03.543 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:03.543 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
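With the rebuild finished and raid_bdev1 deleted (bdev_raid_get_bdevs now returns an empty list, hence the [[ 0 == 0 ]] check above), the test verifies that the rebuild actually copied the data: BaseBdev1 and spare are exported over NBD, waitfornbd polls /proc/partitions and issues a 4 KiB O_DIRECT read until each device answers, and cmp -i 1048576 /dev/nbd0 /dev/nbd1 compares the two devices byte for byte. The 1048576-byte offset skips the superblock region, i.e. the data_offset of 2048 blocks times the 512-byte block size seen in the JSON dumps above. A condensed sketch of that comparison (RPC commands as recorded; the simple retry loop stands in for the test's waitfornbd helper):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc nbd_start_disk spare /dev/nbd1
    for nbd in nbd0 nbd1; do
        for _ in $(seq 1 20); do                 # stand-in for waitfornbd
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
    done
    cmp -i 1048576 /dev/nbd0 /dev/nbd1           # skip the 1 MiB superblock area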
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:03.800 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:03.800 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:03.800 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:03.800 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:03.800 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:03.800 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:03.800 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:03.800 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:03.800 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:03.801 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:04.058 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:04.319 12:09:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:04.587 [2024-07-21 12:09:03.262183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:04.587 [2024-07-21 12:09:03.262662] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:04.587 [2024-07-21 12:09:03.262847] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:27:04.587 [2024-07-21 12:09:03.263020] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:04.587 [2024-07-21 12:09:03.266097] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:04.587 [2024-07-21 12:09:03.266332] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:04.587 [2024-07-21 12:09:03.266544] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:04.587 [2024-07-21 12:09:03.266686] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:04.587 [2024-07-21 12:09:03.266983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:27:04.587 spare 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:04.587 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:04.587 [2024-07-21 12:09:03.367305] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:27:04.587 [2024-07-21 12:09:03.367518] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:04.587 [2024-07-21 12:09:03.367761] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1940 00:27:04.587 [2024-07-21 12:09:03.368619] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:27:04.587 [2024-07-21 12:09:03.368749] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:27:04.587 [2024-07-21 12:09:03.369071] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:04.844 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:04.844 "name": "raid_bdev1", 00:27:04.844 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:04.844 "strip_size_kb": 0, 00:27:04.845 "state": "online", 00:27:04.845 "raid_level": "raid1", 00:27:04.845 "superblock": true, 00:27:04.845 "num_base_bdevs": 2, 00:27:04.845 "num_base_bdevs_discovered": 2, 00:27:04.845 "num_base_bdevs_operational": 2, 00:27:04.845 "base_bdevs_list": [ 00:27:04.845 { 00:27:04.845 "name": "spare", 00:27:04.845 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:27:04.845 "is_configured": true, 00:27:04.845 "data_offset": 2048, 00:27:04.845 "data_size": 63488 00:27:04.845 }, 00:27:04.845 { 00:27:04.845 "name": "BaseBdev2", 00:27:04.845 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:04.845 "is_configured": true, 00:27:04.845 "data_offset": 2048, 00:27:04.845 "data_size": 63488 00:27:04.845 } 00:27:04.845 ] 00:27:04.845 }' 00:27:04.845 12:09:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:04.845 12:09:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.410 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:05.410 12:09:04 
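The stack is then reassembled: bdev_passthru_delete spare and bdev_passthru_create -b spare_delay -p spare recreate the delayed passthru, the examine path finds the raid superblock on it ("raid superblock found on bdev spare"), both base bdevs are claimed, and raid_bdev1 comes back online with two discovered and two operational base bdevs. Reduced to the RPCs recorded above, plus a state check, the reassembly is:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete spare                    # drop the passthru over spare_delay
    $rpc bdev_passthru_create -b spare_delay -p spare  # recreate it; examine re-claims it
    $rpc bdev_raid_get_bdevs all | \
        jq -r '.[] | select(.name == "raid_bdev1") | .state'   # expect "online"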
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:05.410 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:05.410 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:05.410 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:05.410 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.410 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:05.667 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:05.667 "name": "raid_bdev1", 00:27:05.667 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:05.667 "strip_size_kb": 0, 00:27:05.667 "state": "online", 00:27:05.667 "raid_level": "raid1", 00:27:05.667 "superblock": true, 00:27:05.667 "num_base_bdevs": 2, 00:27:05.667 "num_base_bdevs_discovered": 2, 00:27:05.667 "num_base_bdevs_operational": 2, 00:27:05.667 "base_bdevs_list": [ 00:27:05.667 { 00:27:05.667 "name": "spare", 00:27:05.667 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:27:05.667 "is_configured": true, 00:27:05.667 "data_offset": 2048, 00:27:05.667 "data_size": 63488 00:27:05.667 }, 00:27:05.667 { 00:27:05.667 "name": "BaseBdev2", 00:27:05.667 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:05.667 "is_configured": true, 00:27:05.667 "data_offset": 2048, 00:27:05.667 "data_size": 63488 00:27:05.667 } 00:27:05.667 ] 00:27:05.667 }' 00:27:05.667 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:05.667 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:05.667 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:05.667 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:05.667 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:05.667 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:05.953 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:05.953 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:06.211 [2024-07-21 12:09:04.875150] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:06.211 12:09:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:06.211 12:09:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:06.468 12:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:06.468 "name": "raid_bdev1", 00:27:06.468 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:06.468 "strip_size_kb": 0, 00:27:06.468 "state": "online", 00:27:06.468 "raid_level": "raid1", 00:27:06.468 "superblock": true, 00:27:06.468 "num_base_bdevs": 2, 00:27:06.468 "num_base_bdevs_discovered": 1, 00:27:06.468 "num_base_bdevs_operational": 1, 00:27:06.468 "base_bdevs_list": [ 00:27:06.468 { 00:27:06.468 "name": null, 00:27:06.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:06.468 "is_configured": false, 00:27:06.468 "data_offset": 2048, 00:27:06.468 "data_size": 63488 00:27:06.468 }, 00:27:06.468 { 00:27:06.468 "name": "BaseBdev2", 00:27:06.468 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:06.468 "is_configured": true, 00:27:06.468 "data_offset": 2048, 00:27:06.468 "data_size": 63488 00:27:06.468 } 00:27:06.468 ] 00:27:06.468 }' 00:27:06.468 12:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:06.468 12:09:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.034 12:09:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:07.292 [2024-07-21 12:09:06.055419] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:07.292 [2024-07-21 12:09:06.055854] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:07.292 [2024-07-21 12:09:06.056015] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
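This pass exercises the superblock sequence-number handling: spare is removed from the running array and immediately handed back via bdev_raid_add_base_bdev. Because the superblock it still carries is older than the array's (seq_number 4 versus 5), the examine/add path logs "Re-adding bdev spare to raid bdev raid_bdev1." and, as the following lines show, starts a fresh rebuild instead of trusting the stale copy. The degrade-and-re-add cycle, using the RPCs recorded in this trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_remove_base_bdev spare           # array stays online with 1 of 2 base bdevs
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare   # stale superblock (seq 4 < 5): rebuild restarts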
00:27:07.292 [2024-07-21 12:09:06.056129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:07.292 [2024-07-21 12:09:06.063071] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:27:07.292 [2024-07-21 12:09:06.065453] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:07.292 12:09:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:08.229 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:08.229 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:08.229 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:08.229 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:08.229 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:08.229 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:08.229 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:08.488 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:08.488 "name": "raid_bdev1", 00:27:08.488 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:08.488 "strip_size_kb": 0, 00:27:08.488 "state": "online", 00:27:08.488 "raid_level": "raid1", 00:27:08.488 "superblock": true, 00:27:08.488 "num_base_bdevs": 2, 00:27:08.488 "num_base_bdevs_discovered": 2, 00:27:08.488 "num_base_bdevs_operational": 2, 00:27:08.488 "process": { 00:27:08.488 "type": "rebuild", 00:27:08.488 "target": "spare", 00:27:08.488 "progress": { 00:27:08.488 "blocks": 24576, 00:27:08.488 "percent": 38 00:27:08.488 } 00:27:08.488 }, 00:27:08.488 "base_bdevs_list": [ 00:27:08.488 { 00:27:08.488 "name": "spare", 00:27:08.488 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:27:08.488 "is_configured": true, 00:27:08.488 "data_offset": 2048, 00:27:08.488 "data_size": 63488 00:27:08.488 }, 00:27:08.488 { 00:27:08.488 "name": "BaseBdev2", 00:27:08.489 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:08.489 "is_configured": true, 00:27:08.489 "data_offset": 2048, 00:27:08.489 "data_size": 63488 00:27:08.489 } 00:27:08.489 ] 00:27:08.489 }' 00:27:08.489 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:08.748 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:08.748 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:08.748 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:08.748 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:09.007 [2024-07-21 12:09:07.647633] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:09.007 [2024-07-21 12:09:07.676171] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:09.007 [2024-07-21 12:09:07.676425] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:09.007 
[2024-07-21 12:09:07.676589] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:09.007 [2024-07-21 12:09:07.676638] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:09.007 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:09.267 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:09.267 "name": "raid_bdev1", 00:27:09.267 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:09.267 "strip_size_kb": 0, 00:27:09.267 "state": "online", 00:27:09.267 "raid_level": "raid1", 00:27:09.267 "superblock": true, 00:27:09.267 "num_base_bdevs": 2, 00:27:09.267 "num_base_bdevs_discovered": 1, 00:27:09.267 "num_base_bdevs_operational": 1, 00:27:09.267 "base_bdevs_list": [ 00:27:09.267 { 00:27:09.267 "name": null, 00:27:09.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:09.267 "is_configured": false, 00:27:09.267 "data_offset": 2048, 00:27:09.267 "data_size": 63488 00:27:09.267 }, 00:27:09.267 { 00:27:09.267 "name": "BaseBdev2", 00:27:09.267 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:09.267 "is_configured": true, 00:27:09.267 "data_offset": 2048, 00:27:09.267 "data_size": 63488 00:27:09.267 } 00:27:09.267 ] 00:27:09.267 }' 00:27:09.267 12:09:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:09.267 12:09:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:09.834 12:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:10.093 [2024-07-21 12:09:08.915649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:10.093 [2024-07-21 12:09:08.917019] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.093 [2024-07-21 12:09:08.917417] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:27:10.093 [2024-07-21 12:09:08.917703] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.093 [2024-07-21 12:09:08.919035] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.093 [2024-07-21 12:09:08.919386] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:10.093 [2024-07-21 12:09:08.919987] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:10.093 spare 00:27:10.093 [2024-07-21 12:09:08.921734] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:10.093 [2024-07-21 12:09:08.922009] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:10.093 [2024-07-21 12:09:08.922384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:10.093 [2024-07-21 12:09:08.929486] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:27:10.093 [2024-07-21 12:09:08.932405] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:10.093 12:09:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:27:11.470 12:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:11.470 12:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:11.470 12:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:11.471 12:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:11.471 12:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:11.471 12:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.471 12:09:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.471 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:11.471 "name": "raid_bdev1", 00:27:11.471 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:11.471 "strip_size_kb": 0, 00:27:11.471 "state": "online", 00:27:11.471 "raid_level": "raid1", 00:27:11.471 "superblock": true, 00:27:11.471 "num_base_bdevs": 2, 00:27:11.471 "num_base_bdevs_discovered": 2, 00:27:11.471 "num_base_bdevs_operational": 2, 00:27:11.471 "process": { 00:27:11.471 "type": "rebuild", 00:27:11.471 "target": "spare", 00:27:11.471 "progress": { 00:27:11.471 "blocks": 24576, 00:27:11.471 "percent": 38 00:27:11.471 } 00:27:11.471 }, 00:27:11.471 "base_bdevs_list": [ 00:27:11.471 { 00:27:11.471 "name": "spare", 00:27:11.471 "uuid": "0b2f6238-7bec-560a-83a2-5a62cfc7ea31", 00:27:11.471 "is_configured": true, 00:27:11.471 "data_offset": 2048, 00:27:11.471 "data_size": 63488 00:27:11.471 }, 00:27:11.471 { 00:27:11.471 "name": "BaseBdev2", 00:27:11.471 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:11.471 "is_configured": true, 00:27:11.471 "data_offset": 2048, 00:27:11.471 "data_size": 63488 00:27:11.471 } 00:27:11.471 ] 00:27:11.471 }' 00:27:11.471 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:11.471 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:11.471 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:11.471 
12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:11.471 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:11.728 [2024-07-21 12:09:10.590646] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:11.986 [2024-07-21 12:09:10.644650] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:11.986 [2024-07-21 12:09:10.644989] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.986 [2024-07-21 12:09:10.645173] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:11.986 [2024-07-21 12:09:10.645302] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:11.986 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.244 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:12.244 "name": "raid_bdev1", 00:27:12.244 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:12.245 "strip_size_kb": 0, 00:27:12.245 "state": "online", 00:27:12.245 "raid_level": "raid1", 00:27:12.245 "superblock": true, 00:27:12.245 "num_base_bdevs": 2, 00:27:12.245 "num_base_bdevs_discovered": 1, 00:27:12.245 "num_base_bdevs_operational": 1, 00:27:12.245 "base_bdevs_list": [ 00:27:12.245 { 00:27:12.245 "name": null, 00:27:12.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.245 "is_configured": false, 00:27:12.245 "data_offset": 2048, 00:27:12.245 "data_size": 63488 00:27:12.245 }, 00:27:12.245 { 00:27:12.245 "name": "BaseBdev2", 00:27:12.245 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:12.245 "is_configured": true, 00:27:12.245 "data_offset": 2048, 00:27:12.245 "data_size": 63488 00:27:12.245 } 00:27:12.245 ] 00:27:12.245 }' 00:27:12.245 12:09:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:12.245 12:09:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:12.811 12:09:11 bdev_raid.raid_rebuild_test_sb -- 
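Here the underlying passthru is deleted while the rebuild is still running (bdev_passthru_delete spare at 12:09:10). The raid layer sees its rebuild target disappear, the finish path reports "Failed to remove target bdev: No such device", and raid_bdev1 falls back to one discovered and one operational base bdev, which the verify_raid_bdev_state raid_bdev1 online raid1 0 1 call above goes on to confirm. A sketch of observing that fallback with the same RPC and jq pattern used throughout the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_passthru_delete spare        # yank the passthru while the rebuild is in flight
    $rpc bdev_raid_get_bdevs all | jq -r \
        '.[] | select(.name == "raid_bdev1") | "\(.num_base_bdevs_discovered)/\(.num_base_bdevs_operational)"'
    # expected once the removal settles: 1/1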
bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:12.811 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:12.811 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:12.811 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:12.811 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:12.811 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:12.811 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.069 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:13.069 "name": "raid_bdev1", 00:27:13.069 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:13.069 "strip_size_kb": 0, 00:27:13.069 "state": "online", 00:27:13.069 "raid_level": "raid1", 00:27:13.069 "superblock": true, 00:27:13.069 "num_base_bdevs": 2, 00:27:13.069 "num_base_bdevs_discovered": 1, 00:27:13.069 "num_base_bdevs_operational": 1, 00:27:13.069 "base_bdevs_list": [ 00:27:13.069 { 00:27:13.069 "name": null, 00:27:13.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.070 "is_configured": false, 00:27:13.070 "data_offset": 2048, 00:27:13.070 "data_size": 63488 00:27:13.070 }, 00:27:13.070 { 00:27:13.070 "name": "BaseBdev2", 00:27:13.070 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:13.070 "is_configured": true, 00:27:13.070 "data_offset": 2048, 00:27:13.070 "data_size": 63488 00:27:13.070 } 00:27:13.070 ] 00:27:13.070 }' 00:27:13.070 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:13.070 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:13.070 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:13.327 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:13.327 12:09:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:27:13.585 12:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:13.844 [2024-07-21 12:09:12.501544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:13.844 [2024-07-21 12:09:12.501955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.844 [2024-07-21 12:09:12.502072] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:27:13.844 [2024-07-21 12:09:12.502364] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.844 [2024-07-21 12:09:12.503145] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.844 [2024-07-21 12:09:12.503325] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:13.844 [2024-07-21 12:09:12.503544] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:27:13.844 [2024-07-21 12:09:12.503673] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:13.844 [2024-07-21 12:09:12.503804] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:13.844 BaseBdev1 00:27:13.844 12:09:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:14.844 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.114 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:15.114 "name": "raid_bdev1", 00:27:15.114 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:15.114 "strip_size_kb": 0, 00:27:15.114 "state": "online", 00:27:15.114 "raid_level": "raid1", 00:27:15.114 "superblock": true, 00:27:15.114 "num_base_bdevs": 2, 00:27:15.114 "num_base_bdevs_discovered": 1, 00:27:15.114 "num_base_bdevs_operational": 1, 00:27:15.114 "base_bdevs_list": [ 00:27:15.114 { 00:27:15.114 "name": null, 00:27:15.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.114 "is_configured": false, 00:27:15.114 "data_offset": 2048, 00:27:15.114 "data_size": 63488 00:27:15.114 }, 00:27:15.114 { 00:27:15.114 "name": "BaseBdev2", 00:27:15.114 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:15.114 "is_configured": true, 00:27:15.114 "data_offset": 2048, 00:27:15.114 "data_size": 63488 00:27:15.114 } 00:27:15.114 ] 00:27:15.114 }' 00:27:15.114 12:09:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:15.114 12:09:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:15.678 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:15.678 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:15.678 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:15.678 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:15.678 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:15.678 12:09:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:15.678 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:15.944 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:15.944 "name": "raid_bdev1", 00:27:15.944 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:15.944 "strip_size_kb": 0, 00:27:15.944 "state": "online", 00:27:15.944 "raid_level": "raid1", 00:27:15.944 "superblock": true, 00:27:15.944 "num_base_bdevs": 2, 00:27:15.944 "num_base_bdevs_discovered": 1, 00:27:15.944 "num_base_bdevs_operational": 1, 00:27:15.944 "base_bdevs_list": [ 00:27:15.944 { 00:27:15.944 "name": null, 00:27:15.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:15.944 "is_configured": false, 00:27:15.944 "data_offset": 2048, 00:27:15.944 "data_size": 63488 00:27:15.944 }, 00:27:15.944 { 00:27:15.944 "name": "BaseBdev2", 00:27:15.944 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:15.944 "is_configured": true, 00:27:15.944 "data_offset": 2048, 00:27:15.944 "data_size": 63488 00:27:15.944 } 00:27:15.944 ] 00:27:15.944 }' 00:27:15.944 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:15.944 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:15.944 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:27:16.202 12:09:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:27:16.460 [2024-07-21 12:09:15.138086] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:16.460 [2024-07-21 12:09:15.138330] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:27:16.460 [2024-07-21 12:09:15.138349] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:27:16.460 request: 00:27:16.460 { 00:27:16.460 "raid_bdev": "raid_bdev1", 00:27:16.460 "base_bdev": "BaseBdev1", 00:27:16.460 "method": "bdev_raid_add_base_bdev", 00:27:16.460 "req_id": 1 00:27:16.460 } 00:27:16.460 Got JSON-RPC error response 00:27:16.460 response: 00:27:16.460 { 00:27:16.460 "code": -22, 00:27:16.460 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:27:16.460 } 00:27:16.460 12:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:27:16.460 12:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:16.460 12:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:16.460 12:09:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:16.460 12:09:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:17.394 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.656 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:17.657 "name": "raid_bdev1", 00:27:17.657 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:17.657 "strip_size_kb": 0, 00:27:17.657 "state": "online", 00:27:17.657 "raid_level": "raid1", 00:27:17.657 "superblock": true, 00:27:17.657 "num_base_bdevs": 2, 00:27:17.657 "num_base_bdevs_discovered": 1, 00:27:17.657 "num_base_bdevs_operational": 1, 00:27:17.657 "base_bdevs_list": [ 00:27:17.657 { 00:27:17.657 "name": null, 00:27:17.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:17.657 "is_configured": false, 00:27:17.657 "data_offset": 2048, 00:27:17.657 "data_size": 63488 00:27:17.657 }, 00:27:17.657 { 00:27:17.657 "name": "BaseBdev2", 00:27:17.657 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 
00:27:17.657 "is_configured": true, 00:27:17.657 "data_offset": 2048, 00:27:17.657 "data_size": 63488 00:27:17.657 } 00:27:17.657 ] 00:27:17.657 }' 00:27:17.657 12:09:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:17.657 12:09:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:18.593 "name": "raid_bdev1", 00:27:18.593 "uuid": "857a4f51-ad19-4969-85f6-72ceb6e09cfb", 00:27:18.593 "strip_size_kb": 0, 00:27:18.593 "state": "online", 00:27:18.593 "raid_level": "raid1", 00:27:18.593 "superblock": true, 00:27:18.593 "num_base_bdevs": 2, 00:27:18.593 "num_base_bdevs_discovered": 1, 00:27:18.593 "num_base_bdevs_operational": 1, 00:27:18.593 "base_bdevs_list": [ 00:27:18.593 { 00:27:18.593 "name": null, 00:27:18.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:18.593 "is_configured": false, 00:27:18.593 "data_offset": 2048, 00:27:18.593 "data_size": 63488 00:27:18.593 }, 00:27:18.593 { 00:27:18.593 "name": "BaseBdev2", 00:27:18.593 "uuid": "0f3f4cb9-6a0a-50c1-a883-02325c31d1cc", 00:27:18.593 "is_configured": true, 00:27:18.593 "data_offset": 2048, 00:27:18.593 "data_size": 63488 00:27:18.593 } 00:27:18.593 ] 00:27:18.593 }' 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 154992 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 154992 ']' 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 154992 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 154992 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@964 -- # echo 'killing process with pid 154992' 00:27:18.593 killing process with pid 154992 00:27:18.593 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 154992 00:27:18.593 Received shutdown signal, test time was about 60.000000 seconds 00:27:18.593 00:27:18.593 Latency(us) 00:27:18.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.594 =================================================================================================================== 00:27:18.594 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:18.594 [2024-07-21 12:09:17.459898] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:18.852 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 154992 00:27:18.852 [2024-07-21 12:09:17.460067] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:18.852 [2024-07-21 12:09:17.460131] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:18.852 [2024-07-21 12:09:17.460145] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:27:18.852 [2024-07-21 12:09:17.491524] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:19.110 ************************************ 00:27:19.110 END TEST raid_rebuild_test_sb 00:27:19.110 ************************************ 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:27:19.110 00:27:19.110 real 0m37.891s 00:27:19.110 user 0m56.954s 00:27:19.110 sys 0m5.778s 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:19.110 12:09:17 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:27:19.110 12:09:17 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:27:19.110 12:09:17 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:19.110 12:09:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:19.110 ************************************ 00:27:19.110 START TEST raid_rebuild_test_io 00:27:19.110 ************************************ 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 false true true 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 
-- # (( i <= num_base_bdevs )) 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=155948 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 155948 /var/tmp/spdk-raid.sock 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 155948 ']' 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:19.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:19.110 12:09:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:19.110 [2024-07-21 12:09:17.882238] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:19.110 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:19.110 Zero copy mechanism will not be used. 
00:27:19.110 [2024-07-21 12:09:17.882478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155948 ] 00:27:19.368 [2024-07-21 12:09:18.048856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.368 [2024-07-21 12:09:18.139738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.368 [2024-07-21 12:09:18.195161] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.301 12:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:20.301 12:09:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0 00:27:20.301 12:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:20.301 12:09:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:20.301 BaseBdev1_malloc 00:27:20.559 12:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:20.816 [2024-07-21 12:09:19.425875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:20.816 [2024-07-21 12:09:19.426059] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.816 [2024-07-21 12:09:19.426123] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:27:20.816 [2024-07-21 12:09:19.426183] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.816 [2024-07-21 12:09:19.429075] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.816 [2024-07-21 12:09:19.429150] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:20.816 BaseBdev1 00:27:20.816 12:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:20.816 12:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:20.816 BaseBdev2_malloc 00:27:20.816 12:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:21.074 [2024-07-21 12:09:19.888989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:21.074 [2024-07-21 12:09:19.889120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.074 [2024-07-21 12:09:19.889195] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:21.074 [2024-07-21 12:09:19.889240] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.074 [2024-07-21 12:09:19.891785] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.074 [2024-07-21 12:09:19.891852] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:21.074 BaseBdev2 00:27:21.074 12:09:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:21.332 spare_malloc 00:27:21.590 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:21.590 spare_delay 00:27:21.590 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:21.848 [2024-07-21 12:09:20.641598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:21.848 [2024-07-21 12:09:20.641740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:21.848 [2024-07-21 12:09:20.641797] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:27:21.848 [2024-07-21 12:09:20.641855] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:21.848 [2024-07-21 12:09:20.644499] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:21.848 [2024-07-21 12:09:20.644603] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:21.848 spare 00:27:21.848 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:22.106 [2024-07-21 12:09:20.865728] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:22.106 [2024-07-21 12:09:20.867994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:22.106 [2024-07-21 12:09:20.868142] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:27:22.106 [2024-07-21 12:09:20.868157] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:22.106 [2024-07-21 12:09:20.868384] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:27:22.106 [2024-07-21 12:09:20.868869] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:27:22.106 [2024-07-21 12:09:20.868898] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:27:22.106 [2024-07-21 12:09:20.869120] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:22.106 12:09:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.364 12:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:22.364 "name": "raid_bdev1", 00:27:22.364 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:22.364 "strip_size_kb": 0, 00:27:22.364 "state": "online", 00:27:22.364 "raid_level": "raid1", 00:27:22.364 "superblock": false, 00:27:22.364 "num_base_bdevs": 2, 00:27:22.364 "num_base_bdevs_discovered": 2, 00:27:22.364 "num_base_bdevs_operational": 2, 00:27:22.364 "base_bdevs_list": [ 00:27:22.364 { 00:27:22.364 "name": "BaseBdev1", 00:27:22.364 "uuid": "ca593ef6-838d-5e9d-b13c-31f1624efbe8", 00:27:22.364 "is_configured": true, 00:27:22.364 "data_offset": 0, 00:27:22.364 "data_size": 65536 00:27:22.364 }, 00:27:22.364 { 00:27:22.364 "name": "BaseBdev2", 00:27:22.364 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:22.364 "is_configured": true, 00:27:22.364 "data_offset": 0, 00:27:22.364 "data_size": 65536 00:27:22.364 } 00:27:22.364 ] 00:27:22.364 }' 00:27:22.364 12:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:22.364 12:09:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:22.930 12:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:22.930 12:09:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:23.189 [2024-07-21 12:09:22.026166] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:23.189 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:27:23.189 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:23.189 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.447 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:27:23.447 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:23.704 [2024-07-21 12:09:22.400686] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:27:23.704 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:23.704 Zero copy mechanism will not be used. 00:27:23.704 Running I/O for 60 seconds... 
00:27:23.704 [2024-07-21 12:09:22.513969] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:23.704 [2024-07-21 12:09:22.527475] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:23.704 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:23.961 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:23.961 "name": "raid_bdev1", 00:27:23.961 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:23.961 "strip_size_kb": 0, 00:27:23.961 "state": "online", 00:27:23.961 "raid_level": "raid1", 00:27:23.961 "superblock": false, 00:27:23.961 "num_base_bdevs": 2, 00:27:23.961 "num_base_bdevs_discovered": 1, 00:27:23.961 "num_base_bdevs_operational": 1, 00:27:23.961 "base_bdevs_list": [ 00:27:23.961 { 00:27:23.961 "name": null, 00:27:23.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:23.961 "is_configured": false, 00:27:23.961 "data_offset": 0, 00:27:23.961 "data_size": 65536 00:27:23.961 }, 00:27:23.961 { 00:27:23.961 "name": "BaseBdev2", 00:27:23.961 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:23.961 "is_configured": true, 00:27:23.961 "data_offset": 0, 00:27:23.961 "data_size": 65536 00:27:23.961 } 00:27:23.961 ] 00:27:23.961 }' 00:27:23.961 12:09:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:23.961 12:09:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:24.890 12:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:24.891 [2024-07-21 12:09:23.625885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:24.891 12:09:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:24.891 [2024-07-21 12:09:23.668166] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:24.891 [2024-07-21 12:09:23.670415] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:25.147 [2024-07-21 12:09:23.779280] bdev_raid.c: 
839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:25.147 [2024-07-21 12:09:23.779926] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:25.404 [2024-07-21 12:09:24.022271] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:25.660 [2024-07-21 12:09:24.276345] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:25.660 [2024-07-21 12:09:24.277024] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:25.660 [2024-07-21 12:09:24.508093] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:25.917 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:25.917 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:25.917 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:25.917 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:25.917 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:25.917 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:25.917 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:25.917 [2024-07-21 12:09:24.744048] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:26.174 [2024-07-21 12:09:24.858573] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:26.174 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:26.174 "name": "raid_bdev1", 00:27:26.174 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:26.174 "strip_size_kb": 0, 00:27:26.174 "state": "online", 00:27:26.174 "raid_level": "raid1", 00:27:26.174 "superblock": false, 00:27:26.174 "num_base_bdevs": 2, 00:27:26.174 "num_base_bdevs_discovered": 2, 00:27:26.174 "num_base_bdevs_operational": 2, 00:27:26.174 "process": { 00:27:26.174 "type": "rebuild", 00:27:26.174 "target": "spare", 00:27:26.174 "progress": { 00:27:26.174 "blocks": 16384, 00:27:26.174 "percent": 25 00:27:26.174 } 00:27:26.174 }, 00:27:26.174 "base_bdevs_list": [ 00:27:26.174 { 00:27:26.174 "name": "spare", 00:27:26.174 "uuid": "742e1582-8c8c-5c36-9ef0-dc90a13ad793", 00:27:26.174 "is_configured": true, 00:27:26.174 "data_offset": 0, 00:27:26.174 "data_size": 65536 00:27:26.174 }, 00:27:26.174 { 00:27:26.174 "name": "BaseBdev2", 00:27:26.174 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:26.174 "is_configured": true, 00:27:26.174 "data_offset": 0, 00:27:26.174 "data_size": 65536 00:27:26.174 } 00:27:26.174 ] 00:27:26.174 }' 00:27:26.174 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:26.174 12:09:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:26.174 12:09:24 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:26.174 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:26.174 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:26.431 [2024-07-21 12:09:25.081486] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:26.431 [2024-07-21 12:09:25.082187] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:26.431 [2024-07-21 12:09:25.249678] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.688 [2024-07-21 12:09:25.300543] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:26.688 [2024-07-21 12:09:25.401988] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:26.688 [2024-07-21 12:09:25.418458] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:26.688 [2024-07-21 12:09:25.418514] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:26.688 [2024-07-21 12:09:25.418529] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:26.688 [2024-07-21 12:09:25.447276] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:26.688 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:26.945 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:26.945 "name": "raid_bdev1", 00:27:26.945 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:26.945 "strip_size_kb": 0, 00:27:26.945 "state": "online", 00:27:26.945 "raid_level": "raid1", 00:27:26.945 "superblock": false, 00:27:26.945 "num_base_bdevs": 2, 00:27:26.945 "num_base_bdevs_discovered": 1, 00:27:26.945 "num_base_bdevs_operational": 1, 00:27:26.945 "base_bdevs_list": [ 
00:27:26.945 { 00:27:26.945 "name": null, 00:27:26.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.945 "is_configured": false, 00:27:26.945 "data_offset": 0, 00:27:26.945 "data_size": 65536 00:27:26.945 }, 00:27:26.945 { 00:27:26.945 "name": "BaseBdev2", 00:27:26.945 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:26.945 "is_configured": true, 00:27:26.945 "data_offset": 0, 00:27:26.945 "data_size": 65536 00:27:26.945 } 00:27:26.945 ] 00:27:26.945 }' 00:27:26.945 12:09:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:26.945 12:09:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:27.510 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:27.510 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:27.510 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:27.510 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:27.510 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:27.510 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:27.510 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:28.076 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:28.076 "name": "raid_bdev1", 00:27:28.076 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:28.076 "strip_size_kb": 0, 00:27:28.076 "state": "online", 00:27:28.076 "raid_level": "raid1", 00:27:28.076 "superblock": false, 00:27:28.076 "num_base_bdevs": 2, 00:27:28.076 "num_base_bdevs_discovered": 1, 00:27:28.076 "num_base_bdevs_operational": 1, 00:27:28.076 "base_bdevs_list": [ 00:27:28.076 { 00:27:28.076 "name": null, 00:27:28.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.076 "is_configured": false, 00:27:28.076 "data_offset": 0, 00:27:28.076 "data_size": 65536 00:27:28.076 }, 00:27:28.076 { 00:27:28.076 "name": "BaseBdev2", 00:27:28.076 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:28.076 "is_configured": true, 00:27:28.076 "data_offset": 0, 00:27:28.076 "data_size": 65536 00:27:28.076 } 00:27:28.076 ] 00:27:28.076 }' 00:27:28.076 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:28.076 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:28.076 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:28.076 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:28.076 12:09:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:28.334 [2024-07-21 12:09:27.028318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:28.334 12:09:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:28.334 [2024-07-21 12:09:27.080393] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:28.334 [2024-07-21 12:09:27.082636] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:28.334 [2024-07-21 12:09:27.200455] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:28.334 [2024-07-21 12:09:27.201169] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:28.602 [2024-07-21 12:09:27.411886] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:28.602 [2024-07-21 12:09:27.412243] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:29.173 [2024-07-21 12:09:27.748658] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:29.173 [2024-07-21 12:09:27.873467] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:29.430 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.430 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.430 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:29.430 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:29.430 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.430 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.430 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.430 [2024-07-21 12:09:28.248776] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:29.688 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.688 "name": "raid_bdev1", 00:27:29.688 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:29.688 "strip_size_kb": 0, 00:27:29.688 "state": "online", 00:27:29.688 "raid_level": "raid1", 00:27:29.688 "superblock": false, 00:27:29.688 "num_base_bdevs": 2, 00:27:29.688 "num_base_bdevs_discovered": 2, 00:27:29.688 "num_base_bdevs_operational": 2, 00:27:29.688 "process": { 00:27:29.688 "type": "rebuild", 00:27:29.688 "target": "spare", 00:27:29.688 "progress": { 00:27:29.688 "blocks": 14336, 00:27:29.688 "percent": 21 00:27:29.688 } 00:27:29.688 }, 00:27:29.688 "base_bdevs_list": [ 00:27:29.688 { 00:27:29.688 "name": "spare", 00:27:29.688 "uuid": "742e1582-8c8c-5c36-9ef0-dc90a13ad793", 00:27:29.688 "is_configured": true, 00:27:29.688 "data_offset": 0, 00:27:29.688 "data_size": 65536 00:27:29.688 }, 00:27:29.688 { 00:27:29.689 "name": "BaseBdev2", 00:27:29.689 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:29.689 "is_configured": true, 00:27:29.689 "data_offset": 0, 00:27:29.689 "data_size": 65536 00:27:29.689 } 00:27:29.689 ] 00:27:29.689 }' 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=853 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:29.689 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:29.689 [2024-07-21 12:09:28.464082] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:29.947 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:29.947 "name": "raid_bdev1", 00:27:29.947 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:29.947 "strip_size_kb": 0, 00:27:29.947 "state": "online", 00:27:29.947 "raid_level": "raid1", 00:27:29.947 "superblock": false, 00:27:29.947 "num_base_bdevs": 2, 00:27:29.947 "num_base_bdevs_discovered": 2, 00:27:29.947 "num_base_bdevs_operational": 2, 00:27:29.947 "process": { 00:27:29.947 "type": "rebuild", 00:27:29.947 "target": "spare", 00:27:29.947 "progress": { 00:27:29.947 "blocks": 18432, 00:27:29.947 "percent": 28 00:27:29.947 } 00:27:29.947 }, 00:27:29.947 "base_bdevs_list": [ 00:27:29.947 { 00:27:29.947 "name": "spare", 00:27:29.947 "uuid": "742e1582-8c8c-5c36-9ef0-dc90a13ad793", 00:27:29.947 "is_configured": true, 00:27:29.947 "data_offset": 0, 00:27:29.947 "data_size": 65536 00:27:29.947 }, 00:27:29.947 { 00:27:29.947 "name": "BaseBdev2", 00:27:29.947 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:29.947 "is_configured": true, 00:27:29.947 "data_offset": 0, 00:27:29.947 "data_size": 65536 00:27:29.947 } 00:27:29.947 ] 00:27:29.947 }' 00:27:29.947 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:29.947 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:29.947 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:29.947 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:29.947 12:09:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 
1 00:27:30.204 [2024-07-21 12:09:28.978863] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:30.204 [2024-07-21 12:09:28.979558] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:30.770 [2024-07-21 12:09:29.550373] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:31.077 12:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:31.077 12:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:31.077 12:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:31.077 12:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:31.077 12:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:31.077 12:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:31.077 12:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:31.077 12:09:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:31.077 [2024-07-21 12:09:29.868816] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:27:31.339 [2024-07-21 12:09:29.977899] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:31.339 [2024-07-21 12:09:29.978267] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:27:31.339 12:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:31.339 "name": "raid_bdev1", 00:27:31.339 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:31.339 "strip_size_kb": 0, 00:27:31.339 "state": "online", 00:27:31.339 "raid_level": "raid1", 00:27:31.339 "superblock": false, 00:27:31.339 "num_base_bdevs": 2, 00:27:31.339 "num_base_bdevs_discovered": 2, 00:27:31.339 "num_base_bdevs_operational": 2, 00:27:31.339 "process": { 00:27:31.339 "type": "rebuild", 00:27:31.339 "target": "spare", 00:27:31.339 "progress": { 00:27:31.339 "blocks": 40960, 00:27:31.339 "percent": 62 00:27:31.339 } 00:27:31.339 }, 00:27:31.339 "base_bdevs_list": [ 00:27:31.339 { 00:27:31.339 "name": "spare", 00:27:31.339 "uuid": "742e1582-8c8c-5c36-9ef0-dc90a13ad793", 00:27:31.339 "is_configured": true, 00:27:31.339 "data_offset": 0, 00:27:31.339 "data_size": 65536 00:27:31.339 }, 00:27:31.339 { 00:27:31.339 "name": "BaseBdev2", 00:27:31.339 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:31.339 "is_configured": true, 00:27:31.339 "data_offset": 0, 00:27:31.339 "data_size": 65536 00:27:31.339 } 00:27:31.339 ] 00:27:31.339 }' 00:27:31.339 12:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:31.339 12:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:31.339 12:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:31.339 12:09:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:31.339 12:09:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:31.599 [2024-07-21 12:09:30.414890] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:27:31.858 [2024-07-21 12:09:30.623342] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:27:32.117 [2024-07-21 12:09:30.855740] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:27:32.375 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:32.376 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:32.376 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:32.376 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:32.376 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:32.376 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:32.376 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:32.376 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:32.376 [2024-07-21 12:09:31.182370] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:27:32.635 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:32.635 "name": "raid_bdev1", 00:27:32.635 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:32.635 "strip_size_kb": 0, 00:27:32.635 "state": "online", 00:27:32.635 "raid_level": "raid1", 00:27:32.635 "superblock": false, 00:27:32.635 "num_base_bdevs": 2, 00:27:32.635 "num_base_bdevs_discovered": 2, 00:27:32.635 "num_base_bdevs_operational": 2, 00:27:32.635 "process": { 00:27:32.635 "type": "rebuild", 00:27:32.635 "target": "spare", 00:27:32.635 "progress": { 00:27:32.635 "blocks": 57344, 00:27:32.635 "percent": 87 00:27:32.635 } 00:27:32.635 }, 00:27:32.635 "base_bdevs_list": [ 00:27:32.635 { 00:27:32.635 "name": "spare", 00:27:32.635 "uuid": "742e1582-8c8c-5c36-9ef0-dc90a13ad793", 00:27:32.635 "is_configured": true, 00:27:32.635 "data_offset": 0, 00:27:32.635 "data_size": 65536 00:27:32.635 }, 00:27:32.635 { 00:27:32.635 "name": "BaseBdev2", 00:27:32.635 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:32.635 "is_configured": true, 00:27:32.635 "data_offset": 0, 00:27:32.635 "data_size": 65536 00:27:32.635 } 00:27:32.635 ] 00:27:32.635 }' 00:27:32.635 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:32.635 [2024-07-21 12:09:31.398970] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:27:32.635 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:32.635 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:32.635 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == 
\s\p\a\r\e ]] 00:27:32.635 12:09:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:33.203 [2024-07-21 12:09:31.825465] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:33.203 [2024-07-21 12:09:31.925440] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:33.203 [2024-07-21 12:09:31.927967] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:33.769 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:33.769 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:33.769 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:33.769 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:33.769 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:33.769 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:33.769 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:33.769 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:34.027 "name": "raid_bdev1", 00:27:34.027 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:34.027 "strip_size_kb": 0, 00:27:34.027 "state": "online", 00:27:34.027 "raid_level": "raid1", 00:27:34.027 "superblock": false, 00:27:34.027 "num_base_bdevs": 2, 00:27:34.027 "num_base_bdevs_discovered": 2, 00:27:34.027 "num_base_bdevs_operational": 2, 00:27:34.027 "base_bdevs_list": [ 00:27:34.027 { 00:27:34.027 "name": "spare", 00:27:34.027 "uuid": "742e1582-8c8c-5c36-9ef0-dc90a13ad793", 00:27:34.027 "is_configured": true, 00:27:34.027 "data_offset": 0, 00:27:34.027 "data_size": 65536 00:27:34.027 }, 00:27:34.027 { 00:27:34.027 "name": "BaseBdev2", 00:27:34.027 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:34.027 "is_configured": true, 00:27:34.027 "data_offset": 0, 00:27:34.027 "data_size": 65536 00:27:34.027 } 00:27:34.027 ] 00:27:34.027 }' 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:34.027 12:09:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.027 12:09:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.300 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:34.300 "name": "raid_bdev1", 00:27:34.300 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:34.300 "strip_size_kb": 0, 00:27:34.300 "state": "online", 00:27:34.300 "raid_level": "raid1", 00:27:34.300 "superblock": false, 00:27:34.300 "num_base_bdevs": 2, 00:27:34.300 "num_base_bdevs_discovered": 2, 00:27:34.300 "num_base_bdevs_operational": 2, 00:27:34.300 "base_bdevs_list": [ 00:27:34.300 { 00:27:34.300 "name": "spare", 00:27:34.300 "uuid": "742e1582-8c8c-5c36-9ef0-dc90a13ad793", 00:27:34.300 "is_configured": true, 00:27:34.300 "data_offset": 0, 00:27:34.300 "data_size": 65536 00:27:34.300 }, 00:27:34.300 { 00:27:34.300 "name": "BaseBdev2", 00:27:34.300 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:34.300 "is_configured": true, 00:27:34.300 "data_offset": 0, 00:27:34.300 "data_size": 65536 00:27:34.300 } 00:27:34.300 ] 00:27:34.300 }' 00:27:34.300 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:34.300 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:34.301 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:34.561 "name": "raid_bdev1", 00:27:34.561 "uuid": "133b8cfa-7da2-48b1-890f-c929c7a1fe28", 00:27:34.561 "strip_size_kb": 0, 00:27:34.561 "state": "online", 00:27:34.561 "raid_level": "raid1", 00:27:34.561 "superblock": false, 00:27:34.561 "num_base_bdevs": 2, 00:27:34.561 "num_base_bdevs_discovered": 2, 00:27:34.561 "num_base_bdevs_operational": 
2, 00:27:34.561 "base_bdevs_list": [ 00:27:34.561 { 00:27:34.561 "name": "spare", 00:27:34.561 "uuid": "742e1582-8c8c-5c36-9ef0-dc90a13ad793", 00:27:34.561 "is_configured": true, 00:27:34.561 "data_offset": 0, 00:27:34.561 "data_size": 65536 00:27:34.561 }, 00:27:34.561 { 00:27:34.561 "name": "BaseBdev2", 00:27:34.561 "uuid": "5e75c041-3536-5faf-a2c2-f0a91c18b111", 00:27:34.561 "is_configured": true, 00:27:34.561 "data_offset": 0, 00:27:34.561 "data_size": 65536 00:27:34.561 } 00:27:34.561 ] 00:27:34.561 }' 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:34.561 12:09:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:35.496 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:35.496 [2024-07-21 12:09:34.233279] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:35.496 [2024-07-21 12:09:34.233324] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:35.496 00:27:35.496 Latency(us) 00:27:35.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:35.496 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:35.496 raid_bdev1 : 11.93 108.72 326.15 0.00 0.00 12393.84 297.89 116773.24 00:27:35.496 =================================================================================================================== 00:27:35.496 Total : 108.72 326.15 0.00 0.00 12393.84 297.89 116773.24 00:27:35.496 [2024-07-21 12:09:34.337794] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:35.496 [2024-07-21 12:09:34.337875] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:35.496 [2024-07-21 12:09:34.337986] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:35.496 [2024-07-21 12:09:34.338002] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:27:35.496 0 00:27:35.496 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:35.496 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:35.754 12:09:34 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:35.754 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:36.012 /dev/nbd0 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.012 1+0 records in 00:27:36.012 1+0 records out 00:27:36.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605314 s, 6.8 MB/s 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:27:36.012 12:09:34 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:36.012 12:09:34 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:36.278 /dev/nbd1 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:36.538 1+0 records in 00:27:36.538 1+0 records out 00:27:36.538 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632791 s, 6.5 MB/s 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:36.538 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:36.796 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:27:36.797 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:36.797 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 155948 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 155948 ']' 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 155948 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 155948 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 155948' 00:27:37.055 killing process with pid 155948 00:27:37.055 
12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 155948 00:27:37.055 Received shutdown signal, test time was about 13.472223 seconds 00:27:37.055 00:27:37.055 Latency(us) 00:27:37.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:37.055 =================================================================================================================== 00:27:37.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:37.055 [2024-07-21 12:09:35.875409] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:37.055 12:09:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 155948 00:27:37.055 [2024-07-21 12:09:35.900707] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:37.313 12:09:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:27:37.313 00:27:37.313 real 0m18.348s 00:27:37.313 user 0m28.906s 00:27:37.313 sys 0m1.989s 00:27:37.313 12:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:37.313 12:09:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.313 ************************************ 00:27:37.313 END TEST raid_rebuild_test_io 00:27:37.313 ************************************ 00:27:37.572 12:09:36 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:27:37.572 12:09:36 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:27:37.572 12:09:36 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:37.572 12:09:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:37.572 ************************************ 00:27:37.572 START TEST raid_rebuild_test_sb_io 00:27:37.572 ************************************ 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true true true 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@573 -- # local base_bdevs 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=156426 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 156426 /var/tmp/spdk-raid.sock 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 156426 ']' 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:37.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:37.572 12:09:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:37.572 [2024-07-21 12:09:36.285208] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:27:37.572 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:37.572 Zero copy mechanism will not be used. 
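A minimal sketch of the launch pattern traced above: start bdevperf against a private RPC socket, remember its PID, and wait for the socket to answer before sending any bdev RPCs. The relative binary path, the background-and-poll loop, and the use of rpc_get_methods as a liveness probe are simplifications of the repo's waitforlisten helper, not the exact code.

# Sketch only: launch the bdevperf target and wait for its RPC socket.
rpc_sock=/var/tmp/spdk-raid.sock
./build/examples/bdevperf -r "$rpc_sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# Poll until the app answers a trivial RPC (stand-in for waitforlisten).
until ./scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done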
00:27:37.572 [2024-07-21 12:09:36.285434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156426 ] 00:27:37.830 [2024-07-21 12:09:36.442865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.831 [2024-07-21 12:09:36.531340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.831 [2024-07-21 12:09:36.585572] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:38.397 12:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:38.397 12:09:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:27:38.397 12:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:38.397 12:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:38.655 BaseBdev1_malloc 00:27:38.655 12:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:27:38.913 [2024-07-21 12:09:37.715906] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:27:38.913 [2024-07-21 12:09:37.716244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:38.913 [2024-07-21 12:09:37.716441] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:27:38.913 [2024-07-21 12:09:37.716632] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:38.913 [2024-07-21 12:09:37.719442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:38.913 [2024-07-21 12:09:37.719635] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:38.913 BaseBdev1 00:27:38.913 12:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:27:38.913 12:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:39.172 BaseBdev2_malloc 00:27:39.172 12:09:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:27:39.430 [2024-07-21 12:09:38.211201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:27:39.430 [2024-07-21 12:09:38.211472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:39.430 [2024-07-21 12:09:38.211717] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:39.430 [2024-07-21 12:09:38.211879] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:39.430 [2024-07-21 12:09:38.214490] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:39.430 [2024-07-21 12:09:38.214730] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:39.430 BaseBdev2 00:27:39.430 12:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:27:39.688 spare_malloc 00:27:39.688 12:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:27:39.946 spare_delay 00:27:39.946 12:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:40.203 [2024-07-21 12:09:38.934408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:40.203 [2024-07-21 12:09:38.934763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:40.203 [2024-07-21 12:09:38.934942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:27:40.203 [2024-07-21 12:09:38.935139] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:40.203 [2024-07-21 12:09:38.938017] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:40.203 [2024-07-21 12:09:38.938246] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:40.203 spare 00:27:40.203 12:09:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:27:40.460 [2024-07-21 12:09:39.194764] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:40.460 [2024-07-21 12:09:39.197131] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:40.460 [2024-07-21 12:09:39.197533] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:27:40.460 [2024-07-21 12:09:39.197664] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:40.460 [2024-07-21 12:09:39.197864] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:27:40.460 [2024-07-21 12:09:39.198437] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:27:40.460 [2024-07-21 12:09:39.198660] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:27:40.460 [2024-07-21 12:09:39.199029] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:40.460 12:09:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:40.460 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:40.717 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:40.717 "name": "raid_bdev1", 00:27:40.717 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:40.717 "strip_size_kb": 0, 00:27:40.717 "state": "online", 00:27:40.717 "raid_level": "raid1", 00:27:40.717 "superblock": true, 00:27:40.717 "num_base_bdevs": 2, 00:27:40.717 "num_base_bdevs_discovered": 2, 00:27:40.717 "num_base_bdevs_operational": 2, 00:27:40.717 "base_bdevs_list": [ 00:27:40.717 { 00:27:40.717 "name": "BaseBdev1", 00:27:40.717 "uuid": "ed80e77e-27ab-5c69-95cd-d7e198ca7290", 00:27:40.717 "is_configured": true, 00:27:40.717 "data_offset": 2048, 00:27:40.717 "data_size": 63488 00:27:40.717 }, 00:27:40.717 { 00:27:40.717 "name": "BaseBdev2", 00:27:40.717 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:40.717 "is_configured": true, 00:27:40.717 "data_offset": 2048, 00:27:40.717 "data_size": 63488 00:27:40.717 } 00:27:40.717 ] 00:27:40.717 }' 00:27:40.717 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:40.717 12:09:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:41.295 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:27:41.295 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:27:41.552 [2024-07-21 12:09:40.295589] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:41.552 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:27:41.552 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:41.552 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:27:41.809 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:27:41.809 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:27:41.809 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:27:41.809 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:27:41.809 [2024-07-21 12:09:40.658064] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:27:41.809 I/O size of 3145728 is greater than zero copy threshold (65536). 00:27:41.809 Zero copy mechanism will not be used. 00:27:41.809 Running I/O for 60 seconds... 
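For readers following the RPC traffic, a compact sketch of the create-then-verify step shown above. The $rpc and $sock shorthand variables are introduced here for brevity; the RPC method names and jq filters are the ones that appear in the trace.

# Sketch only: build the raid1 array with superblocks and assert its state over JSON-RPC.
rpc=./scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
jq -r '.state' <<< "$info"                           # expected: online
jq -r '.raid_level' <<< "$info"                      # expected: raid1
jq -r '.num_base_bdevs_operational' <<< "$info"      # expected: 2
jq -r '.base_bdevs_list[0].data_offset' <<< "$info"  # 2048 when a superblock is present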
00:27:42.076 [2024-07-21 12:09:40.778983] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:42.076 [2024-07-21 12:09:40.792216] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:42.076 12:09:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:42.358 12:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:42.358 "name": "raid_bdev1", 00:27:42.358 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:42.358 "strip_size_kb": 0, 00:27:42.358 "state": "online", 00:27:42.358 "raid_level": "raid1", 00:27:42.358 "superblock": true, 00:27:42.358 "num_base_bdevs": 2, 00:27:42.358 "num_base_bdevs_discovered": 1, 00:27:42.358 "num_base_bdevs_operational": 1, 00:27:42.358 "base_bdevs_list": [ 00:27:42.358 { 00:27:42.358 "name": null, 00:27:42.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:42.358 "is_configured": false, 00:27:42.358 "data_offset": 2048, 00:27:42.358 "data_size": 63488 00:27:42.358 }, 00:27:42.358 { 00:27:42.358 "name": "BaseBdev2", 00:27:42.358 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:42.358 "is_configured": true, 00:27:42.358 "data_offset": 2048, 00:27:42.358 "data_size": 63488 00:27:42.358 } 00:27:42.358 ] 00:27:42.358 }' 00:27:42.358 12:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:42.358 12:09:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:42.924 12:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:43.181 [2024-07-21 12:09:41.922117] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:43.181 12:09:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:27:43.181 [2024-07-21 12:09:41.974535] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:27:43.181 [2024-07-21 12:09:41.976994] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:43.440 
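The polling loop that produces the repeated get_bdevs/jq lines below (and earlier, in the non-superblock test) boils down to the sketch here. The 60-second budget and the $rpc/$sock shorthand are illustrative; the '// "none"' fallbacks are the ones used by the script.

# Sketch only: wait for the rebuild onto "spare" to finish, polling once per second.
deadline=$((SECONDS + 60))
while (( SECONDS < deadline )); do
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  ptype=$(jq -r '.process.type // "none"' <<< "$info")      # "rebuild" while running, "none" afterwards
  ptarget=$(jq -r '.process.target // "none"' <<< "$info")
  [[ $ptype == rebuild && $ptarget == spare ]] || break
  sleep 1
done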
[2024-07-21 12:09:42.094274] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:43.440 [2024-07-21 12:09:42.094994] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:43.697 [2024-07-21 12:09:42.318807] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:43.697 [2024-07-21 12:09:42.319303] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:43.954 [2024-07-21 12:09:42.651638] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:44.211 [2024-07-21 12:09:42.860290] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:44.211 [2024-07-21 12:09:42.860922] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:44.211 12:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:44.211 12:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:44.211 12:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:44.211 12:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:44.211 12:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:44.211 12:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.211 12:09:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:44.469 [2024-07-21 12:09:43.205055] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:44.469 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.469 "name": "raid_bdev1", 00:27:44.469 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:44.469 "strip_size_kb": 0, 00:27:44.469 "state": "online", 00:27:44.469 "raid_level": "raid1", 00:27:44.469 "superblock": true, 00:27:44.469 "num_base_bdevs": 2, 00:27:44.469 "num_base_bdevs_discovered": 2, 00:27:44.469 "num_base_bdevs_operational": 2, 00:27:44.469 "process": { 00:27:44.469 "type": "rebuild", 00:27:44.469 "target": "spare", 00:27:44.469 "progress": { 00:27:44.469 "blocks": 14336, 00:27:44.469 "percent": 22 00:27:44.469 } 00:27:44.469 }, 00:27:44.469 "base_bdevs_list": [ 00:27:44.469 { 00:27:44.469 "name": "spare", 00:27:44.469 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:44.469 "is_configured": true, 00:27:44.469 "data_offset": 2048, 00:27:44.469 "data_size": 63488 00:27:44.469 }, 00:27:44.469 { 00:27:44.469 "name": "BaseBdev2", 00:27:44.469 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:44.469 "is_configured": true, 00:27:44.469 "data_offset": 2048, 00:27:44.469 "data_size": 63488 00:27:44.469 } 00:27:44.469 ] 00:27:44.469 }' 00:27:44.469 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:44.469 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:27:44.469 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:44.469 [2024-07-21 12:09:43.329664] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:44.469 [2024-07-21 12:09:43.329934] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:44.727 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:44.727 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:44.727 [2024-07-21 12:09:43.554787] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:44.984 [2024-07-21 12:09:43.657488] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:27:44.984 [2024-07-21 12:09:43.670023] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:44.984 [2024-07-21 12:09:43.670075] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:44.984 [2024-07-21 12:09:43.670090] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:27:44.984 [2024-07-21 12:09:43.699247] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ba0 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:44.984 12:09:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:45.242 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:45.242 "name": "raid_bdev1", 00:27:45.242 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:45.242 "strip_size_kb": 0, 00:27:45.242 "state": "online", 00:27:45.242 "raid_level": "raid1", 00:27:45.242 "superblock": true, 00:27:45.242 "num_base_bdevs": 2, 00:27:45.242 "num_base_bdevs_discovered": 1, 00:27:45.242 "num_base_bdevs_operational": 1, 00:27:45.242 "base_bdevs_list": [ 00:27:45.242 { 00:27:45.242 "name": null, 00:27:45.242 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:27:45.242 "is_configured": false, 00:27:45.242 "data_offset": 2048, 00:27:45.242 "data_size": 63488 00:27:45.242 }, 00:27:45.242 { 00:27:45.242 "name": "BaseBdev2", 00:27:45.242 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:45.242 "is_configured": true, 00:27:45.242 "data_offset": 2048, 00:27:45.242 "data_size": 63488 00:27:45.242 } 00:27:45.242 ] 00:27:45.242 }' 00:27:45.242 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:45.242 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:45.808 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:45.808 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:45.808 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:45.808 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:45.808 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:45.808 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:45.808 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:46.067 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:46.067 "name": "raid_bdev1", 00:27:46.067 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:46.067 "strip_size_kb": 0, 00:27:46.067 "state": "online", 00:27:46.067 "raid_level": "raid1", 00:27:46.067 "superblock": true, 00:27:46.067 "num_base_bdevs": 2, 00:27:46.067 "num_base_bdevs_discovered": 1, 00:27:46.067 "num_base_bdevs_operational": 1, 00:27:46.067 "base_bdevs_list": [ 00:27:46.067 { 00:27:46.067 "name": null, 00:27:46.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:46.067 "is_configured": false, 00:27:46.067 "data_offset": 2048, 00:27:46.067 "data_size": 63488 00:27:46.067 }, 00:27:46.067 { 00:27:46.067 "name": "BaseBdev2", 00:27:46.067 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:46.067 "is_configured": true, 00:27:46.067 "data_offset": 2048, 00:27:46.067 "data_size": 63488 00:27:46.067 } 00:27:46.067 ] 00:27:46.067 }' 00:27:46.067 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:46.326 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:46.326 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:46.326 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:46.326 12:09:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:46.326 [2024-07-21 12:09:45.192169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:46.584 12:09:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:27:46.584 [2024-07-21 12:09:45.243944] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:46.584 [2024-07-21 12:09:45.246282] 
bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:46.584 [2024-07-21 12:09:45.356473] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:46.584 [2024-07-21 12:09:45.357079] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:27:46.843 [2024-07-21 12:09:45.567262] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:46.843 [2024-07-21 12:09:45.567640] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:27:47.101 [2024-07-21 12:09:45.841951] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:27:47.359 [2024-07-21 12:09:45.985095] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:27:47.359 [2024-07-21 12:09:46.210903] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:27:47.617 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.617 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.617 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.617 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.617 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.617 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.617 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.617 [2024-07-21 12:09:46.421653] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:47.617 [2024-07-21 12:09:46.421939] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.889 "name": "raid_bdev1", 00:27:47.889 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:47.889 "strip_size_kb": 0, 00:27:47.889 "state": "online", 00:27:47.889 "raid_level": "raid1", 00:27:47.889 "superblock": true, 00:27:47.889 "num_base_bdevs": 2, 00:27:47.889 "num_base_bdevs_discovered": 2, 00:27:47.889 "num_base_bdevs_operational": 2, 00:27:47.889 "process": { 00:27:47.889 "type": "rebuild", 00:27:47.889 "target": "spare", 00:27:47.889 "progress": { 00:27:47.889 "blocks": 16384, 00:27:47.889 "percent": 25 00:27:47.889 } 00:27:47.889 }, 00:27:47.889 "base_bdevs_list": [ 00:27:47.889 { 00:27:47.889 "name": "spare", 00:27:47.889 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:47.889 "is_configured": true, 00:27:47.889 "data_offset": 2048, 00:27:47.889 "data_size": 63488 00:27:47.889 }, 00:27:47.889 { 00:27:47.889 "name": "BaseBdev2", 00:27:47.889 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:47.889 "is_configured": true, 00:27:47.889 "data_offset": 2048, 00:27:47.889 
"data_size": 63488 00:27:47.889 } 00:27:47.889 ] 00:27:47.889 }' 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:27:47.889 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=871 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:47.889 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.147 [2024-07-21 12:09:46.761236] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:27:48.147 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:48.147 "name": "raid_bdev1", 00:27:48.147 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:48.147 "strip_size_kb": 0, 00:27:48.147 "state": "online", 00:27:48.147 "raid_level": "raid1", 00:27:48.147 "superblock": true, 00:27:48.147 "num_base_bdevs": 2, 00:27:48.147 "num_base_bdevs_discovered": 2, 00:27:48.147 "num_base_bdevs_operational": 2, 00:27:48.147 "process": { 00:27:48.147 "type": "rebuild", 00:27:48.147 "target": "spare", 00:27:48.147 "progress": { 00:27:48.147 "blocks": 20480, 00:27:48.147 "percent": 32 00:27:48.147 } 00:27:48.147 }, 00:27:48.147 "base_bdevs_list": [ 00:27:48.147 { 00:27:48.147 "name": "spare", 00:27:48.147 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:48.147 "is_configured": true, 00:27:48.147 "data_offset": 2048, 00:27:48.147 "data_size": 63488 00:27:48.147 }, 00:27:48.147 { 00:27:48.147 "name": "BaseBdev2", 00:27:48.147 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:48.147 "is_configured": true, 00:27:48.147 "data_offset": 2048, 
00:27:48.147 "data_size": 63488 00:27:48.147 } 00:27:48.147 ] 00:27:48.147 }' 00:27:48.147 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:48.147 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:48.147 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:48.147 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:48.147 12:09:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:48.147 [2024-07-21 12:09:46.978323] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:27:48.714 [2024-07-21 12:09:47.318021] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:27:48.714 [2024-07-21 12:09:47.550441] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:27:49.280 12:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:49.280 12:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:49.280 12:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:49.280 12:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:49.280 12:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:49.280 12:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:49.280 12:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:49.280 12:09:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:49.280 [2024-07-21 12:09:48.010174] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:27:49.538 12:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.538 "name": "raid_bdev1", 00:27:49.538 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:49.538 "strip_size_kb": 0, 00:27:49.538 "state": "online", 00:27:49.538 "raid_level": "raid1", 00:27:49.538 "superblock": true, 00:27:49.538 "num_base_bdevs": 2, 00:27:49.538 "num_base_bdevs_discovered": 2, 00:27:49.538 "num_base_bdevs_operational": 2, 00:27:49.538 "process": { 00:27:49.538 "type": "rebuild", 00:27:49.538 "target": "spare", 00:27:49.538 "progress": { 00:27:49.538 "blocks": 36864, 00:27:49.538 "percent": 58 00:27:49.538 } 00:27:49.538 }, 00:27:49.538 "base_bdevs_list": [ 00:27:49.538 { 00:27:49.538 "name": "spare", 00:27:49.538 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:49.538 "is_configured": true, 00:27:49.538 "data_offset": 2048, 00:27:49.538 "data_size": 63488 00:27:49.538 }, 00:27:49.538 { 00:27:49.538 "name": "BaseBdev2", 00:27:49.538 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:49.538 "is_configured": true, 00:27:49.538 "data_offset": 2048, 00:27:49.538 "data_size": 63488 00:27:49.538 } 00:27:49.538 ] 00:27:49.538 }' 00:27:49.538 12:09:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:49.538 12:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:49.538 12:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:49.538 12:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:49.538 12:09:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:49.796 [2024-07-21 12:09:48.591277] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:27:50.055 [2024-07-21 12:09:48.705970] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:27:50.313 [2024-07-21 12:09:49.032135] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:27:50.571 [2024-07-21 12:09:49.241276] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:27:50.571 [2024-07-21 12:09:49.241620] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:27:50.571 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:50.571 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:50.571 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:50.571 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:50.571 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:50.571 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:50.571 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:50.571 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:50.829 [2024-07-21 12:09:49.554636] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:27:50.829 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:50.829 "name": "raid_bdev1", 00:27:50.829 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:50.829 "strip_size_kb": 0, 00:27:50.829 "state": "online", 00:27:50.829 "raid_level": "raid1", 00:27:50.829 "superblock": true, 00:27:50.829 "num_base_bdevs": 2, 00:27:50.829 "num_base_bdevs_discovered": 2, 00:27:50.829 "num_base_bdevs_operational": 2, 00:27:50.829 "process": { 00:27:50.829 "type": "rebuild", 00:27:50.829 "target": "spare", 00:27:50.829 "progress": { 00:27:50.829 "blocks": 59392, 00:27:50.829 "percent": 93 00:27:50.829 } 00:27:50.829 }, 00:27:50.829 "base_bdevs_list": [ 00:27:50.829 { 00:27:50.829 "name": "spare", 00:27:50.829 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:50.829 "is_configured": true, 00:27:50.829 "data_offset": 2048, 00:27:50.829 "data_size": 63488 00:27:50.829 }, 00:27:50.829 { 00:27:50.829 "name": "BaseBdev2", 00:27:50.829 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:50.829 "is_configured": true, 
00:27:50.829 "data_offset": 2048, 00:27:50.829 "data_size": 63488 00:27:50.829 } 00:27:50.829 ] 00:27:50.829 }' 00:27:50.829 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:50.829 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:50.829 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:51.087 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:51.087 12:09:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:27:51.087 [2024-07-21 12:09:49.859510] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:27:51.087 [2024-07-21 12:09:49.901670] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:27:51.087 [2024-07-21 12:09:49.910255] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:52.022 12:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:27:52.022 12:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:52.022 12:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:52.022 12:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:52.022 12:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:52.022 12:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:52.022 12:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.022 12:09:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:52.280 "name": "raid_bdev1", 00:27:52.280 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:52.280 "strip_size_kb": 0, 00:27:52.280 "state": "online", 00:27:52.280 "raid_level": "raid1", 00:27:52.280 "superblock": true, 00:27:52.280 "num_base_bdevs": 2, 00:27:52.280 "num_base_bdevs_discovered": 2, 00:27:52.280 "num_base_bdevs_operational": 2, 00:27:52.280 "base_bdevs_list": [ 00:27:52.280 { 00:27:52.280 "name": "spare", 00:27:52.280 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:52.280 "is_configured": true, 00:27:52.280 "data_offset": 2048, 00:27:52.280 "data_size": 63488 00:27:52.280 }, 00:27:52.280 { 00:27:52.280 "name": "BaseBdev2", 00:27:52.280 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:52.280 "is_configured": true, 00:27:52.280 "data_offset": 2048, 00:27:52.280 "data_size": 63488 00:27:52.280 } 00:27:52.280 ] 00:27:52.280 }' 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # break 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.280 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:52.846 "name": "raid_bdev1", 00:27:52.846 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:52.846 "strip_size_kb": 0, 00:27:52.846 "state": "online", 00:27:52.846 "raid_level": "raid1", 00:27:52.846 "superblock": true, 00:27:52.846 "num_base_bdevs": 2, 00:27:52.846 "num_base_bdevs_discovered": 2, 00:27:52.846 "num_base_bdevs_operational": 2, 00:27:52.846 "base_bdevs_list": [ 00:27:52.846 { 00:27:52.846 "name": "spare", 00:27:52.846 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:52.846 "is_configured": true, 00:27:52.846 "data_offset": 2048, 00:27:52.846 "data_size": 63488 00:27:52.846 }, 00:27:52.846 { 00:27:52.846 "name": "BaseBdev2", 00:27:52.846 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:52.846 "is_configured": true, 00:27:52.846 "data_offset": 2048, 00:27:52.846 "data_size": 63488 00:27:52.846 } 00:27:52.846 ] 00:27:52.846 }' 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:52.846 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:53.104 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:53.104 "name": "raid_bdev1", 00:27:53.104 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:53.104 "strip_size_kb": 0, 00:27:53.104 "state": "online", 00:27:53.104 "raid_level": "raid1", 00:27:53.104 "superblock": true, 00:27:53.104 "num_base_bdevs": 2, 00:27:53.104 "num_base_bdevs_discovered": 2, 00:27:53.104 "num_base_bdevs_operational": 2, 00:27:53.104 "base_bdevs_list": [ 00:27:53.104 { 00:27:53.104 "name": "spare", 00:27:53.104 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:53.104 "is_configured": true, 00:27:53.104 "data_offset": 2048, 00:27:53.104 "data_size": 63488 00:27:53.104 }, 00:27:53.104 { 00:27:53.104 "name": "BaseBdev2", 00:27:53.104 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:53.104 "is_configured": true, 00:27:53.104 "data_offset": 2048, 00:27:53.104 "data_size": 63488 00:27:53.104 } 00:27:53.104 ] 00:27:53.104 }' 00:27:53.104 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:53.104 12:09:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:53.669 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:27:53.926 [2024-07-21 12:09:52.688476] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:53.926 [2024-07-21 12:09:52.688538] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:53.926 00:27:53.926 Latency(us) 00:27:53.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:53.927 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:27:53.927 raid_bdev1 : 12.07 111.67 335.00 0.00 0.00 11879.56 294.17 116773.24 00:27:53.927 =================================================================================================================== 00:27:53.927 Total : 111.67 335.00 0.00 0.00 11879.56 294.17 116773.24 00:27:53.927 [2024-07-21 12:09:52.736210] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:53.927 [2024-07-21 12:09:52.736273] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:53.927 [2024-07-21 12:09:52.736368] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:53.927 [2024-07-21 12:09:52.736385] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:27:53.927 0 00:27:53.927 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:53.927 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:27:54.185 12:09:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:54.185 12:09:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:27:54.443 /dev/nbd0 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:54.443 1+0 records in 00:27:54.443 1+0 records out 00:27:54.443 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420009 s, 9.8 MB/s 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev2 ']' 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:54.443 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:27:54.700 /dev/nbd1 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:54.700 1+0 records in 00:27:54.700 1+0 records out 00:27:54.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355779 s, 11.5 MB/s 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:54.700 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 
1048576 /dev/nbd0 /dev/nbd1 00:27:54.956 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:27:54.956 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:54.956 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:27:54.956 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:54.956 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:54.956 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:54.956 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:55.213 12:09:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:27:55.471 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:27:55.728 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:27:55.986 [2024-07-21 12:09:54.596748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:27:55.986 [2024-07-21 12:09:54.596887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:55.986 [2024-07-21 12:09:54.596938] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:55.986 [2024-07-21 12:09:54.596979] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:55.986 [2024-07-21 12:09:54.599763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:55.986 [2024-07-21 12:09:54.599814] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:27:55.986 [2024-07-21 12:09:54.599936] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:27:55.986 [2024-07-21 12:09:54.600000] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:55.986 [2024-07-21 12:09:54.600191] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:55.986 spare 00:27:55.986 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:55.986 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:55.987 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:55.987 [2024-07-21 12:09:54.700289] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:27:55.987 [2024-07-21 12:09:54.700547] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:55.987 [2024-07-21 12:09:54.700766] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:27:55.987 [2024-07-21 12:09:54.701336] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:27:55.987 [2024-07-21 12:09:54.701462] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x61600000a580 00:27:55.987 [2024-07-21 12:09:54.701753] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:56.245 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:56.245 "name": "raid_bdev1", 00:27:56.245 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:56.245 "strip_size_kb": 0, 00:27:56.245 "state": "online", 00:27:56.245 "raid_level": "raid1", 00:27:56.245 "superblock": true, 00:27:56.245 "num_base_bdevs": 2, 00:27:56.245 "num_base_bdevs_discovered": 2, 00:27:56.245 "num_base_bdevs_operational": 2, 00:27:56.245 "base_bdevs_list": [ 00:27:56.245 { 00:27:56.245 "name": "spare", 00:27:56.245 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:56.245 "is_configured": true, 00:27:56.245 "data_offset": 2048, 00:27:56.245 "data_size": 63488 00:27:56.245 }, 00:27:56.245 { 00:27:56.245 "name": "BaseBdev2", 00:27:56.245 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:56.245 "is_configured": true, 00:27:56.245 "data_offset": 2048, 00:27:56.245 "data_size": 63488 00:27:56.245 } 00:27:56.245 ] 00:27:56.245 }' 00:27:56.245 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:56.245 12:09:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:56.811 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:27:56.811 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:56.811 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:27:56.811 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:27:56.811 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:56.811 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:56.811 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.069 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:57.069 "name": "raid_bdev1", 00:27:57.069 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:57.069 "strip_size_kb": 0, 00:27:57.069 "state": "online", 00:27:57.069 "raid_level": "raid1", 00:27:57.069 "superblock": true, 00:27:57.069 "num_base_bdevs": 2, 00:27:57.069 "num_base_bdevs_discovered": 2, 00:27:57.069 "num_base_bdevs_operational": 2, 00:27:57.069 "base_bdevs_list": [ 00:27:57.069 { 00:27:57.069 "name": "spare", 00:27:57.069 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:57.069 "is_configured": true, 00:27:57.069 "data_offset": 2048, 00:27:57.069 "data_size": 63488 00:27:57.069 }, 00:27:57.069 { 00:27:57.069 "name": "BaseBdev2", 00:27:57.069 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:57.069 "is_configured": true, 00:27:57.069 "data_offset": 2048, 00:27:57.069 "data_size": 63488 00:27:57.069 } 00:27:57.069 ] 00:27:57.069 }' 00:27:57.069 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:57.069 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:27:57.069 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:57.069 12:09:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:27:57.069 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.069 12:09:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:27:57.327 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:27:57.327 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:27:57.585 [2024-07-21 12:09:56.290375] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:57.585 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.843 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:27:57.843 "name": "raid_bdev1", 00:27:57.843 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:57.843 "strip_size_kb": 0, 00:27:57.843 "state": "online", 00:27:57.843 "raid_level": "raid1", 00:27:57.843 "superblock": true, 00:27:57.843 "num_base_bdevs": 2, 00:27:57.843 "num_base_bdevs_discovered": 1, 00:27:57.843 "num_base_bdevs_operational": 1, 00:27:57.843 "base_bdevs_list": [ 00:27:57.843 { 00:27:57.843 "name": null, 00:27:57.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:57.843 "is_configured": false, 00:27:57.843 "data_offset": 2048, 00:27:57.843 "data_size": 63488 00:27:57.843 }, 00:27:57.843 { 00:27:57.843 "name": "BaseBdev2", 00:27:57.843 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:57.843 "is_configured": true, 00:27:57.843 "data_offset": 2048, 00:27:57.843 "data_size": 63488 00:27:57.843 } 00:27:57.843 ] 00:27:57.843 }' 00:27:57.843 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:27:57.843 12:09:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:27:58.408 12:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:27:58.667 [2024-07-21 12:09:57.359130] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:58.667 [2024-07-21 12:09:57.359691] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:27:58.667 [2024-07-21 12:09:57.359855] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:27:58.667 [2024-07-21 12:09:57.359993] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:27:58.667 [2024-07-21 12:09:57.367853] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:27:58.667 [2024-07-21 12:09:57.370207] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:27:58.667 12:09:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:27:59.602 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:27:59.602 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:27:59.602 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:27:59.602 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:27:59.602 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:27:59.602 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:27:59.602 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:59.860 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:59.860 "name": "raid_bdev1", 00:27:59.860 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:27:59.860 "strip_size_kb": 0, 00:27:59.860 "state": "online", 00:27:59.860 "raid_level": "raid1", 00:27:59.860 "superblock": true, 00:27:59.860 "num_base_bdevs": 2, 00:27:59.860 "num_base_bdevs_discovered": 2, 00:27:59.860 "num_base_bdevs_operational": 2, 00:27:59.860 "process": { 00:27:59.860 "type": "rebuild", 00:27:59.860 "target": "spare", 00:27:59.860 "progress": { 00:27:59.860 "blocks": 24576, 00:27:59.860 "percent": 38 00:27:59.860 } 00:27:59.860 }, 00:27:59.860 "base_bdevs_list": [ 00:27:59.860 { 00:27:59.860 "name": "spare", 00:27:59.860 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:27:59.860 "is_configured": true, 00:27:59.860 "data_offset": 2048, 00:27:59.860 "data_size": 63488 00:27:59.860 }, 00:27:59.860 { 00:27:59.860 "name": "BaseBdev2", 00:27:59.860 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:27:59.860 "is_configured": true, 00:27:59.860 "data_offset": 2048, 00:27:59.860 "data_size": 63488 00:27:59.860 } 00:27:59.860 ] 00:27:59.860 }' 00:27:59.860 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:27:59.860 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:27:59.860 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:27:59.860 12:09:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:27:59.860 12:09:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:00.118 [2024-07-21 12:09:58.904581] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:00.118 [2024-07-21 12:09:58.981078] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:00.118 [2024-07-21 12:09:58.981448] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:00.118 [2024-07-21 12:09:58.981518] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:00.118 [2024-07-21 12:09:58.981644] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:00.375 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:00.643 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:00.643 "name": "raid_bdev1", 00:28:00.643 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:28:00.643 "strip_size_kb": 0, 00:28:00.643 "state": "online", 00:28:00.643 "raid_level": "raid1", 00:28:00.643 "superblock": true, 00:28:00.643 "num_base_bdevs": 2, 00:28:00.643 "num_base_bdevs_discovered": 1, 00:28:00.643 "num_base_bdevs_operational": 1, 00:28:00.643 "base_bdevs_list": [ 00:28:00.643 { 00:28:00.643 "name": null, 00:28:00.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:00.643 "is_configured": false, 00:28:00.643 "data_offset": 2048, 00:28:00.643 "data_size": 63488 00:28:00.643 }, 00:28:00.643 { 00:28:00.643 "name": "BaseBdev2", 00:28:00.643 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:28:00.643 "is_configured": true, 00:28:00.643 "data_offset": 2048, 00:28:00.643 "data_size": 63488 00:28:00.643 } 00:28:00.643 ] 00:28:00.643 }' 00:28:00.643 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:00.643 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:01.241 12:09:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:01.499 [2024-07-21 12:10:00.206526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:01.499 [2024-07-21 12:10:00.206982] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:01.499 [2024-07-21 12:10:00.207088] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:28:01.499 [2024-07-21 12:10:00.207465] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:01.499 [2024-07-21 12:10:00.208119] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:01.499 [2024-07-21 12:10:00.208302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:01.499 [2024-07-21 12:10:00.208574] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:01.499 [2024-07-21 12:10:00.208719] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:28:01.499 [2024-07-21 12:10:00.208825] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:28:01.499 [2024-07-21 12:10:00.209033] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:01.499 [2024-07-21 12:10:00.216544] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b410 00:28:01.499 spare 00:28:01.499 [2024-07-21 12:10:00.218910] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:01.499 12:10:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:28:02.432 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:02.432 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:02.432 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:02.432 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:02.432 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:02.432 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:02.432 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:02.688 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:02.688 "name": "raid_bdev1", 00:28:02.688 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:28:02.688 "strip_size_kb": 0, 00:28:02.688 "state": "online", 00:28:02.688 "raid_level": "raid1", 00:28:02.688 "superblock": true, 00:28:02.688 "num_base_bdevs": 2, 00:28:02.688 "num_base_bdevs_discovered": 2, 00:28:02.688 "num_base_bdevs_operational": 2, 00:28:02.688 "process": { 00:28:02.688 "type": "rebuild", 00:28:02.688 "target": "spare", 00:28:02.688 "progress": { 00:28:02.688 "blocks": 24576, 00:28:02.688 "percent": 38 00:28:02.688 } 00:28:02.688 }, 00:28:02.688 "base_bdevs_list": [ 00:28:02.688 { 00:28:02.688 "name": "spare", 00:28:02.688 "uuid": "151094d6-c0d2-580e-8350-dd102638a214", 00:28:02.688 "is_configured": true, 00:28:02.688 "data_offset": 2048, 00:28:02.688 "data_size": 63488 00:28:02.688 
}, 00:28:02.688 { 00:28:02.688 "name": "BaseBdev2", 00:28:02.688 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:28:02.688 "is_configured": true, 00:28:02.688 "data_offset": 2048, 00:28:02.688 "data_size": 63488 00:28:02.688 } 00:28:02.688 ] 00:28:02.688 }' 00:28:02.688 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:02.688 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:02.688 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:02.946 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:02.946 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:02.946 [2024-07-21 12:10:01.802248] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:03.203 [2024-07-21 12:10:01.830794] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:03.203 [2024-07-21 12:10:01.831112] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:03.203 [2024-07-21 12:10:01.831248] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:03.203 [2024-07-21 12:10:01.831365] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:03.203 12:10:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:03.461 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:03.461 "name": "raid_bdev1", 00:28:03.461 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:28:03.461 "strip_size_kb": 0, 00:28:03.461 "state": "online", 00:28:03.461 "raid_level": "raid1", 00:28:03.461 "superblock": true, 00:28:03.461 "num_base_bdevs": 2, 00:28:03.461 "num_base_bdevs_discovered": 1, 00:28:03.461 "num_base_bdevs_operational": 1, 00:28:03.461 "base_bdevs_list": [ 00:28:03.461 { 00:28:03.461 "name": null, 00:28:03.461 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:03.461 "is_configured": false, 00:28:03.461 "data_offset": 2048, 00:28:03.461 "data_size": 63488 00:28:03.461 }, 00:28:03.461 { 00:28:03.461 "name": "BaseBdev2", 00:28:03.461 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:28:03.461 "is_configured": true, 00:28:03.461 "data_offset": 2048, 00:28:03.461 "data_size": 63488 00:28:03.461 } 00:28:03.461 ] 00:28:03.461 }' 00:28:03.461 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:03.461 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:04.027 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:04.027 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:04.027 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:04.027 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:04.027 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:04.027 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:04.027 12:10:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:04.290 12:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:04.290 "name": "raid_bdev1", 00:28:04.290 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:28:04.290 "strip_size_kb": 0, 00:28:04.290 "state": "online", 00:28:04.290 "raid_level": "raid1", 00:28:04.290 "superblock": true, 00:28:04.290 "num_base_bdevs": 2, 00:28:04.290 "num_base_bdevs_discovered": 1, 00:28:04.290 "num_base_bdevs_operational": 1, 00:28:04.290 "base_bdevs_list": [ 00:28:04.290 { 00:28:04.290 "name": null, 00:28:04.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.290 "is_configured": false, 00:28:04.290 "data_offset": 2048, 00:28:04.290 "data_size": 63488 00:28:04.290 }, 00:28:04.290 { 00:28:04.290 "name": "BaseBdev2", 00:28:04.291 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:28:04.291 "is_configured": true, 00:28:04.291 "data_offset": 2048, 00:28:04.291 "data_size": 63488 00:28:04.291 } 00:28:04.291 ] 00:28:04.291 }' 00:28:04.291 12:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:04.291 12:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:04.291 12:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:04.291 12:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:04.291 12:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:28:04.550 12:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:04.808 [2024-07-21 12:10:03.651308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:04.808 [2024-07-21 12:10:03.651797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:28:04.808 [2024-07-21 12:10:03.651999] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:28:04.809 [2024-07-21 12:10:03.652144] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:04.809 [2024-07-21 12:10:03.652830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:04.809 [2024-07-21 12:10:03.653003] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:04.809 [2024-07-21 12:10:03.653218] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:28:04.809 [2024-07-21 12:10:03.653341] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:04.809 [2024-07-21 12:10:03.653446] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:04.809 BaseBdev1 00:28:04.809 12:10:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:06.183 "name": "raid_bdev1", 00:28:06.183 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:28:06.183 "strip_size_kb": 0, 00:28:06.183 "state": "online", 00:28:06.183 "raid_level": "raid1", 00:28:06.183 "superblock": true, 00:28:06.183 "num_base_bdevs": 2, 00:28:06.183 "num_base_bdevs_discovered": 1, 00:28:06.183 "num_base_bdevs_operational": 1, 00:28:06.183 "base_bdevs_list": [ 00:28:06.183 { 00:28:06.183 "name": null, 00:28:06.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.183 "is_configured": false, 00:28:06.183 "data_offset": 2048, 00:28:06.183 "data_size": 63488 00:28:06.183 }, 00:28:06.183 { 00:28:06.183 "name": "BaseBdev2", 00:28:06.183 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:28:06.183 "is_configured": true, 00:28:06.183 "data_offset": 2048, 00:28:06.183 "data_size": 63488 00:28:06.183 } 00:28:06.183 ] 00:28:06.183 }' 00:28:06.183 12:10:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:06.183 12:10:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:06.750 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:06.750 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:06.750 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:06.750 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:06.750 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:06.750 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:06.750 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:07.318 "name": "raid_bdev1", 00:28:07.318 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:28:07.318 "strip_size_kb": 0, 00:28:07.318 "state": "online", 00:28:07.318 "raid_level": "raid1", 00:28:07.318 "superblock": true, 00:28:07.318 "num_base_bdevs": 2, 00:28:07.318 "num_base_bdevs_discovered": 1, 00:28:07.318 "num_base_bdevs_operational": 1, 00:28:07.318 "base_bdevs_list": [ 00:28:07.318 { 00:28:07.318 "name": null, 00:28:07.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.318 "is_configured": false, 00:28:07.318 "data_offset": 2048, 00:28:07.318 "data_size": 63488 00:28:07.318 }, 00:28:07.318 { 00:28:07.318 "name": "BaseBdev2", 00:28:07.318 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:28:07.318 "is_configured": true, 00:28:07.318 "data_offset": 2048, 00:28:07.318 "data_size": 63488 00:28:07.318 } 00:28:07.318 ] 00:28:07.318 }' 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:28:07.318 12:10:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:28:07.576 [2024-07-21 12:10:06.196175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:07.576 [2024-07-21 12:10:06.196764] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:28:07.576 [2024-07-21 12:10:06.196938] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:28:07.576 request: 00:28:07.576 { 00:28:07.576 "raid_bdev": "raid_bdev1", 00:28:07.577 "base_bdev": "BaseBdev1", 00:28:07.577 "method": "bdev_raid_add_base_bdev", 00:28:07.577 "req_id": 1 00:28:07.577 } 00:28:07.577 Got JSON-RPC error response 00:28:07.577 response: 00:28:07.577 { 00:28:07.577 "code": -22, 00:28:07.577 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:28:07.577 } 00:28:07.577 12:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:28:07.577 12:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:07.577 12:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:07.577 12:10:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:07.577 12:10:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:08.513 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:28:08.772 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:08.772 "name": "raid_bdev1", 00:28:08.772 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:28:08.772 "strip_size_kb": 0, 00:28:08.772 "state": "online", 00:28:08.772 "raid_level": "raid1", 00:28:08.772 "superblock": true, 00:28:08.772 "num_base_bdevs": 2, 00:28:08.772 "num_base_bdevs_discovered": 1, 00:28:08.772 "num_base_bdevs_operational": 1, 00:28:08.772 "base_bdevs_list": [ 00:28:08.772 { 00:28:08.772 "name": null, 00:28:08.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.772 "is_configured": false, 00:28:08.772 "data_offset": 2048, 00:28:08.772 "data_size": 63488 00:28:08.772 }, 00:28:08.772 { 00:28:08.772 "name": "BaseBdev2", 00:28:08.772 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:28:08.772 "is_configured": true, 00:28:08.772 "data_offset": 2048, 00:28:08.772 "data_size": 63488 00:28:08.772 } 00:28:08.772 ] 00:28:08.772 }' 00:28:08.772 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:08.772 12:10:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:09.339 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:09.339 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:09.339 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:09.339 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:09.339 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:09.339 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:09.339 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:09.599 "name": "raid_bdev1", 00:28:09.599 "uuid": "384ec88f-d046-424c-a024-05716fa84932", 00:28:09.599 "strip_size_kb": 0, 00:28:09.599 "state": "online", 00:28:09.599 "raid_level": "raid1", 00:28:09.599 "superblock": true, 00:28:09.599 "num_base_bdevs": 2, 00:28:09.599 "num_base_bdevs_discovered": 1, 00:28:09.599 "num_base_bdevs_operational": 1, 00:28:09.599 "base_bdevs_list": [ 00:28:09.599 { 00:28:09.599 "name": null, 00:28:09.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:09.599 "is_configured": false, 00:28:09.599 "data_offset": 2048, 00:28:09.599 "data_size": 63488 00:28:09.599 }, 00:28:09.599 { 00:28:09.599 "name": "BaseBdev2", 00:28:09.599 "uuid": "290738e2-c427-52ed-aacd-8f10d486706f", 00:28:09.599 "is_configured": true, 00:28:09.599 "data_offset": 2048, 00:28:09.599 "data_size": 63488 00:28:09.599 } 00:28:09.599 ] 00:28:09.599 }' 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:09.599 12:10:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 156426 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 156426 ']' 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 156426 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:09.599 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 156426 00:28:09.874 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:09.874 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:09.874 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 156426' 00:28:09.874 killing process with pid 156426 00:28:09.874 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 156426 00:28:09.874 Received shutdown signal, test time was about 27.807720 seconds 00:28:09.874 00:28:09.874 Latency(us) 00:28:09.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.874 =================================================================================================================== 00:28:09.874 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.874 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 156426 00:28:09.874 [2024-07-21 12:10:08.468882] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:09.874 [2024-07-21 12:10:08.469312] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:09.874 [2024-07-21 12:10:08.469498] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:09.874 [2024-07-21 12:10:08.469611] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:28:09.874 [2024-07-21 12:10:08.503628] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:28:10.133 00:28:10.133 real 0m32.611s 00:28:10.133 user 0m52.816s 00:28:10.133 sys 0m3.250s 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:28:10.133 ************************************ 00:28:10.133 END TEST raid_rebuild_test_sb_io 00:28:10.133 ************************************ 00:28:10.133 12:10:08 bdev_raid -- bdev/bdev_raid.sh@876 -- # for n in 2 4 00:28:10.133 12:10:08 bdev_raid -- bdev/bdev_raid.sh@877 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:28:10.133 12:10:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:28:10.133 12:10:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:10.133 12:10:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:10.133 ************************************ 00:28:10.133 START TEST raid_rebuild_test 00:28:10.133 ************************************ 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false 
false true 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:10.133 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=157305 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 157305 /var/tmp/spdk-raid.sock 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 157305 ']' 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:10.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:10.134 12:10:08 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:10.134 [2024-07-21 12:10:08.977003] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:10.134 [2024-07-21 12:10:08.977516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157305 ] 00:28:10.134 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:10.134 Zero copy mechanism will not be used. 00:28:10.392 [2024-07-21 12:10:09.136045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:10.392 [2024-07-21 12:10:09.253167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.650 [2024-07-21 12:10:09.327932] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:11.216 12:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:11.216 12:10:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:28:11.216 12:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:11.216 12:10:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:11.474 BaseBdev1_malloc 00:28:11.474 12:10:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:11.733 [2024-07-21 12:10:10.415885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:11.733 [2024-07-21 12:10:10.416398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:11.733 [2024-07-21 12:10:10.416652] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:28:11.733 [2024-07-21 12:10:10.416870] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:11.733 [2024-07-21 12:10:10.419786] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:11.733 [2024-07-21 12:10:10.419976] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:11.733 BaseBdev1 00:28:11.733 12:10:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:11.733 12:10:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:11.992 BaseBdev2_malloc 00:28:11.992 12:10:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:28:12.251 [2024-07-21 12:10:10.904247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:12.251 [2024-07-21 12:10:10.904686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.251 [2024-07-21 12:10:10.904927] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:12.251 [2024-07-21 12:10:10.905136] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.251 [2024-07-21 12:10:10.908147] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.251 [2024-07-21 12:10:10.908330] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:12.251 BaseBdev2 00:28:12.251 12:10:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:12.251 12:10:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:12.510 BaseBdev3_malloc 00:28:12.510 12:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:12.781 [2024-07-21 12:10:11.381464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:12.781 [2024-07-21 12:10:11.381935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:12.781 [2024-07-21 12:10:11.382030] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:28:12.781 [2024-07-21 12:10:11.382412] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:12.781 [2024-07-21 12:10:11.385212] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:12.781 [2024-07-21 12:10:11.385399] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:12.781 BaseBdev3 00:28:12.781 12:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:12.781 12:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:13.040 BaseBdev4_malloc 00:28:13.040 12:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:13.040 [2024-07-21 12:10:11.868865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:13.040 [2024-07-21 12:10:11.869362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:13.040 [2024-07-21 12:10:11.869526] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:13.040 [2024-07-21 12:10:11.869679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:13.040 [2024-07-21 12:10:11.872354] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:13.040 [2024-07-21 12:10:11.872536] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:13.040 BaseBdev4 00:28:13.040 12:10:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b spare_malloc 00:28:13.298 spare_malloc 00:28:13.299 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:13.557 spare_delay 00:28:13.557 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:13.816 [2024-07-21 12:10:12.604003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:13.816 [2024-07-21 12:10:12.604460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:13.816 [2024-07-21 12:10:12.604620] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:13.816 [2024-07-21 12:10:12.604774] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:13.816 [2024-07-21 12:10:12.607487] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:13.816 [2024-07-21 12:10:12.607667] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:13.816 spare 00:28:13.816 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:14.074 [2024-07-21 12:10:12.872068] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:14.074 [2024-07-21 12:10:12.874367] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:14.074 [2024-07-21 12:10:12.874581] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:14.074 [2024-07-21 12:10:12.874717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:14.074 [2024-07-21 12:10:12.874923] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:28:14.074 [2024-07-21 12:10:12.875021] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:14.074 [2024-07-21 12:10:12.875326] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:28:14.074 [2024-07-21 12:10:12.875875] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:28:14.074 [2024-07-21 12:10:12.876005] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:28:14.074 [2024-07-21 12:10:12.876318] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
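For reference, the raid1 stack this test has just assembled can be condensed to the rpc.py calls already traced above; a minimal sketch, assuming the same socket path and bdev names shown in the trace (the loop condensation and the final '.state' jq filter are illustrative, not literal lines from bdev_raid.sh):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev${i}_malloc      # 32 MB malloc bdev, 512-byte blocks
        $RPC bdev_passthru_create -b BaseBdev${i}_malloc -p BaseBdev${i}
    done
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1
    # the "spare" chain (malloc -> delay -> passthru) used later as the rebuild target
    $RPC bdev_malloc_create 32 512 -b spare_malloc
    $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $RPC bdev_passthru_create -b spare_delay -p spare
    # verify_raid_bdev_state reads the array back and filters it with jq
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'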
00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:14.074 12:10:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:14.332 12:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:14.332 "name": "raid_bdev1", 00:28:14.332 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:14.332 "strip_size_kb": 0, 00:28:14.332 "state": "online", 00:28:14.332 "raid_level": "raid1", 00:28:14.332 "superblock": false, 00:28:14.332 "num_base_bdevs": 4, 00:28:14.332 "num_base_bdevs_discovered": 4, 00:28:14.332 "num_base_bdevs_operational": 4, 00:28:14.332 "base_bdevs_list": [ 00:28:14.332 { 00:28:14.332 "name": "BaseBdev1", 00:28:14.332 "uuid": "09454bd8-0def-5d67-a51b-c537047a52f0", 00:28:14.332 "is_configured": true, 00:28:14.332 "data_offset": 0, 00:28:14.332 "data_size": 65536 00:28:14.332 }, 00:28:14.332 { 00:28:14.332 "name": "BaseBdev2", 00:28:14.332 "uuid": "98094e86-878b-5369-ad00-0b7cdb85c704", 00:28:14.332 "is_configured": true, 00:28:14.332 "data_offset": 0, 00:28:14.332 "data_size": 65536 00:28:14.332 }, 00:28:14.332 { 00:28:14.332 "name": "BaseBdev3", 00:28:14.332 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:14.332 "is_configured": true, 00:28:14.332 "data_offset": 0, 00:28:14.332 "data_size": 65536 00:28:14.332 }, 00:28:14.332 { 00:28:14.332 "name": "BaseBdev4", 00:28:14.332 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:14.332 "is_configured": true, 00:28:14.332 "data_offset": 0, 00:28:14.332 "data_size": 65536 00:28:14.332 } 00:28:14.332 ] 00:28:14.332 }' 00:28:14.332 12:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:14.332 12:10:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:14.897 12:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:14.897 12:10:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:15.154 [2024-07-21 12:10:13.988913] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:15.154 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:28:15.154 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:15.154 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 
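Before the rebuild path is exercised, the array is exposed as a local block device and filled with data; a condensed sketch of the commands traced below, using the same socket, device node, and sizes reported by the dd output that follows (annotations are illustrative):

    # export raid_bdev1 over NBD and write the full 32 MiB (65536 x 512-byte blocks) with random data
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
    dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
    # detach the NBD device again before a base bdev is removed to trigger the rebuild onto "spare"
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0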
00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:15.721 [2024-07-21 12:10:14.520909] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:15.721 /dev/nbd0 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:15.721 1+0 records in 00:28:15.721 1+0 records out 00:28:15.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00074317 s, 5.5 MB/s 00:28:15.721 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:15.980 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:28:15.980 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:15.980 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:15.980 12:10:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:28:15.980 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:15.980 12:10:14 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:15.980 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:15.980 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:15.980 12:10:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 
count=65536 oflag=direct 00:28:24.096 65536+0 records in 00:28:24.096 65536+0 records out 00:28:24.096 33554432 bytes (34 MB, 32 MiB) copied, 6.93298 s, 4.8 MB/s 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:24.096 [2024-07-21 12:10:21.813252] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:24.096 12:10:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:24.096 [2024-07-21 12:10:22.044900] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.096 12:10:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:24.096 "name": "raid_bdev1", 00:28:24.096 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:24.096 "strip_size_kb": 0, 00:28:24.096 "state": "online", 00:28:24.096 "raid_level": "raid1", 00:28:24.096 "superblock": false, 00:28:24.096 "num_base_bdevs": 4, 00:28:24.096 "num_base_bdevs_discovered": 3, 00:28:24.096 "num_base_bdevs_operational": 3, 00:28:24.096 "base_bdevs_list": [ 00:28:24.096 { 00:28:24.096 "name": null, 00:28:24.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:24.096 "is_configured": false, 00:28:24.096 "data_offset": 0, 00:28:24.096 "data_size": 65536 00:28:24.096 }, 00:28:24.096 { 00:28:24.096 "name": "BaseBdev2", 00:28:24.096 "uuid": "98094e86-878b-5369-ad00-0b7cdb85c704", 00:28:24.096 "is_configured": true, 00:28:24.096 "data_offset": 0, 00:28:24.096 "data_size": 65536 00:28:24.096 }, 00:28:24.096 { 00:28:24.096 "name": "BaseBdev3", 00:28:24.096 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:24.096 "is_configured": true, 00:28:24.096 "data_offset": 0, 00:28:24.096 "data_size": 65536 00:28:24.096 }, 00:28:24.096 { 00:28:24.096 "name": "BaseBdev4", 00:28:24.096 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:24.096 "is_configured": true, 00:28:24.096 "data_offset": 0, 00:28:24.096 "data_size": 65536 00:28:24.096 } 00:28:24.096 ] 00:28:24.096 }' 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.096 12:10:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:24.354 [2024-07-21 12:10:23.153183] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:24.354 [2024-07-21 12:10:23.159289] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09a30 00:28:24.354 [2024-07-21 12:10:23.161891] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:24.354 12:10:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:25.727 "name": "raid_bdev1", 00:28:25.727 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:25.727 "strip_size_kb": 0, 00:28:25.727 "state": "online", 00:28:25.727 "raid_level": "raid1", 00:28:25.727 "superblock": false, 00:28:25.727 "num_base_bdevs": 4, 00:28:25.727 "num_base_bdevs_discovered": 4, 00:28:25.727 "num_base_bdevs_operational": 4, 
00:28:25.727 "process": { 00:28:25.727 "type": "rebuild", 00:28:25.727 "target": "spare", 00:28:25.727 "progress": { 00:28:25.727 "blocks": 24576, 00:28:25.727 "percent": 37 00:28:25.727 } 00:28:25.727 }, 00:28:25.727 "base_bdevs_list": [ 00:28:25.727 { 00:28:25.727 "name": "spare", 00:28:25.727 "uuid": "402d51f9-6b55-5622-9a1e-775edad2eb53", 00:28:25.727 "is_configured": true, 00:28:25.727 "data_offset": 0, 00:28:25.727 "data_size": 65536 00:28:25.727 }, 00:28:25.727 { 00:28:25.727 "name": "BaseBdev2", 00:28:25.727 "uuid": "98094e86-878b-5369-ad00-0b7cdb85c704", 00:28:25.727 "is_configured": true, 00:28:25.727 "data_offset": 0, 00:28:25.727 "data_size": 65536 00:28:25.727 }, 00:28:25.727 { 00:28:25.727 "name": "BaseBdev3", 00:28:25.727 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:25.727 "is_configured": true, 00:28:25.727 "data_offset": 0, 00:28:25.727 "data_size": 65536 00:28:25.727 }, 00:28:25.727 { 00:28:25.727 "name": "BaseBdev4", 00:28:25.727 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:25.727 "is_configured": true, 00:28:25.727 "data_offset": 0, 00:28:25.727 "data_size": 65536 00:28:25.727 } 00:28:25.727 ] 00:28:25.727 }' 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:25.727 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:25.985 [2024-07-21 12:10:24.796143] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:26.244 [2024-07-21 12:10:24.877027] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:26.244 [2024-07-21 12:10:24.877506] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:26.244 [2024-07-21 12:10:24.877648] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:26.244 [2024-07-21 12:10:24.877693] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:26.244 12:10:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:26.502 12:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:26.503 "name": "raid_bdev1", 00:28:26.503 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:26.503 "strip_size_kb": 0, 00:28:26.503 "state": "online", 00:28:26.503 "raid_level": "raid1", 00:28:26.503 "superblock": false, 00:28:26.503 "num_base_bdevs": 4, 00:28:26.503 "num_base_bdevs_discovered": 3, 00:28:26.503 "num_base_bdevs_operational": 3, 00:28:26.503 "base_bdevs_list": [ 00:28:26.503 { 00:28:26.503 "name": null, 00:28:26.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:26.503 "is_configured": false, 00:28:26.503 "data_offset": 0, 00:28:26.503 "data_size": 65536 00:28:26.503 }, 00:28:26.503 { 00:28:26.503 "name": "BaseBdev2", 00:28:26.503 "uuid": "98094e86-878b-5369-ad00-0b7cdb85c704", 00:28:26.503 "is_configured": true, 00:28:26.503 "data_offset": 0, 00:28:26.503 "data_size": 65536 00:28:26.503 }, 00:28:26.503 { 00:28:26.503 "name": "BaseBdev3", 00:28:26.503 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:26.503 "is_configured": true, 00:28:26.503 "data_offset": 0, 00:28:26.503 "data_size": 65536 00:28:26.503 }, 00:28:26.503 { 00:28:26.503 "name": "BaseBdev4", 00:28:26.503 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:26.503 "is_configured": true, 00:28:26.503 "data_offset": 0, 00:28:26.503 "data_size": 65536 00:28:26.503 } 00:28:26.503 ] 00:28:26.503 }' 00:28:26.503 12:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:26.503 12:10:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.069 12:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:27.069 12:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:27.069 12:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:27.069 12:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:27.069 12:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:27.069 12:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:27.069 12:10:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:27.327 12:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:27.327 "name": "raid_bdev1", 00:28:27.327 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:27.327 "strip_size_kb": 0, 00:28:27.327 "state": "online", 00:28:27.327 "raid_level": "raid1", 00:28:27.327 "superblock": false, 00:28:27.327 "num_base_bdevs": 4, 00:28:27.327 "num_base_bdevs_discovered": 3, 00:28:27.327 "num_base_bdevs_operational": 3, 00:28:27.327 "base_bdevs_list": [ 00:28:27.327 { 00:28:27.327 "name": null, 00:28:27.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:27.327 "is_configured": false, 00:28:27.327 "data_offset": 0, 00:28:27.327 "data_size": 65536 00:28:27.327 }, 00:28:27.327 { 00:28:27.327 "name": "BaseBdev2", 00:28:27.327 "uuid": "98094e86-878b-5369-ad00-0b7cdb85c704", 00:28:27.327 "is_configured": true, 00:28:27.327 
"data_offset": 0, 00:28:27.327 "data_size": 65536 00:28:27.327 }, 00:28:27.327 { 00:28:27.327 "name": "BaseBdev3", 00:28:27.327 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:27.327 "is_configured": true, 00:28:27.327 "data_offset": 0, 00:28:27.327 "data_size": 65536 00:28:27.327 }, 00:28:27.327 { 00:28:27.327 "name": "BaseBdev4", 00:28:27.327 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:27.327 "is_configured": true, 00:28:27.327 "data_offset": 0, 00:28:27.327 "data_size": 65536 00:28:27.327 } 00:28:27.327 ] 00:28:27.327 }' 00:28:27.327 12:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:27.327 12:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:27.327 12:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:27.327 12:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:27.327 12:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:27.585 [2024-07-21 12:10:26.432900] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:27.585 [2024-07-21 12:10:26.439156] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:28:27.585 [2024-07-21 12:10:26.441725] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:27.843 12:10:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:28.775 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:28.775 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:28.775 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:28.775 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:28.775 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:28.775 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:28.775 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:29.033 "name": "raid_bdev1", 00:28:29.033 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:29.033 "strip_size_kb": 0, 00:28:29.033 "state": "online", 00:28:29.033 "raid_level": "raid1", 00:28:29.033 "superblock": false, 00:28:29.033 "num_base_bdevs": 4, 00:28:29.033 "num_base_bdevs_discovered": 4, 00:28:29.033 "num_base_bdevs_operational": 4, 00:28:29.033 "process": { 00:28:29.033 "type": "rebuild", 00:28:29.033 "target": "spare", 00:28:29.033 "progress": { 00:28:29.033 "blocks": 24576, 00:28:29.033 "percent": 37 00:28:29.033 } 00:28:29.033 }, 00:28:29.033 "base_bdevs_list": [ 00:28:29.033 { 00:28:29.033 "name": "spare", 00:28:29.033 "uuid": "402d51f9-6b55-5622-9a1e-775edad2eb53", 00:28:29.033 "is_configured": true, 00:28:29.033 "data_offset": 0, 00:28:29.033 "data_size": 65536 00:28:29.033 }, 00:28:29.033 { 00:28:29.033 "name": "BaseBdev2", 00:28:29.033 "uuid": "98094e86-878b-5369-ad00-0b7cdb85c704", 00:28:29.033 "is_configured": true, 
00:28:29.033 "data_offset": 0, 00:28:29.033 "data_size": 65536 00:28:29.033 }, 00:28:29.033 { 00:28:29.033 "name": "BaseBdev3", 00:28:29.033 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:29.033 "is_configured": true, 00:28:29.033 "data_offset": 0, 00:28:29.033 "data_size": 65536 00:28:29.033 }, 00:28:29.033 { 00:28:29.033 "name": "BaseBdev4", 00:28:29.033 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:29.033 "is_configured": true, 00:28:29.033 "data_offset": 0, 00:28:29.033 "data_size": 65536 00:28:29.033 } 00:28:29.033 ] 00:28:29.033 }' 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:28:29.033 12:10:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:29.291 [2024-07-21 12:10:28.080486] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:29.291 [2024-07-21 12:10:28.154568] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09bd0 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:29.561 "name": "raid_bdev1", 00:28:29.561 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:29.561 "strip_size_kb": 0, 00:28:29.561 "state": "online", 00:28:29.561 "raid_level": "raid1", 00:28:29.561 "superblock": false, 00:28:29.561 "num_base_bdevs": 4, 00:28:29.561 "num_base_bdevs_discovered": 3, 00:28:29.561 "num_base_bdevs_operational": 3, 00:28:29.561 "process": { 00:28:29.561 "type": "rebuild", 00:28:29.561 "target": "spare", 00:28:29.561 "progress": { 00:28:29.561 "blocks": 38912, 00:28:29.561 "percent": 59 00:28:29.561 } 
00:28:29.561 }, 00:28:29.561 "base_bdevs_list": [ 00:28:29.561 { 00:28:29.561 "name": "spare", 00:28:29.561 "uuid": "402d51f9-6b55-5622-9a1e-775edad2eb53", 00:28:29.561 "is_configured": true, 00:28:29.561 "data_offset": 0, 00:28:29.561 "data_size": 65536 00:28:29.561 }, 00:28:29.561 { 00:28:29.561 "name": null, 00:28:29.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:29.561 "is_configured": false, 00:28:29.561 "data_offset": 0, 00:28:29.561 "data_size": 65536 00:28:29.561 }, 00:28:29.561 { 00:28:29.561 "name": "BaseBdev3", 00:28:29.561 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:29.561 "is_configured": true, 00:28:29.561 "data_offset": 0, 00:28:29.561 "data_size": 65536 00:28:29.561 }, 00:28:29.561 { 00:28:29.561 "name": "BaseBdev4", 00:28:29.561 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:29.561 "is_configured": true, 00:28:29.561 "data_offset": 0, 00:28:29.561 "data_size": 65536 00:28:29.561 } 00:28:29.561 ] 00:28:29.561 }' 00:28:29.561 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=913 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:29.819 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.077 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:30.077 "name": "raid_bdev1", 00:28:30.077 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:30.077 "strip_size_kb": 0, 00:28:30.077 "state": "online", 00:28:30.077 "raid_level": "raid1", 00:28:30.077 "superblock": false, 00:28:30.077 "num_base_bdevs": 4, 00:28:30.077 "num_base_bdevs_discovered": 3, 00:28:30.077 "num_base_bdevs_operational": 3, 00:28:30.077 "process": { 00:28:30.077 "type": "rebuild", 00:28:30.077 "target": "spare", 00:28:30.077 "progress": { 00:28:30.077 "blocks": 45056, 00:28:30.077 "percent": 68 00:28:30.077 } 00:28:30.077 }, 00:28:30.077 "base_bdevs_list": [ 00:28:30.077 { 00:28:30.077 "name": "spare", 00:28:30.077 "uuid": "402d51f9-6b55-5622-9a1e-775edad2eb53", 00:28:30.077 "is_configured": true, 00:28:30.077 "data_offset": 0, 00:28:30.077 "data_size": 65536 00:28:30.077 }, 00:28:30.077 { 00:28:30.077 "name": null, 00:28:30.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:30.077 "is_configured": false, 00:28:30.077 "data_offset": 0, 
00:28:30.077 "data_size": 65536 00:28:30.077 }, 00:28:30.077 { 00:28:30.077 "name": "BaseBdev3", 00:28:30.077 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:30.077 "is_configured": true, 00:28:30.077 "data_offset": 0, 00:28:30.077 "data_size": 65536 00:28:30.077 }, 00:28:30.077 { 00:28:30.077 "name": "BaseBdev4", 00:28:30.077 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:30.077 "is_configured": true, 00:28:30.077 "data_offset": 0, 00:28:30.077 "data_size": 65536 00:28:30.078 } 00:28:30.078 ] 00:28:30.078 }' 00:28:30.078 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:30.078 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:30.078 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:30.078 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:30.078 12:10:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:31.014 [2024-07-21 12:10:29.666811] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:31.014 [2024-07-21 12:10:29.667279] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:31.014 [2024-07-21 12:10:29.667501] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.014 12:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:31.014 12:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:31.014 12:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:31.014 12:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:31.014 12:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:31.014 12:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:31.014 12:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.014 12:10:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.273 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:31.273 "name": "raid_bdev1", 00:28:31.273 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:31.273 "strip_size_kb": 0, 00:28:31.273 "state": "online", 00:28:31.273 "raid_level": "raid1", 00:28:31.273 "superblock": false, 00:28:31.273 "num_base_bdevs": 4, 00:28:31.273 "num_base_bdevs_discovered": 3, 00:28:31.273 "num_base_bdevs_operational": 3, 00:28:31.273 "base_bdevs_list": [ 00:28:31.273 { 00:28:31.273 "name": "spare", 00:28:31.273 "uuid": "402d51f9-6b55-5622-9a1e-775edad2eb53", 00:28:31.273 "is_configured": true, 00:28:31.273 "data_offset": 0, 00:28:31.273 "data_size": 65536 00:28:31.273 }, 00:28:31.273 { 00:28:31.273 "name": null, 00:28:31.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.273 "is_configured": false, 00:28:31.273 "data_offset": 0, 00:28:31.273 "data_size": 65536 00:28:31.273 }, 00:28:31.273 { 00:28:31.273 "name": "BaseBdev3", 00:28:31.273 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:31.273 "is_configured": true, 00:28:31.273 "data_offset": 0, 00:28:31.273 "data_size": 65536 00:28:31.273 }, 
00:28:31.273 { 00:28:31.273 "name": "BaseBdev4", 00:28:31.273 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:31.273 "is_configured": true, 00:28:31.273 "data_offset": 0, 00:28:31.273 "data_size": 65536 00:28:31.273 } 00:28:31.273 ] 00:28:31.273 }' 00:28:31.273 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:31.273 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:31.273 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:31.532 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:31.532 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:28:31.532 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:31.532 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:31.532 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:31.532 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:31.532 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:31.532 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.532 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:31.791 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:31.791 "name": "raid_bdev1", 00:28:31.791 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:31.791 "strip_size_kb": 0, 00:28:31.791 "state": "online", 00:28:31.791 "raid_level": "raid1", 00:28:31.791 "superblock": false, 00:28:31.791 "num_base_bdevs": 4, 00:28:31.792 "num_base_bdevs_discovered": 3, 00:28:31.792 "num_base_bdevs_operational": 3, 00:28:31.792 "base_bdevs_list": [ 00:28:31.792 { 00:28:31.792 "name": "spare", 00:28:31.792 "uuid": "402d51f9-6b55-5622-9a1e-775edad2eb53", 00:28:31.792 "is_configured": true, 00:28:31.792 "data_offset": 0, 00:28:31.792 "data_size": 65536 00:28:31.792 }, 00:28:31.792 { 00:28:31.792 "name": null, 00:28:31.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:31.792 "is_configured": false, 00:28:31.792 "data_offset": 0, 00:28:31.792 "data_size": 65536 00:28:31.792 }, 00:28:31.792 { 00:28:31.792 "name": "BaseBdev3", 00:28:31.792 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:31.792 "is_configured": true, 00:28:31.792 "data_offset": 0, 00:28:31.792 "data_size": 65536 00:28:31.792 }, 00:28:31.792 { 00:28:31.792 "name": "BaseBdev4", 00:28:31.792 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:31.792 "is_configured": true, 00:28:31.792 "data_offset": 0, 00:28:31.792 "data_size": 65536 00:28:31.792 } 00:28:31.792 ] 00:28:31.792 }' 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:31.792 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:32.051 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:32.051 "name": "raid_bdev1", 00:28:32.051 "uuid": "e604f74d-1c30-4780-9d35-81cd0131af05", 00:28:32.051 "strip_size_kb": 0, 00:28:32.051 "state": "online", 00:28:32.051 "raid_level": "raid1", 00:28:32.051 "superblock": false, 00:28:32.051 "num_base_bdevs": 4, 00:28:32.051 "num_base_bdevs_discovered": 3, 00:28:32.051 "num_base_bdevs_operational": 3, 00:28:32.051 "base_bdevs_list": [ 00:28:32.051 { 00:28:32.051 "name": "spare", 00:28:32.051 "uuid": "402d51f9-6b55-5622-9a1e-775edad2eb53", 00:28:32.051 "is_configured": true, 00:28:32.051 "data_offset": 0, 00:28:32.051 "data_size": 65536 00:28:32.051 }, 00:28:32.051 { 00:28:32.051 "name": null, 00:28:32.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:32.051 "is_configured": false, 00:28:32.051 "data_offset": 0, 00:28:32.051 "data_size": 65536 00:28:32.051 }, 00:28:32.051 { 00:28:32.051 "name": "BaseBdev3", 00:28:32.051 "uuid": "391004f6-327d-5e50-a53a-118102660b08", 00:28:32.051 "is_configured": true, 00:28:32.051 "data_offset": 0, 00:28:32.051 "data_size": 65536 00:28:32.051 }, 00:28:32.051 { 00:28:32.051 "name": "BaseBdev4", 00:28:32.051 "uuid": "4f3d36f5-7879-5e3c-869e-a6a6220a5e49", 00:28:32.051 "is_configured": true, 00:28:32.051 "data_offset": 0, 00:28:32.051 "data_size": 65536 00:28:32.051 } 00:28:32.051 ] 00:28:32.051 }' 00:28:32.051 12:10:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:32.051 12:10:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.619 12:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:32.877 [2024-07-21 12:10:31.582527] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:32.877 [2024-07-21 12:10:31.582938] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:32.877 [2024-07-21 12:10:31.583201] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:32.877 [2024-07-21 12:10:31.583442] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs 
is 0, going to free all in destruct 00:28:32.877 [2024-07-21 12:10:31.583554] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:28:32.877 12:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:32.877 12:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:33.136 12:10:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:33.395 /dev/nbd0 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:33.395 1+0 records in 00:28:33.395 1+0 records out 00:28:33.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535748 s, 7.6 MB/s 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:33.395 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:33.654 /dev/nbd1 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # break 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:33.654 1+0 records in 00:28:33.654 1+0 records out 00:28:33.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587248 s, 7.0 MB/s 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:33.654 12:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:28:33.913 12:10:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:33.913 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:33.913 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:33.913 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:33.913 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:28:33.913 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:28:33.913 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:34.171 12:10:32 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 157305 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 157305 ']' 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 157305 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 157305 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 157305' 00:28:34.429 killing process with pid 157305 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@965 -- # kill 157305 00:28:34.429 Received shutdown signal, test time was about 60.000000 seconds 00:28:34.429 00:28:34.429 Latency(us) 00:28:34.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.429 
=================================================================================================================== 00:28:34.429 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:34.429 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # wait 157305 00:28:34.429 [2024-07-21 12:10:33.170123] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:34.429 [2024-07-21 12:10:33.230198] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:34.994 12:10:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:28:34.995 00:28:34.995 real 0m24.649s 00:28:34.995 user 0m34.744s 00:28:34.995 sys 0m4.207s 00:28:34.995 ************************************ 00:28:34.995 END TEST raid_rebuild_test 00:28:34.995 ************************************ 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.995 12:10:33 bdev_raid -- bdev/bdev_raid.sh@878 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:28:34.995 12:10:33 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:28:34.995 12:10:33 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:34.995 12:10:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:34.995 ************************************ 00:28:34.995 START TEST raid_rebuild_test_sb 00:28:34.995 ************************************ 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true false true 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:28:34.995 12:10:33 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=157872 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 157872 /var/tmp/spdk-raid.sock 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 157872 ']' 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:28:34.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:34.995 12:10:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:34.995 [2024-07-21 12:10:33.682136] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:28:34.995 I/O size of 3145728 is greater than zero copy threshold (65536). 00:28:34.995 Zero copy mechanism will not be used. 
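The raid_rebuild_test_sb run that starts here drives a dedicated bdevperf instance over /var/tmp/spdk-raid.sock. As a minimal sketch of the base-bdev setup the trace below performs, assuming that bdevperf process is already up and listening on the socket (the loop layout is illustrative; the script itself iterates "${base_bdevs[@]}"):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2 3 4; do
        # 32 MB malloc bdev with 512-byte blocks, wrapped in a passthru bdev
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        "$rpc" -s "$sock" bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # Assemble the raid1 bdev from the four bases; -s writes an on-disk superblock,
    # which is what distinguishes this _sb variant from the preceding raid_rebuild_test
    # (superblock false, data_offset 0 there vs. superblock true, data_offset 2048 here).
    "$rpc" -s "$sock" bdev_raid_create -s -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

The trace then builds the spare the same way (spare_malloc behind a spare_delay and a passthru named spare) before exercising base-bdev removal, re-add, and the rebuild verification loop.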
00:28:34.995 [2024-07-21 12:10:33.682343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157872 ] 00:28:34.995 [2024-07-21 12:10:33.833622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.252 [2024-07-21 12:10:33.939634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.252 [2024-07-21 12:10:34.012388] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:35.818 12:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:35.818 12:10:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:28:35.818 12:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:35.818 12:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:36.077 BaseBdev1_malloc 00:28:36.077 12:10:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:28:36.335 [2024-07-21 12:10:35.143541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:28:36.335 [2024-07-21 12:10:35.143698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.335 [2024-07-21 12:10:35.143771] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:28:36.335 [2024-07-21 12:10:35.143835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.335 [2024-07-21 12:10:35.146542] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.335 [2024-07-21 12:10:35.146622] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:36.335 BaseBdev1 00:28:36.335 12:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:36.335 12:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:36.593 BaseBdev2_malloc 00:28:36.593 12:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:28:36.851 [2024-07-21 12:10:35.617746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:28:36.851 [2024-07-21 12:10:35.617876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:36.851 [2024-07-21 12:10:35.617950] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:36.851 [2024-07-21 12:10:35.617994] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:36.851 [2024-07-21 12:10:35.620470] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:36.851 [2024-07-21 12:10:35.620519] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:36.851 BaseBdev2 00:28:36.851 12:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:28:36.851 12:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:37.110 BaseBdev3_malloc 00:28:37.110 12:10:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:28:37.369 [2024-07-21 12:10:36.072422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:28:37.369 [2024-07-21 12:10:36.072588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:37.369 [2024-07-21 12:10:36.072638] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:28:37.369 [2024-07-21 12:10:36.072719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.369 [2024-07-21 12:10:36.075250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.369 [2024-07-21 12:10:36.075317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:37.369 BaseBdev3 00:28:37.369 12:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:28:37.369 12:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:28:37.626 BaseBdev4_malloc 00:28:37.626 12:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:28:37.884 [2024-07-21 12:10:36.502970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:28:37.884 [2024-07-21 12:10:36.503101] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:37.884 [2024-07-21 12:10:36.503162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:28:37.884 [2024-07-21 12:10:36.503214] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:37.884 [2024-07-21 12:10:36.505856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:37.884 [2024-07-21 12:10:36.505909] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:28:37.884 BaseBdev4 00:28:37.884 12:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:28:38.141 spare_malloc 00:28:38.141 12:10:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:28:38.141 spare_delay 00:28:38.141 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:38.403 [2024-07-21 12:10:37.193330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:38.403 [2024-07-21 12:10:37.193499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:38.403 [2024-07-21 12:10:37.193546] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009f80 00:28:38.403 [2024-07-21 12:10:37.193594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:38.403 [2024-07-21 12:10:37.196331] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:38.403 [2024-07-21 12:10:37.196399] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:38.403 spare 00:28:38.403 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:28:38.673 [2024-07-21 12:10:37.421786] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:38.673 [2024-07-21 12:10:37.424518] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:38.673 [2024-07-21 12:10:37.424621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:38.673 [2024-07-21 12:10:37.424721] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:38.673 [2024-07-21 12:10:37.425022] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:28:38.673 [2024-07-21 12:10:37.425046] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:38.673 [2024-07-21 12:10:37.425301] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:28:38.673 [2024-07-21 12:10:37.425858] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:28:38.673 [2024-07-21 12:10:37.425879] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:28:38.673 [2024-07-21 12:10:37.426157] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:38.673 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:38.950 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:38.950 "name": "raid_bdev1", 00:28:38.950 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:38.950 
"strip_size_kb": 0, 00:28:38.950 "state": "online", 00:28:38.950 "raid_level": "raid1", 00:28:38.950 "superblock": true, 00:28:38.950 "num_base_bdevs": 4, 00:28:38.950 "num_base_bdevs_discovered": 4, 00:28:38.950 "num_base_bdevs_operational": 4, 00:28:38.950 "base_bdevs_list": [ 00:28:38.950 { 00:28:38.950 "name": "BaseBdev1", 00:28:38.950 "uuid": "bf8fff32-9d21-5221-8802-02e01c9c3b7e", 00:28:38.950 "is_configured": true, 00:28:38.950 "data_offset": 2048, 00:28:38.950 "data_size": 63488 00:28:38.950 }, 00:28:38.950 { 00:28:38.950 "name": "BaseBdev2", 00:28:38.950 "uuid": "dfb6f879-3685-5c97-9660-d3c2db3b5cb0", 00:28:38.950 "is_configured": true, 00:28:38.950 "data_offset": 2048, 00:28:38.950 "data_size": 63488 00:28:38.950 }, 00:28:38.950 { 00:28:38.950 "name": "BaseBdev3", 00:28:38.950 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:38.950 "is_configured": true, 00:28:38.950 "data_offset": 2048, 00:28:38.950 "data_size": 63488 00:28:38.950 }, 00:28:38.950 { 00:28:38.950 "name": "BaseBdev4", 00:28:38.950 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:38.950 "is_configured": true, 00:28:38.950 "data_offset": 2048, 00:28:38.950 "data_size": 63488 00:28:38.950 } 00:28:38.950 ] 00:28:38.950 }' 00:28:38.950 12:10:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:38.950 12:10:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:39.527 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:28:39.527 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:28:39.786 [2024-07-21 12:10:38.574677] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:39.786 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:28:39.786 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:39.786 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:40.044 12:10:38 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.044 12:10:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:28:40.303 [2024-07-21 12:10:39.006702] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:40.303 /dev/nbd0 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:40.303 1+0 records in 00:28:40.303 1+0 records out 00:28:40.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319573 s, 12.8 MB/s 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:28:40.303 12:10:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:28:48.421 63488+0 records in 00:28:48.421 63488+0 records out 00:28:48.421 32505856 bytes (33 MB, 31 MiB) copied, 6.75918 s, 4.8 MB/s 00:28:48.421 12:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:28:48.421 12:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:48.421 12:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:28:48.421 12:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:48.421 12:10:45 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:48.421 12:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:48.421 12:10:45 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:48.421 [2024-07-21 12:10:46.090970] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:28:48.421 [2024-07-21 12:10:46.290700] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:48.421 "name": "raid_bdev1", 00:28:48.421 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:48.421 "strip_size_kb": 0, 00:28:48.421 "state": "online", 00:28:48.421 "raid_level": "raid1", 00:28:48.421 "superblock": true, 00:28:48.421 "num_base_bdevs": 4, 00:28:48.421 "num_base_bdevs_discovered": 3, 00:28:48.421 "num_base_bdevs_operational": 3, 00:28:48.421 "base_bdevs_list": [ 00:28:48.421 { 00:28:48.421 "name": null, 00:28:48.421 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:48.421 "is_configured": false, 00:28:48.421 "data_offset": 2048, 00:28:48.421 "data_size": 63488 00:28:48.421 }, 00:28:48.421 { 00:28:48.421 "name": "BaseBdev2", 00:28:48.421 "uuid": "dfb6f879-3685-5c97-9660-d3c2db3b5cb0", 00:28:48.421 "is_configured": true, 00:28:48.421 "data_offset": 2048, 00:28:48.421 "data_size": 63488 00:28:48.421 }, 00:28:48.421 { 00:28:48.421 "name": "BaseBdev3", 00:28:48.421 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:48.421 "is_configured": true, 00:28:48.421 "data_offset": 2048, 00:28:48.421 "data_size": 63488 00:28:48.421 }, 00:28:48.421 { 00:28:48.421 "name": "BaseBdev4", 00:28:48.421 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:48.421 "is_configured": true, 00:28:48.421 "data_offset": 2048, 00:28:48.421 "data_size": 63488 00:28:48.421 } 00:28:48.421 ] 00:28:48.421 }' 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:48.421 12:10:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.422 12:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:48.680 [2024-07-21 12:10:47.426954] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:48.680 [2024-07-21 12:10:47.432622] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca31c0 00:28:48.680 [2024-07-21 12:10:47.434949] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:48.680 12:10:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:28:49.616 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:49.616 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:49.616 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:49.616 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:49.616 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:49.616 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:49.616 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:49.875 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:49.876 "name": "raid_bdev1", 00:28:49.876 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:49.876 "strip_size_kb": 0, 00:28:49.876 "state": "online", 00:28:49.876 "raid_level": "raid1", 00:28:49.876 "superblock": true, 00:28:49.876 "num_base_bdevs": 4, 00:28:49.876 "num_base_bdevs_discovered": 4, 00:28:49.876 "num_base_bdevs_operational": 4, 00:28:49.876 "process": { 00:28:49.876 "type": "rebuild", 00:28:49.876 "target": "spare", 00:28:49.876 "progress": { 00:28:49.876 "blocks": 24576, 00:28:49.876 "percent": 38 00:28:49.876 } 00:28:49.876 }, 00:28:49.876 "base_bdevs_list": [ 00:28:49.876 { 00:28:49.876 "name": "spare", 00:28:49.876 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:28:49.876 "is_configured": true, 00:28:49.876 "data_offset": 2048, 00:28:49.876 "data_size": 63488 00:28:49.876 }, 00:28:49.876 { 
00:28:49.876 "name": "BaseBdev2", 00:28:49.876 "uuid": "dfb6f879-3685-5c97-9660-d3c2db3b5cb0", 00:28:49.876 "is_configured": true, 00:28:49.876 "data_offset": 2048, 00:28:49.876 "data_size": 63488 00:28:49.876 }, 00:28:49.876 { 00:28:49.876 "name": "BaseBdev3", 00:28:49.876 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:49.876 "is_configured": true, 00:28:49.876 "data_offset": 2048, 00:28:49.876 "data_size": 63488 00:28:49.876 }, 00:28:49.876 { 00:28:49.876 "name": "BaseBdev4", 00:28:49.876 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:49.876 "is_configured": true, 00:28:49.876 "data_offset": 2048, 00:28:49.876 "data_size": 63488 00:28:49.876 } 00:28:49.876 ] 00:28:49.876 }' 00:28:49.876 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:50.134 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:50.134 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:50.134 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:50.134 12:10:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:28:50.394 [2024-07-21 12:10:49.046205] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.394 [2024-07-21 12:10:49.048357] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:28:50.394 [2024-07-21 12:10:49.048461] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:50.394 [2024-07-21 12:10:49.048484] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:28:50.394 [2024-07-21 12:10:49.048493] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:50.394 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:50.653 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:50.653 "name": "raid_bdev1", 00:28:50.653 "uuid": 
"6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:50.653 "strip_size_kb": 0, 00:28:50.653 "state": "online", 00:28:50.653 "raid_level": "raid1", 00:28:50.653 "superblock": true, 00:28:50.653 "num_base_bdevs": 4, 00:28:50.653 "num_base_bdevs_discovered": 3, 00:28:50.653 "num_base_bdevs_operational": 3, 00:28:50.653 "base_bdevs_list": [ 00:28:50.653 { 00:28:50.653 "name": null, 00:28:50.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.653 "is_configured": false, 00:28:50.653 "data_offset": 2048, 00:28:50.653 "data_size": 63488 00:28:50.653 }, 00:28:50.653 { 00:28:50.653 "name": "BaseBdev2", 00:28:50.653 "uuid": "dfb6f879-3685-5c97-9660-d3c2db3b5cb0", 00:28:50.653 "is_configured": true, 00:28:50.653 "data_offset": 2048, 00:28:50.653 "data_size": 63488 00:28:50.653 }, 00:28:50.653 { 00:28:50.653 "name": "BaseBdev3", 00:28:50.653 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:50.653 "is_configured": true, 00:28:50.653 "data_offset": 2048, 00:28:50.653 "data_size": 63488 00:28:50.653 }, 00:28:50.653 { 00:28:50.653 "name": "BaseBdev4", 00:28:50.653 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:50.653 "is_configured": true, 00:28:50.653 "data_offset": 2048, 00:28:50.653 "data_size": 63488 00:28:50.653 } 00:28:50.653 ] 00:28:50.653 }' 00:28:50.653 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:50.653 12:10:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.221 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:51.221 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:51.221 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:51.221 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:51.221 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:51.221 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:51.221 12:10:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:51.480 12:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:51.480 "name": "raid_bdev1", 00:28:51.480 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:51.480 "strip_size_kb": 0, 00:28:51.480 "state": "online", 00:28:51.480 "raid_level": "raid1", 00:28:51.480 "superblock": true, 00:28:51.480 "num_base_bdevs": 4, 00:28:51.480 "num_base_bdevs_discovered": 3, 00:28:51.480 "num_base_bdevs_operational": 3, 00:28:51.480 "base_bdevs_list": [ 00:28:51.480 { 00:28:51.480 "name": null, 00:28:51.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.480 "is_configured": false, 00:28:51.480 "data_offset": 2048, 00:28:51.480 "data_size": 63488 00:28:51.480 }, 00:28:51.480 { 00:28:51.480 "name": "BaseBdev2", 00:28:51.480 "uuid": "dfb6f879-3685-5c97-9660-d3c2db3b5cb0", 00:28:51.480 "is_configured": true, 00:28:51.480 "data_offset": 2048, 00:28:51.480 "data_size": 63488 00:28:51.480 }, 00:28:51.480 { 00:28:51.480 "name": "BaseBdev3", 00:28:51.480 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:51.480 "is_configured": true, 00:28:51.480 "data_offset": 2048, 00:28:51.480 "data_size": 63488 00:28:51.480 }, 00:28:51.480 { 00:28:51.480 "name": "BaseBdev4", 00:28:51.480 
"uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:51.480 "is_configured": true, 00:28:51.480 "data_offset": 2048, 00:28:51.480 "data_size": 63488 00:28:51.480 } 00:28:51.480 ] 00:28:51.480 }' 00:28:51.480 12:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:51.480 12:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:51.480 12:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:51.480 12:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:51.480 12:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:28:51.739 [2024-07-21 12:10:50.455715] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:51.739 [2024-07-21 12:10:50.461414] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:28:51.739 [2024-07-21 12:10:50.463775] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:28:51.739 12:10:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:28:52.674 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:52.674 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:52.674 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:52.674 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:52.674 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:52.674 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:52.674 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:52.933 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:52.933 "name": "raid_bdev1", 00:28:52.933 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:52.933 "strip_size_kb": 0, 00:28:52.933 "state": "online", 00:28:52.933 "raid_level": "raid1", 00:28:52.933 "superblock": true, 00:28:52.933 "num_base_bdevs": 4, 00:28:52.933 "num_base_bdevs_discovered": 4, 00:28:52.933 "num_base_bdevs_operational": 4, 00:28:52.933 "process": { 00:28:52.933 "type": "rebuild", 00:28:52.933 "target": "spare", 00:28:52.933 "progress": { 00:28:52.933 "blocks": 24576, 00:28:52.933 "percent": 38 00:28:52.933 } 00:28:52.933 }, 00:28:52.933 "base_bdevs_list": [ 00:28:52.933 { 00:28:52.933 "name": "spare", 00:28:52.933 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:28:52.933 "is_configured": true, 00:28:52.933 "data_offset": 2048, 00:28:52.933 "data_size": 63488 00:28:52.933 }, 00:28:52.933 { 00:28:52.933 "name": "BaseBdev2", 00:28:52.933 "uuid": "dfb6f879-3685-5c97-9660-d3c2db3b5cb0", 00:28:52.933 "is_configured": true, 00:28:52.933 "data_offset": 2048, 00:28:52.933 "data_size": 63488 00:28:52.933 }, 00:28:52.933 { 00:28:52.933 "name": "BaseBdev3", 00:28:52.933 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:52.933 "is_configured": true, 00:28:52.933 "data_offset": 2048, 00:28:52.933 "data_size": 63488 00:28:52.933 
}, 00:28:52.933 { 00:28:52.933 "name": "BaseBdev4", 00:28:52.933 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:52.933 "is_configured": true, 00:28:52.933 "data_offset": 2048, 00:28:52.933 "data_size": 63488 00:28:52.933 } 00:28:52.933 ] 00:28:52.933 }' 00:28:52.933 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:52.933 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:52.933 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.191 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.191 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:28:53.191 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:28:53.191 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:28:53.191 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:28:53.191 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:28:53.191 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:28:53.191 12:10:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:28:53.450 [2024-07-21 12:10:52.098054] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:53.450 [2024-07-21 12:10:52.275340] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca3360 00:28:53.450 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:28:53.450 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:28:53.450 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.450 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.450 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:53.450 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:53.451 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.451 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:53.451 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.708 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:53.708 "name": "raid_bdev1", 00:28:53.708 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:53.708 "strip_size_kb": 0, 00:28:53.708 "state": "online", 00:28:53.708 "raid_level": "raid1", 00:28:53.708 "superblock": true, 00:28:53.708 "num_base_bdevs": 4, 00:28:53.708 "num_base_bdevs_discovered": 3, 00:28:53.708 "num_base_bdevs_operational": 3, 00:28:53.708 "process": { 00:28:53.708 "type": "rebuild", 00:28:53.708 "target": "spare", 00:28:53.708 "progress": { 00:28:53.708 "blocks": 38912, 00:28:53.708 "percent": 61 00:28:53.708 } 00:28:53.708 }, 00:28:53.708 
"base_bdevs_list": [ 00:28:53.708 { 00:28:53.708 "name": "spare", 00:28:53.708 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:28:53.708 "is_configured": true, 00:28:53.708 "data_offset": 2048, 00:28:53.708 "data_size": 63488 00:28:53.708 }, 00:28:53.708 { 00:28:53.708 "name": null, 00:28:53.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:53.708 "is_configured": false, 00:28:53.708 "data_offset": 2048, 00:28:53.708 "data_size": 63488 00:28:53.708 }, 00:28:53.708 { 00:28:53.708 "name": "BaseBdev3", 00:28:53.708 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:53.708 "is_configured": true, 00:28:53.708 "data_offset": 2048, 00:28:53.708 "data_size": 63488 00:28:53.708 }, 00:28:53.708 { 00:28:53.708 "name": "BaseBdev4", 00:28:53.708 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:53.708 "is_configured": true, 00:28:53.708 "data_offset": 2048, 00:28:53.708 "data_size": 63488 00:28:53.708 } 00:28:53.708 ] 00:28:53.708 }' 00:28:53.708 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:53.708 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:53.708 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=937 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:53.966 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:54.224 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:54.224 "name": "raid_bdev1", 00:28:54.224 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:54.224 "strip_size_kb": 0, 00:28:54.224 "state": "online", 00:28:54.224 "raid_level": "raid1", 00:28:54.224 "superblock": true, 00:28:54.224 "num_base_bdevs": 4, 00:28:54.224 "num_base_bdevs_discovered": 3, 00:28:54.224 "num_base_bdevs_operational": 3, 00:28:54.224 "process": { 00:28:54.224 "type": "rebuild", 00:28:54.224 "target": "spare", 00:28:54.224 "progress": { 00:28:54.224 "blocks": 45056, 00:28:54.224 "percent": 70 00:28:54.224 } 00:28:54.224 }, 00:28:54.224 "base_bdevs_list": [ 00:28:54.224 { 00:28:54.224 "name": "spare", 00:28:54.224 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:28:54.224 "is_configured": true, 00:28:54.224 "data_offset": 2048, 00:28:54.224 "data_size": 63488 00:28:54.224 }, 00:28:54.224 { 00:28:54.224 "name": null, 00:28:54.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:54.224 "is_configured": false, 
00:28:54.224 "data_offset": 2048, 00:28:54.224 "data_size": 63488 00:28:54.224 }, 00:28:54.224 { 00:28:54.224 "name": "BaseBdev3", 00:28:54.224 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:54.224 "is_configured": true, 00:28:54.224 "data_offset": 2048, 00:28:54.224 "data_size": 63488 00:28:54.224 }, 00:28:54.224 { 00:28:54.224 "name": "BaseBdev4", 00:28:54.224 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:54.224 "is_configured": true, 00:28:54.224 "data_offset": 2048, 00:28:54.224 "data_size": 63488 00:28:54.224 } 00:28:54.224 ] 00:28:54.224 }' 00:28:54.224 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:54.224 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:28:54.224 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:54.224 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:28:54.224 12:10:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:28:55.157 [2024-07-21 12:10:53.685115] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:28:55.157 [2024-07-21 12:10:53.685221] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:28:55.157 [2024-07-21 12:10:53.685418] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:55.157 12:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:28:55.157 12:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:28:55.157 12:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:55.157 12:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:28:55.157 12:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:28:55.157 12:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:55.157 12:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.157 12:10:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:55.414 "name": "raid_bdev1", 00:28:55.414 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:55.414 "strip_size_kb": 0, 00:28:55.414 "state": "online", 00:28:55.414 "raid_level": "raid1", 00:28:55.414 "superblock": true, 00:28:55.414 "num_base_bdevs": 4, 00:28:55.414 "num_base_bdevs_discovered": 3, 00:28:55.414 "num_base_bdevs_operational": 3, 00:28:55.414 "base_bdevs_list": [ 00:28:55.414 { 00:28:55.414 "name": "spare", 00:28:55.414 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:28:55.414 "is_configured": true, 00:28:55.414 "data_offset": 2048, 00:28:55.414 "data_size": 63488 00:28:55.414 }, 00:28:55.414 { 00:28:55.414 "name": null, 00:28:55.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.414 "is_configured": false, 00:28:55.414 "data_offset": 2048, 00:28:55.414 "data_size": 63488 00:28:55.414 }, 00:28:55.414 { 00:28:55.414 "name": "BaseBdev3", 00:28:55.414 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:55.414 "is_configured": true, 
00:28:55.414 "data_offset": 2048, 00:28:55.414 "data_size": 63488 00:28:55.414 }, 00:28:55.414 { 00:28:55.414 "name": "BaseBdev4", 00:28:55.414 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:55.414 "is_configured": true, 00:28:55.414 "data_offset": 2048, 00:28:55.414 "data_size": 63488 00:28:55.414 } 00:28:55.414 ] 00:28:55.414 }' 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:55.414 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.671 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:55.671 "name": "raid_bdev1", 00:28:55.672 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:55.672 "strip_size_kb": 0, 00:28:55.672 "state": "online", 00:28:55.672 "raid_level": "raid1", 00:28:55.672 "superblock": true, 00:28:55.672 "num_base_bdevs": 4, 00:28:55.672 "num_base_bdevs_discovered": 3, 00:28:55.672 "num_base_bdevs_operational": 3, 00:28:55.672 "base_bdevs_list": [ 00:28:55.672 { 00:28:55.672 "name": "spare", 00:28:55.672 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:28:55.672 "is_configured": true, 00:28:55.672 "data_offset": 2048, 00:28:55.672 "data_size": 63488 00:28:55.672 }, 00:28:55.672 { 00:28:55.672 "name": null, 00:28:55.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:55.672 "is_configured": false, 00:28:55.672 "data_offset": 2048, 00:28:55.672 "data_size": 63488 00:28:55.672 }, 00:28:55.672 { 00:28:55.672 "name": "BaseBdev3", 00:28:55.672 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:55.672 "is_configured": true, 00:28:55.672 "data_offset": 2048, 00:28:55.672 "data_size": 63488 00:28:55.672 }, 00:28:55.672 { 00:28:55.672 "name": "BaseBdev4", 00:28:55.672 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:55.672 "is_configured": true, 00:28:55.672 "data_offset": 2048, 00:28:55.672 "data_size": 63488 00:28:55.672 } 00:28:55.672 ] 00:28:55.672 }' 00:28:55.672 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:55.929 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:56.187 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:56.187 "name": "raid_bdev1", 00:28:56.187 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:56.187 "strip_size_kb": 0, 00:28:56.187 "state": "online", 00:28:56.187 "raid_level": "raid1", 00:28:56.187 "superblock": true, 00:28:56.187 "num_base_bdevs": 4, 00:28:56.187 "num_base_bdevs_discovered": 3, 00:28:56.187 "num_base_bdevs_operational": 3, 00:28:56.187 "base_bdevs_list": [ 00:28:56.187 { 00:28:56.187 "name": "spare", 00:28:56.187 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:28:56.187 "is_configured": true, 00:28:56.187 "data_offset": 2048, 00:28:56.187 "data_size": 63488 00:28:56.187 }, 00:28:56.187 { 00:28:56.187 "name": null, 00:28:56.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:56.187 "is_configured": false, 00:28:56.187 "data_offset": 2048, 00:28:56.187 "data_size": 63488 00:28:56.187 }, 00:28:56.187 { 00:28:56.187 "name": "BaseBdev3", 00:28:56.187 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:56.187 "is_configured": true, 00:28:56.187 "data_offset": 2048, 00:28:56.187 "data_size": 63488 00:28:56.187 }, 00:28:56.187 { 00:28:56.187 "name": "BaseBdev4", 00:28:56.187 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:56.187 "is_configured": true, 00:28:56.187 "data_offset": 2048, 00:28:56.187 "data_size": 63488 00:28:56.187 } 00:28:56.187 ] 00:28:56.187 }' 00:28:56.187 12:10:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:56.187 12:10:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.752 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:28:57.010 [2024-07-21 12:10:55.708249] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:57.010 [2024-07-21 12:10:55.708318] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:57.010 [2024-07-21 12:10:55.708467] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:57.010 [2024-07-21 12:10:55.708582] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:57.010 [2024-07-21 12:10:55.708598] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:28:57.010 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:28:57.010 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:57.268 12:10:55 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:28:57.527 /dev/nbd0 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:57.527 1+0 records in 00:28:57.527 1+0 records out 00:28:57.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601155 s, 6.8 MB/s 00:28:57.527 12:10:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:57.527 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:28:57.785 /dev/nbd1 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:28:57.785 1+0 records in 00:28:57.785 1+0 records out 00:28:57.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649041 s, 6.3 MB/s 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:28:57.785 12:10:56 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:57.785 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:28:58.042 12:10:56 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:28:58.300 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:28:58.558 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:28:58.817 [2024-07-21 12:10:57.557232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:28:58.817 [2024-07-21 12:10:57.557376] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:58.817 [2024-07-21 12:10:57.557421] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:28:58.817 [2024-07-21 12:10:57.557451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:58.817 [2024-07-21 12:10:57.560061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:58.817 
[2024-07-21 12:10:57.560117] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:28:58.817 [2024-07-21 12:10:57.560230] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:28:58.817 [2024-07-21 12:10:57.560287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:28:58.817 [2024-07-21 12:10:57.560455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:58.817 [2024-07-21 12:10:57.560580] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:28:58.817 spare 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:58.817 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:58.817 [2024-07-21 12:10:57.660692] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:28:58.817 [2024-07-21 12:10:57.660795] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:58.817 [2024-07-21 12:10:57.661046] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:28:58.817 [2024-07-21 12:10:57.661640] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:28:58.817 [2024-07-21 12:10:57.661664] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:28:58.817 [2024-07-21 12:10:57.661883] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:59.077 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:28:59.077 "name": "raid_bdev1", 00:28:59.077 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:59.077 "strip_size_kb": 0, 00:28:59.077 "state": "online", 00:28:59.077 "raid_level": "raid1", 00:28:59.077 "superblock": true, 00:28:59.077 "num_base_bdevs": 4, 00:28:59.077 "num_base_bdevs_discovered": 3, 00:28:59.077 "num_base_bdevs_operational": 3, 00:28:59.077 "base_bdevs_list": [ 00:28:59.077 { 00:28:59.077 "name": "spare", 00:28:59.077 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:28:59.077 "is_configured": true, 00:28:59.077 "data_offset": 2048, 00:28:59.077 "data_size": 63488 00:28:59.077 }, 00:28:59.077 { 
00:28:59.077 "name": null, 00:28:59.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.077 "is_configured": false, 00:28:59.077 "data_offset": 2048, 00:28:59.077 "data_size": 63488 00:28:59.077 }, 00:28:59.077 { 00:28:59.077 "name": "BaseBdev3", 00:28:59.077 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:59.077 "is_configured": true, 00:28:59.077 "data_offset": 2048, 00:28:59.077 "data_size": 63488 00:28:59.077 }, 00:28:59.077 { 00:28:59.077 "name": "BaseBdev4", 00:28:59.077 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:59.077 "is_configured": true, 00:28:59.077 "data_offset": 2048, 00:28:59.077 "data_size": 63488 00:28:59.077 } 00:28:59.077 ] 00:28:59.077 }' 00:28:59.077 12:10:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:28:59.077 12:10:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:59.644 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:28:59.644 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:28:59.644 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:28:59.645 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:28:59.645 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:28:59.645 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:28:59.645 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:59.904 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:59.904 "name": "raid_bdev1", 00:28:59.904 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:28:59.904 "strip_size_kb": 0, 00:28:59.904 "state": "online", 00:28:59.904 "raid_level": "raid1", 00:28:59.904 "superblock": true, 00:28:59.904 "num_base_bdevs": 4, 00:28:59.904 "num_base_bdevs_discovered": 3, 00:28:59.904 "num_base_bdevs_operational": 3, 00:28:59.904 "base_bdevs_list": [ 00:28:59.904 { 00:28:59.904 "name": "spare", 00:28:59.904 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:28:59.904 "is_configured": true, 00:28:59.904 "data_offset": 2048, 00:28:59.904 "data_size": 63488 00:28:59.904 }, 00:28:59.904 { 00:28:59.904 "name": null, 00:28:59.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:59.904 "is_configured": false, 00:28:59.904 "data_offset": 2048, 00:28:59.904 "data_size": 63488 00:28:59.904 }, 00:28:59.904 { 00:28:59.904 "name": "BaseBdev3", 00:28:59.904 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:28:59.904 "is_configured": true, 00:28:59.904 "data_offset": 2048, 00:28:59.904 "data_size": 63488 00:28:59.904 }, 00:28:59.904 { 00:28:59.904 "name": "BaseBdev4", 00:28:59.904 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:28:59.904 "is_configured": true, 00:28:59.904 "data_offset": 2048, 00:28:59.904 "data_size": 63488 00:28:59.904 } 00:28:59.904 ] 00:28:59.904 }' 00:28:59.904 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:28:59.904 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:28:59.904 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:00.162 12:10:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:00.162 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.162 12:10:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:00.423 [2024-07-21 12:10:59.238265] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:00.423 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.689 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:00.689 "name": "raid_bdev1", 00:29:00.689 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:00.689 "strip_size_kb": 0, 00:29:00.689 "state": "online", 00:29:00.689 "raid_level": "raid1", 00:29:00.689 "superblock": true, 00:29:00.689 "num_base_bdevs": 4, 00:29:00.689 "num_base_bdevs_discovered": 2, 00:29:00.689 "num_base_bdevs_operational": 2, 00:29:00.689 "base_bdevs_list": [ 00:29:00.689 { 00:29:00.689 "name": null, 00:29:00.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.689 "is_configured": false, 00:29:00.689 "data_offset": 2048, 00:29:00.689 "data_size": 63488 00:29:00.689 }, 00:29:00.689 { 00:29:00.689 "name": null, 00:29:00.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:00.689 "is_configured": false, 00:29:00.689 "data_offset": 2048, 00:29:00.689 "data_size": 63488 00:29:00.689 }, 00:29:00.689 { 00:29:00.689 "name": "BaseBdev3", 00:29:00.689 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:00.689 "is_configured": true, 00:29:00.689 "data_offset": 2048, 00:29:00.689 "data_size": 63488 00:29:00.689 }, 00:29:00.689 { 00:29:00.689 "name": "BaseBdev4", 00:29:00.689 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:00.689 "is_configured": true, 00:29:00.689 "data_offset": 2048, 00:29:00.689 "data_size": 
63488 00:29:00.689 } 00:29:00.689 ] 00:29:00.689 }' 00:29:00.689 12:10:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:00.689 12:10:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:01.622 12:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:01.622 [2024-07-21 12:11:00.378567] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:01.623 [2024-07-21 12:11:00.378860] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:01.623 [2024-07-21 12:11:00.378878] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:01.623 [2024-07-21 12:11:00.378980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:01.623 [2024-07-21 12:11:00.384292] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:29:01.623 [2024-07-21 12:11:00.386496] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:01.623 12:11:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:02.559 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:02.559 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:02.559 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:02.559 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:02.559 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:02.559 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:02.559 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.818 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:02.818 "name": "raid_bdev1", 00:29:02.818 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:02.818 "strip_size_kb": 0, 00:29:02.818 "state": "online", 00:29:02.818 "raid_level": "raid1", 00:29:02.818 "superblock": true, 00:29:02.818 "num_base_bdevs": 4, 00:29:02.818 "num_base_bdevs_discovered": 3, 00:29:02.818 "num_base_bdevs_operational": 3, 00:29:02.818 "process": { 00:29:02.818 "type": "rebuild", 00:29:02.818 "target": "spare", 00:29:02.818 "progress": { 00:29:02.818 "blocks": 24576, 00:29:02.818 "percent": 38 00:29:02.818 } 00:29:02.818 }, 00:29:02.818 "base_bdevs_list": [ 00:29:02.818 { 00:29:02.818 "name": "spare", 00:29:02.818 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:29:02.818 "is_configured": true, 00:29:02.818 "data_offset": 2048, 00:29:02.818 "data_size": 63488 00:29:02.818 }, 00:29:02.818 { 00:29:02.818 "name": null, 00:29:02.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.818 "is_configured": false, 00:29:02.818 "data_offset": 2048, 00:29:02.818 "data_size": 63488 00:29:02.818 }, 00:29:02.818 { 00:29:02.818 "name": "BaseBdev3", 00:29:02.818 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:02.818 "is_configured": true, 00:29:02.818 "data_offset": 2048, 
00:29:02.818 "data_size": 63488 00:29:02.818 }, 00:29:02.818 { 00:29:02.818 "name": "BaseBdev4", 00:29:02.818 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:02.818 "is_configured": true, 00:29:02.818 "data_offset": 2048, 00:29:02.818 "data_size": 63488 00:29:02.818 } 00:29:02.818 ] 00:29:02.818 }' 00:29:02.818 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:03.077 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:03.077 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:03.077 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:03.077 12:11:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:03.336 [2024-07-21 12:11:01.968879] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:03.336 [2024-07-21 12:11:01.998012] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:03.336 [2024-07-21 12:11:01.998129] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:03.336 [2024-07-21 12:11:01.998152] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:03.336 [2024-07-21 12:11:01.998162] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:03.336 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.595 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:03.595 "name": "raid_bdev1", 00:29:03.595 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:03.595 "strip_size_kb": 0, 00:29:03.595 "state": "online", 00:29:03.595 "raid_level": "raid1", 00:29:03.595 "superblock": true, 00:29:03.595 "num_base_bdevs": 4, 00:29:03.595 "num_base_bdevs_discovered": 2, 00:29:03.595 "num_base_bdevs_operational": 2, 00:29:03.595 "base_bdevs_list": [ 00:29:03.595 { 00:29:03.595 "name": null, 00:29:03.595 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:29:03.595 "is_configured": false, 00:29:03.595 "data_offset": 2048, 00:29:03.595 "data_size": 63488 00:29:03.595 }, 00:29:03.595 { 00:29:03.595 "name": null, 00:29:03.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.595 "is_configured": false, 00:29:03.595 "data_offset": 2048, 00:29:03.595 "data_size": 63488 00:29:03.595 }, 00:29:03.595 { 00:29:03.595 "name": "BaseBdev3", 00:29:03.595 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:03.595 "is_configured": true, 00:29:03.595 "data_offset": 2048, 00:29:03.595 "data_size": 63488 00:29:03.595 }, 00:29:03.595 { 00:29:03.595 "name": "BaseBdev4", 00:29:03.595 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:03.595 "is_configured": true, 00:29:03.595 "data_offset": 2048, 00:29:03.595 "data_size": 63488 00:29:03.595 } 00:29:03.595 ] 00:29:03.595 }' 00:29:03.595 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:03.595 12:11:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:04.163 12:11:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:04.422 [2024-07-21 12:11:03.073212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:04.422 [2024-07-21 12:11:03.073356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.422 [2024-07-21 12:11:03.073414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:29:04.422 [2024-07-21 12:11:03.073441] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.422 [2024-07-21 12:11:03.074029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.422 [2024-07-21 12:11:03.074071] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:04.422 [2024-07-21 12:11:03.074216] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:04.422 [2024-07-21 12:11:03.074234] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:04.422 [2024-07-21 12:11:03.074243] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:29:04.422 [2024-07-21 12:11:03.074297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:04.422 [2024-07-21 12:11:03.079600] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2160 00:29:04.422 spare 00:29:04.422 [2024-07-21 12:11:03.081777] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:04.422 12:11:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:05.352 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:05.352 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:05.352 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:05.353 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:05.353 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:05.353 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.353 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:05.609 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:05.609 "name": "raid_bdev1", 00:29:05.609 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:05.609 "strip_size_kb": 0, 00:29:05.609 "state": "online", 00:29:05.609 "raid_level": "raid1", 00:29:05.610 "superblock": true, 00:29:05.610 "num_base_bdevs": 4, 00:29:05.610 "num_base_bdevs_discovered": 3, 00:29:05.610 "num_base_bdevs_operational": 3, 00:29:05.610 "process": { 00:29:05.610 "type": "rebuild", 00:29:05.610 "target": "spare", 00:29:05.610 "progress": { 00:29:05.610 "blocks": 24576, 00:29:05.610 "percent": 38 00:29:05.610 } 00:29:05.610 }, 00:29:05.610 "base_bdevs_list": [ 00:29:05.610 { 00:29:05.610 "name": "spare", 00:29:05.610 "uuid": "17e555a6-9ee8-5f39-852e-c4c5a5dbd54f", 00:29:05.610 "is_configured": true, 00:29:05.610 "data_offset": 2048, 00:29:05.610 "data_size": 63488 00:29:05.610 }, 00:29:05.610 { 00:29:05.610 "name": null, 00:29:05.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:05.610 "is_configured": false, 00:29:05.610 "data_offset": 2048, 00:29:05.610 "data_size": 63488 00:29:05.610 }, 00:29:05.610 { 00:29:05.610 "name": "BaseBdev3", 00:29:05.610 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:05.610 "is_configured": true, 00:29:05.610 "data_offset": 2048, 00:29:05.610 "data_size": 63488 00:29:05.610 }, 00:29:05.610 { 00:29:05.610 "name": "BaseBdev4", 00:29:05.610 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:05.610 "is_configured": true, 00:29:05.610 "data_offset": 2048, 00:29:05.610 "data_size": 63488 00:29:05.610 } 00:29:05.610 ] 00:29:05.610 }' 00:29:05.610 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:05.610 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:05.610 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:05.610 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:05.610 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:05.868 [2024-07-21 12:11:04.676985] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:05.868 [2024-07-21 12:11:04.692367] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:05.868 [2024-07-21 12:11:04.692454] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:05.868 [2024-07-21 12:11:04.692473] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:05.868 [2024-07-21 12:11:04.692482] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:05.868 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:06.127 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:06.127 "name": "raid_bdev1", 00:29:06.127 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:06.127 "strip_size_kb": 0, 00:29:06.127 "state": "online", 00:29:06.127 "raid_level": "raid1", 00:29:06.127 "superblock": true, 00:29:06.127 "num_base_bdevs": 4, 00:29:06.127 "num_base_bdevs_discovered": 2, 00:29:06.127 "num_base_bdevs_operational": 2, 00:29:06.127 "base_bdevs_list": [ 00:29:06.127 { 00:29:06.127 "name": null, 00:29:06.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.127 "is_configured": false, 00:29:06.127 "data_offset": 2048, 00:29:06.127 "data_size": 63488 00:29:06.127 }, 00:29:06.127 { 00:29:06.127 "name": null, 00:29:06.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:06.127 "is_configured": false, 00:29:06.127 "data_offset": 2048, 00:29:06.127 "data_size": 63488 00:29:06.127 }, 00:29:06.127 { 00:29:06.127 "name": "BaseBdev3", 00:29:06.127 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:06.127 "is_configured": true, 00:29:06.127 "data_offset": 2048, 00:29:06.127 "data_size": 63488 00:29:06.127 }, 00:29:06.127 { 00:29:06.127 "name": "BaseBdev4", 00:29:06.127 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:06.127 "is_configured": true, 00:29:06.127 "data_offset": 2048, 00:29:06.127 "data_size": 63488 00:29:06.127 } 00:29:06.127 ] 00:29:06.127 }' 
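The verify_raid_bdev_state call traced above reduces to pulling raid_bdev1's JSON over RPC and comparing individual fields against the expected values (online, raid1, 2 operational base bdevs after the spare is deleted). A hedged reconstruction of that check, using only the RPC and jq filter that appear in this log (the real comparisons live in bdev_raid.sh and may differ in detail):

  tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r '.state' <<< "$tmp") == "online" ]]                  # expected_state
  [[ $(jq -r '.num_base_bdevs_operational' <<< "$tmp") -eq 2 ]]   # after removing spare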
00:29:06.127 12:11:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:06.127 12:11:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:07.062 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:07.062 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:07.062 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:07.062 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:07.062 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:07.062 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:07.062 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.062 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:07.062 "name": "raid_bdev1", 00:29:07.062 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:07.062 "strip_size_kb": 0, 00:29:07.062 "state": "online", 00:29:07.062 "raid_level": "raid1", 00:29:07.062 "superblock": true, 00:29:07.062 "num_base_bdevs": 4, 00:29:07.062 "num_base_bdevs_discovered": 2, 00:29:07.062 "num_base_bdevs_operational": 2, 00:29:07.062 "base_bdevs_list": [ 00:29:07.062 { 00:29:07.062 "name": null, 00:29:07.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.063 "is_configured": false, 00:29:07.063 "data_offset": 2048, 00:29:07.063 "data_size": 63488 00:29:07.063 }, 00:29:07.063 { 00:29:07.063 "name": null, 00:29:07.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:07.063 "is_configured": false, 00:29:07.063 "data_offset": 2048, 00:29:07.063 "data_size": 63488 00:29:07.063 }, 00:29:07.063 { 00:29:07.063 "name": "BaseBdev3", 00:29:07.063 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:07.063 "is_configured": true, 00:29:07.063 "data_offset": 2048, 00:29:07.063 "data_size": 63488 00:29:07.063 }, 00:29:07.063 { 00:29:07.063 "name": "BaseBdev4", 00:29:07.063 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:07.063 "is_configured": true, 00:29:07.063 "data_offset": 2048, 00:29:07.063 "data_size": 63488 00:29:07.063 } 00:29:07.063 ] 00:29:07.063 }' 00:29:07.063 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:07.063 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:07.063 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:07.321 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:07.321 12:11:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:29:07.321 12:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:07.607 [2024-07-21 12:11:06.365741] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:07.608 [2024-07-21 12:11:06.365905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
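The @771/@772 steps above tear down the BaseBdev1 passthru and recreate it on top of BaseBdev1_malloc, so that its superblock (seq_number 1, older than the raid bdev's 6, and without a matching uuid) is examined again. Isolated from the xtrace, the two RPCs are (sketch):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_delete BaseBdev1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1

Because the superblock no longer matches, the explicit bdev_raid_add_base_bdev attempt later in this log is expected to fail with -22 (Invalid argument), which the NOT wrapper turns into a test pass.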
00:29:07.608 [2024-07-21 12:11:06.365973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:29:07.608 [2024-07-21 12:11:06.366021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:07.608 [2024-07-21 12:11:06.366609] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:07.608 [2024-07-21 12:11:06.366664] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:07.608 [2024-07-21 12:11:06.366777] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:29:07.608 [2024-07-21 12:11:06.366794] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:07.608 [2024-07-21 12:11:06.366802] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:07.608 BaseBdev1 00:29:07.608 12:11:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:08.543 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.802 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:08.802 "name": "raid_bdev1", 00:29:08.802 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:08.802 "strip_size_kb": 0, 00:29:08.802 "state": "online", 00:29:08.802 "raid_level": "raid1", 00:29:08.802 "superblock": true, 00:29:08.802 "num_base_bdevs": 4, 00:29:08.802 "num_base_bdevs_discovered": 2, 00:29:08.802 "num_base_bdevs_operational": 2, 00:29:08.802 "base_bdevs_list": [ 00:29:08.802 { 00:29:08.802 "name": null, 00:29:08.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.802 "is_configured": false, 00:29:08.802 "data_offset": 2048, 00:29:08.802 "data_size": 63488 00:29:08.802 }, 00:29:08.802 { 00:29:08.802 "name": null, 00:29:08.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:08.802 "is_configured": false, 00:29:08.802 "data_offset": 2048, 00:29:08.802 "data_size": 63488 00:29:08.802 }, 00:29:08.802 { 00:29:08.802 "name": "BaseBdev3", 00:29:08.802 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:08.802 "is_configured": 
true, 00:29:08.802 "data_offset": 2048, 00:29:08.802 "data_size": 63488 00:29:08.802 }, 00:29:08.802 { 00:29:08.802 "name": "BaseBdev4", 00:29:08.802 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:08.802 "is_configured": true, 00:29:08.802 "data_offset": 2048, 00:29:08.802 "data_size": 63488 00:29:08.802 } 00:29:08.802 ] 00:29:08.802 }' 00:29:08.802 12:11:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:08.802 12:11:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:09.737 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:09.737 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:09.737 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:09.737 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:09.737 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:09.737 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:09.737 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:09.738 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:09.738 "name": "raid_bdev1", 00:29:09.738 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:09.738 "strip_size_kb": 0, 00:29:09.738 "state": "online", 00:29:09.738 "raid_level": "raid1", 00:29:09.738 "superblock": true, 00:29:09.738 "num_base_bdevs": 4, 00:29:09.738 "num_base_bdevs_discovered": 2, 00:29:09.738 "num_base_bdevs_operational": 2, 00:29:09.738 "base_bdevs_list": [ 00:29:09.738 { 00:29:09.738 "name": null, 00:29:09.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.738 "is_configured": false, 00:29:09.738 "data_offset": 2048, 00:29:09.738 "data_size": 63488 00:29:09.738 }, 00:29:09.738 { 00:29:09.738 "name": null, 00:29:09.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:09.738 "is_configured": false, 00:29:09.738 "data_offset": 2048, 00:29:09.738 "data_size": 63488 00:29:09.738 }, 00:29:09.738 { 00:29:09.738 "name": "BaseBdev3", 00:29:09.738 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:09.738 "is_configured": true, 00:29:09.738 "data_offset": 2048, 00:29:09.738 "data_size": 63488 00:29:09.738 }, 00:29:09.738 { 00:29:09.738 "name": "BaseBdev4", 00:29:09.738 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:09.738 "is_configured": true, 00:29:09.738 "data_offset": 2048, 00:29:09.738 "data_size": 63488 00:29:09.738 } 00:29:09.738 ] 00:29:09.738 }' 00:29:09.738 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@648 
-- # local es=0 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:29:09.996 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:29:10.254 [2024-07-21 12:11:08.918974] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:10.254 [2024-07-21 12:11:08.919240] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:29:10.254 [2024-07-21 12:11:08.919257] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:29:10.254 request: 00:29:10.254 { 00:29:10.254 "raid_bdev": "raid_bdev1", 00:29:10.254 "base_bdev": "BaseBdev1", 00:29:10.254 "method": "bdev_raid_add_base_bdev", 00:29:10.254 "req_id": 1 00:29:10.254 } 00:29:10.254 Got JSON-RPC error response 00:29:10.254 response: 00:29:10.254 { 00:29:10.254 "code": -22, 00:29:10.254 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:29:10.254 } 00:29:10.254 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:29:10.254 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:10.254 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:10.254 12:11:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:10.254 12:11:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:11.189 12:11:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:11.448 12:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:11.448 "name": "raid_bdev1", 00:29:11.448 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:11.448 "strip_size_kb": 0, 00:29:11.448 "state": "online", 00:29:11.448 "raid_level": "raid1", 00:29:11.448 "superblock": true, 00:29:11.448 "num_base_bdevs": 4, 00:29:11.448 "num_base_bdevs_discovered": 2, 00:29:11.448 "num_base_bdevs_operational": 2, 00:29:11.448 "base_bdevs_list": [ 00:29:11.448 { 00:29:11.448 "name": null, 00:29:11.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.448 "is_configured": false, 00:29:11.448 "data_offset": 2048, 00:29:11.448 "data_size": 63488 00:29:11.448 }, 00:29:11.448 { 00:29:11.448 "name": null, 00:29:11.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:11.448 "is_configured": false, 00:29:11.448 "data_offset": 2048, 00:29:11.448 "data_size": 63488 00:29:11.448 }, 00:29:11.449 { 00:29:11.449 "name": "BaseBdev3", 00:29:11.449 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:11.449 "is_configured": true, 00:29:11.449 "data_offset": 2048, 00:29:11.449 "data_size": 63488 00:29:11.449 }, 00:29:11.449 { 00:29:11.449 "name": "BaseBdev4", 00:29:11.449 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:11.449 "is_configured": true, 00:29:11.449 "data_offset": 2048, 00:29:11.449 "data_size": 63488 00:29:11.449 } 00:29:11.449 ] 00:29:11.449 }' 00:29:11.449 12:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:11.449 12:11:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:12.015 12:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:12.015 12:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:12.015 12:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:12.015 12:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:12.015 12:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:12.015 12:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:12.015 12:11:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.274 12:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:12.274 "name": "raid_bdev1", 00:29:12.274 "uuid": "6a84eacc-47ef-47de-accc-8444f6cef77a", 00:29:12.274 "strip_size_kb": 0, 00:29:12.274 "state": "online", 00:29:12.274 "raid_level": "raid1", 00:29:12.274 "superblock": 
true, 00:29:12.274 "num_base_bdevs": 4, 00:29:12.274 "num_base_bdevs_discovered": 2, 00:29:12.274 "num_base_bdevs_operational": 2, 00:29:12.274 "base_bdevs_list": [ 00:29:12.274 { 00:29:12.274 "name": null, 00:29:12.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.274 "is_configured": false, 00:29:12.274 "data_offset": 2048, 00:29:12.274 "data_size": 63488 00:29:12.274 }, 00:29:12.274 { 00:29:12.274 "name": null, 00:29:12.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:12.274 "is_configured": false, 00:29:12.274 "data_offset": 2048, 00:29:12.274 "data_size": 63488 00:29:12.274 }, 00:29:12.274 { 00:29:12.274 "name": "BaseBdev3", 00:29:12.274 "uuid": "c6d34f6c-35b2-51df-b4b5-c83ea7e089ec", 00:29:12.274 "is_configured": true, 00:29:12.274 "data_offset": 2048, 00:29:12.274 "data_size": 63488 00:29:12.274 }, 00:29:12.274 { 00:29:12.274 "name": "BaseBdev4", 00:29:12.274 "uuid": "5a722f3e-ed43-59b1-ae8e-37753ccb02dc", 00:29:12.274 "is_configured": true, 00:29:12.274 "data_offset": 2048, 00:29:12.274 "data_size": 63488 00:29:12.274 } 00:29:12.274 ] 00:29:12.274 }' 00:29:12.274 12:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 157872 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 157872 ']' 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 157872 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 157872 00:29:12.533 killing process with pid 157872 00:29:12.533 Received shutdown signal, test time was about 60.000000 seconds 00:29:12.533 00:29:12.533 Latency(us) 00:29:12.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.533 =================================================================================================================== 00:29:12.533 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 157872' 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 157872 00:29:12.533 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 157872 00:29:12.533 [2024-07-21 12:11:11.231767] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:12.533 [2024-07-21 12:11:11.231971] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:12.533 [2024-07-21 12:11:11.232064] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:29:12.533 [2024-07-21 12:11:11.232078] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:29:12.533 [2024-07-21 12:11:11.292071] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:12.791 12:11:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:29:12.791 00:29:12.791 real 0m38.006s 00:29:12.791 user 0m57.085s 00:29:12.791 sys 0m5.533s 00:29:12.791 ************************************ 00:29:12.791 END TEST raid_rebuild_test_sb 00:29:12.791 ************************************ 00:29:12.791 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:12.791 12:11:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:13.049 12:11:11 bdev_raid -- bdev/bdev_raid.sh@879 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:29:13.049 12:11:11 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:29:13.049 12:11:11 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:13.049 12:11:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:13.049 ************************************ 00:29:13.049 START TEST raid_rebuild_test_io 00:29:13.049 ************************************ 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 false true true 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 
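The run of @573 trace lines above is the expansion of a small loop in bdev_raid.sh that assembles the list of base bdev names for the raid_rebuild_test_io run. An approximate reconstruction (sketch; the script echoes the names and captures them into an array, as the trace suggests):

  num_base_bdevs=4                      # from "raid_rebuild_test raid1 4 false true true"
  base_bdevs=()
  for (( i = 1; i <= num_base_bdevs; i++ )); do
      base_bdevs+=("BaseBdev$i")
  done
  # -> base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4'), as seen in the trace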
00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # raid_pid=158826 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 158826 /var/tmp/spdk-raid.sock 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@827 -- # '[' -z 158826 ']' 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:13.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:13.049 12:11:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:13.049 [2024-07-21 12:11:11.747310] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:13.049 [2024-07-21 12:11:11.748007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158826 ] 00:29:13.049 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:13.049 Zero copy mechanism will not be used. 
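The zero-copy notice above follows directly from the I/O size passed to bdevperf:

  # arithmetic only -- why zero copy is disabled for this run
  #   -o 3M            => 3 * 1024 * 1024 = 3145728-byte I/Os
  #   3145728 > 65536  => larger than the zero copy threshold, so zero copy is skipped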
00:29:13.049 [2024-07-21 12:11:11.901970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.308 [2024-07-21 12:11:12.003706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.308 [2024-07-21 12:11:12.075502] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:13.874 12:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:13.874 12:11:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # return 0 00:29:13.874 12:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:13.874 12:11:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:14.132 BaseBdev1_malloc 00:29:14.389 12:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:14.389 [2024-07-21 12:11:13.200870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:14.389 [2024-07-21 12:11:13.201006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.389 [2024-07-21 12:11:13.201059] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:29:14.389 [2024-07-21 12:11:13.201113] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.389 [2024-07-21 12:11:13.203763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.389 [2024-07-21 12:11:13.203830] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:14.389 BaseBdev1 00:29:14.389 12:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:14.389 12:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:14.646 BaseBdev2_malloc 00:29:14.646 12:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:14.903 [2024-07-21 12:11:13.630979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:14.903 [2024-07-21 12:11:13.631130] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:14.903 [2024-07-21 12:11:13.631219] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:14.903 [2024-07-21 12:11:13.631263] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:14.903 [2024-07-21 12:11:13.633819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:14.903 [2024-07-21 12:11:13.633870] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:14.903 BaseBdev2 00:29:14.903 12:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:14.903 12:11:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:15.160 BaseBdev3_malloc 00:29:15.161 12:11:13 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:15.417 [2024-07-21 12:11:14.098781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:15.417 [2024-07-21 12:11:14.098930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.417 [2024-07-21 12:11:14.098982] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:29:15.417 [2024-07-21 12:11:14.099066] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.417 [2024-07-21 12:11:14.101827] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.417 [2024-07-21 12:11:14.101890] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:15.417 BaseBdev3 00:29:15.417 12:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:15.417 12:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:15.675 BaseBdev4_malloc 00:29:15.675 12:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:15.675 [2024-07-21 12:11:14.533224] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:15.675 [2024-07-21 12:11:14.533380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:15.675 [2024-07-21 12:11:14.533423] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:15.675 [2024-07-21 12:11:14.533479] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:15.675 [2024-07-21 12:11:14.536096] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:15.675 [2024-07-21 12:11:14.536151] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:15.675 BaseBdev4 00:29:15.933 12:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:15.933 spare_malloc 00:29:15.933 12:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:16.190 spare_delay 00:29:16.190 12:11:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:16.458 [2024-07-21 12:11:15.179881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:16.458 [2024-07-21 12:11:15.180036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:16.458 [2024-07-21 12:11:15.180080] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:16.458 [2024-07-21 12:11:15.180129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:16.458 [2024-07-21 12:11:15.182830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:16.458 [2024-07-21 
12:11:15.182888] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:16.458 spare 00:29:16.458 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:16.756 [2024-07-21 12:11:15.396009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:16.756 [2024-07-21 12:11:15.398359] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:16.756 [2024-07-21 12:11:15.398455] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:16.756 [2024-07-21 12:11:15.398527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:16.756 [2024-07-21 12:11:15.398688] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:29:16.756 [2024-07-21 12:11:15.398702] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:29:16.756 [2024-07-21 12:11:15.398897] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:29:16.756 [2024-07-21 12:11:15.399649] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:29:16.756 [2024-07-21 12:11:15.399803] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:29:16.756 [2024-07-21 12:11:15.400203] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:16.756 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:17.026 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:17.026 "name": "raid_bdev1", 00:29:17.026 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:17.026 "strip_size_kb": 0, 00:29:17.026 "state": "online", 00:29:17.026 "raid_level": "raid1", 00:29:17.026 "superblock": false, 00:29:17.026 "num_base_bdevs": 4, 00:29:17.026 "num_base_bdevs_discovered": 4, 00:29:17.026 "num_base_bdevs_operational": 4, 00:29:17.026 "base_bdevs_list": [ 00:29:17.026 { 
00:29:17.026 "name": "BaseBdev1", 00:29:17.026 "uuid": "5efbfc0e-76e5-5774-8be1-3d44731f7a25", 00:29:17.026 "is_configured": true, 00:29:17.026 "data_offset": 0, 00:29:17.026 "data_size": 65536 00:29:17.026 }, 00:29:17.026 { 00:29:17.026 "name": "BaseBdev2", 00:29:17.026 "uuid": "2007218b-041f-5ace-8155-9448fee47dac", 00:29:17.026 "is_configured": true, 00:29:17.026 "data_offset": 0, 00:29:17.026 "data_size": 65536 00:29:17.026 }, 00:29:17.026 { 00:29:17.026 "name": "BaseBdev3", 00:29:17.026 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:17.026 "is_configured": true, 00:29:17.026 "data_offset": 0, 00:29:17.026 "data_size": 65536 00:29:17.026 }, 00:29:17.026 { 00:29:17.026 "name": "BaseBdev4", 00:29:17.026 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:17.026 "is_configured": true, 00:29:17.026 "data_offset": 0, 00:29:17.026 "data_size": 65536 00:29:17.026 } 00:29:17.026 ] 00:29:17.026 }' 00:29:17.026 12:11:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:17.026 12:11:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:17.593 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:17.593 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:17.852 [2024-07-21 12:11:16.472756] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:17.852 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=65536 00:29:17.852 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:17.852 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:17.852 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:29:17.852 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:17.852 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:17.852 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:18.110 [2024-07-21 12:11:16.807820] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:18.110 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:18.110 Zero copy mechanism will not be used. 00:29:18.110 Running I/O for 60 seconds... 
00:29:18.110 [2024-07-21 12:11:16.939177] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:18.110 [2024-07-21 12:11:16.946241] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:18.111 12:11:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:18.679 12:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:18.679 "name": "raid_bdev1", 00:29:18.679 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:18.679 "strip_size_kb": 0, 00:29:18.679 "state": "online", 00:29:18.679 "raid_level": "raid1", 00:29:18.679 "superblock": false, 00:29:18.679 "num_base_bdevs": 4, 00:29:18.679 "num_base_bdevs_discovered": 3, 00:29:18.679 "num_base_bdevs_operational": 3, 00:29:18.679 "base_bdevs_list": [ 00:29:18.679 { 00:29:18.679 "name": null, 00:29:18.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.679 "is_configured": false, 00:29:18.679 "data_offset": 0, 00:29:18.679 "data_size": 65536 00:29:18.679 }, 00:29:18.679 { 00:29:18.679 "name": "BaseBdev2", 00:29:18.679 "uuid": "2007218b-041f-5ace-8155-9448fee47dac", 00:29:18.679 "is_configured": true, 00:29:18.679 "data_offset": 0, 00:29:18.679 "data_size": 65536 00:29:18.679 }, 00:29:18.679 { 00:29:18.679 "name": "BaseBdev3", 00:29:18.679 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:18.679 "is_configured": true, 00:29:18.679 "data_offset": 0, 00:29:18.679 "data_size": 65536 00:29:18.679 }, 00:29:18.679 { 00:29:18.679 "name": "BaseBdev4", 00:29:18.679 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:18.679 "is_configured": true, 00:29:18.679 "data_offset": 0, 00:29:18.679 "data_size": 65536 00:29:18.679 } 00:29:18.679 ] 00:29:18.679 }' 00:29:18.679 12:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:18.679 12:11:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:19.246 12:11:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:19.246 [2024-07-21 12:11:18.014346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev spare is claimed 00:29:19.246 12:11:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:19.246 [2024-07-21 12:11:18.065749] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:19.246 [2024-07-21 12:11:18.068462] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:19.504 [2024-07-21 12:11:18.333764] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:19.504 [2024-07-21 12:11:18.334388] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:20.070 [2024-07-21 12:11:18.737857] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:20.329 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:20.329 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:20.329 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:20.329 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:20.329 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:20.329 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:20.329 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:20.588 [2024-07-21 12:11:19.223495] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:20.588 [2024-07-21 12:11:19.225463] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:20.588 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:20.588 "name": "raid_bdev1", 00:29:20.588 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:20.588 "strip_size_kb": 0, 00:29:20.588 "state": "online", 00:29:20.588 "raid_level": "raid1", 00:29:20.588 "superblock": false, 00:29:20.588 "num_base_bdevs": 4, 00:29:20.588 "num_base_bdevs_discovered": 4, 00:29:20.588 "num_base_bdevs_operational": 4, 00:29:20.588 "process": { 00:29:20.588 "type": "rebuild", 00:29:20.588 "target": "spare", 00:29:20.588 "progress": { 00:29:20.588 "blocks": 14336, 00:29:20.588 "percent": 21 00:29:20.588 } 00:29:20.588 }, 00:29:20.588 "base_bdevs_list": [ 00:29:20.588 { 00:29:20.588 "name": "spare", 00:29:20.588 "uuid": "7d0de39a-2b8d-5670-ac98-e370828758bc", 00:29:20.588 "is_configured": true, 00:29:20.588 "data_offset": 0, 00:29:20.588 "data_size": 65536 00:29:20.588 }, 00:29:20.588 { 00:29:20.588 "name": "BaseBdev2", 00:29:20.588 "uuid": "2007218b-041f-5ace-8155-9448fee47dac", 00:29:20.588 "is_configured": true, 00:29:20.588 "data_offset": 0, 00:29:20.588 "data_size": 65536 00:29:20.588 }, 00:29:20.588 { 00:29:20.588 "name": "BaseBdev3", 00:29:20.588 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:20.588 "is_configured": true, 00:29:20.588 "data_offset": 0, 00:29:20.588 "data_size": 65536 00:29:20.588 }, 00:29:20.588 { 00:29:20.588 "name": "BaseBdev4", 00:29:20.588 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:20.588 
"is_configured": true, 00:29:20.588 "data_offset": 0, 00:29:20.588 "data_size": 65536 00:29:20.588 } 00:29:20.588 ] 00:29:20.588 }' 00:29:20.588 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:20.588 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:20.588 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:20.588 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:20.588 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:20.847 [2024-07-21 12:11:19.652308] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:20.847 [2024-07-21 12:11:19.670495] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:21.105 [2024-07-21 12:11:19.770933] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:21.105 [2024-07-21 12:11:19.774661] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:21.105 [2024-07-21 12:11:19.774833] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:21.105 [2024-07-21 12:11:19.774879] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:21.105 [2024-07-21 12:11:19.797346] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.105 12:11:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:21.364 12:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:21.364 "name": "raid_bdev1", 00:29:21.364 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:21.364 "strip_size_kb": 0, 00:29:21.364 "state": "online", 00:29:21.364 "raid_level": "raid1", 00:29:21.364 "superblock": false, 00:29:21.364 "num_base_bdevs": 4, 00:29:21.364 "num_base_bdevs_discovered": 3, 00:29:21.364 
"num_base_bdevs_operational": 3, 00:29:21.364 "base_bdevs_list": [ 00:29:21.364 { 00:29:21.364 "name": null, 00:29:21.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.364 "is_configured": false, 00:29:21.364 "data_offset": 0, 00:29:21.364 "data_size": 65536 00:29:21.364 }, 00:29:21.364 { 00:29:21.364 "name": "BaseBdev2", 00:29:21.364 "uuid": "2007218b-041f-5ace-8155-9448fee47dac", 00:29:21.364 "is_configured": true, 00:29:21.364 "data_offset": 0, 00:29:21.364 "data_size": 65536 00:29:21.364 }, 00:29:21.364 { 00:29:21.364 "name": "BaseBdev3", 00:29:21.364 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:21.364 "is_configured": true, 00:29:21.364 "data_offset": 0, 00:29:21.364 "data_size": 65536 00:29:21.364 }, 00:29:21.364 { 00:29:21.364 "name": "BaseBdev4", 00:29:21.364 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:21.364 "is_configured": true, 00:29:21.364 "data_offset": 0, 00:29:21.364 "data_size": 65536 00:29:21.364 } 00:29:21.364 ] 00:29:21.364 }' 00:29:21.364 12:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:21.364 12:11:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:21.932 12:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:21.932 12:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:21.932 12:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:21.932 12:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:21.932 12:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:21.932 12:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:21.932 12:11:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:22.190 12:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:22.190 "name": "raid_bdev1", 00:29:22.190 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:22.190 "strip_size_kb": 0, 00:29:22.190 "state": "online", 00:29:22.190 "raid_level": "raid1", 00:29:22.190 "superblock": false, 00:29:22.190 "num_base_bdevs": 4, 00:29:22.190 "num_base_bdevs_discovered": 3, 00:29:22.190 "num_base_bdevs_operational": 3, 00:29:22.190 "base_bdevs_list": [ 00:29:22.190 { 00:29:22.190 "name": null, 00:29:22.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.190 "is_configured": false, 00:29:22.190 "data_offset": 0, 00:29:22.190 "data_size": 65536 00:29:22.190 }, 00:29:22.190 { 00:29:22.190 "name": "BaseBdev2", 00:29:22.190 "uuid": "2007218b-041f-5ace-8155-9448fee47dac", 00:29:22.190 "is_configured": true, 00:29:22.190 "data_offset": 0, 00:29:22.190 "data_size": 65536 00:29:22.190 }, 00:29:22.190 { 00:29:22.190 "name": "BaseBdev3", 00:29:22.190 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:22.190 "is_configured": true, 00:29:22.190 "data_offset": 0, 00:29:22.190 "data_size": 65536 00:29:22.190 }, 00:29:22.190 { 00:29:22.190 "name": "BaseBdev4", 00:29:22.190 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:22.190 "is_configured": true, 00:29:22.190 "data_offset": 0, 00:29:22.190 "data_size": 65536 00:29:22.190 } 00:29:22.190 ] 00:29:22.190 }' 00:29:22.190 12:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r 
'.process.type // "none"' 00:29:22.449 12:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:22.449 12:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:22.449 12:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:22.449 12:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:22.707 [2024-07-21 12:11:21.324611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:22.707 12:11:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:22.707 [2024-07-21 12:11:21.385126] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:22.707 [2024-07-21 12:11:21.387730] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:22.707 [2024-07-21 12:11:21.491070] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:22.707 [2024-07-21 12:11:21.492072] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:22.965 [2024-07-21 12:11:21.713090] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:22.965 [2024-07-21 12:11:21.713942] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:23.222 [2024-07-21 12:11:22.051190] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:23.481 [2024-07-21 12:11:22.289790] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:23.481 [2024-07-21 12:11:22.290518] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:23.739 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:23.739 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:23.739 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:23.739 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:23.739 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:23.739 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:23.739 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:23.739 [2024-07-21 12:11:22.540720] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:23.997 "name": "raid_bdev1", 00:29:23.997 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:23.997 "strip_size_kb": 0, 00:29:23.997 "state": "online", 00:29:23.997 "raid_level": "raid1", 00:29:23.997 "superblock": false, 00:29:23.997 "num_base_bdevs": 4, 
00:29:23.997 "num_base_bdevs_discovered": 4, 00:29:23.997 "num_base_bdevs_operational": 4, 00:29:23.997 "process": { 00:29:23.997 "type": "rebuild", 00:29:23.997 "target": "spare", 00:29:23.997 "progress": { 00:29:23.997 "blocks": 14336, 00:29:23.997 "percent": 21 00:29:23.997 } 00:29:23.997 }, 00:29:23.997 "base_bdevs_list": [ 00:29:23.997 { 00:29:23.997 "name": "spare", 00:29:23.997 "uuid": "7d0de39a-2b8d-5670-ac98-e370828758bc", 00:29:23.997 "is_configured": true, 00:29:23.997 "data_offset": 0, 00:29:23.997 "data_size": 65536 00:29:23.997 }, 00:29:23.997 { 00:29:23.997 "name": "BaseBdev2", 00:29:23.997 "uuid": "2007218b-041f-5ace-8155-9448fee47dac", 00:29:23.997 "is_configured": true, 00:29:23.997 "data_offset": 0, 00:29:23.997 "data_size": 65536 00:29:23.997 }, 00:29:23.997 { 00:29:23.997 "name": "BaseBdev3", 00:29:23.997 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:23.997 "is_configured": true, 00:29:23.997 "data_offset": 0, 00:29:23.997 "data_size": 65536 00:29:23.997 }, 00:29:23.997 { 00:29:23.997 "name": "BaseBdev4", 00:29:23.997 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:23.997 "is_configured": true, 00:29:23.997 "data_offset": 0, 00:29:23.997 "data_size": 65536 00:29:23.997 } 00:29:23.997 ] 00:29:23.997 }' 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:23.997 12:11:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:24.259 [2024-07-21 12:11:22.981935] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:24.259 [2024-07-21 12:11:23.050867] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:29:24.259 [2024-07-21 12:11:23.051083] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:29:24.259 [2024-07-21 12:11:23.059817] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:24.259 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:24.259 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:24.259 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:24.259 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:24.260 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:24.260 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 
00:29:24.260 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:24.260 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:24.260 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.517 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.517 "name": "raid_bdev1", 00:29:24.517 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:24.517 "strip_size_kb": 0, 00:29:24.517 "state": "online", 00:29:24.517 "raid_level": "raid1", 00:29:24.517 "superblock": false, 00:29:24.517 "num_base_bdevs": 4, 00:29:24.517 "num_base_bdevs_discovered": 3, 00:29:24.517 "num_base_bdevs_operational": 3, 00:29:24.517 "process": { 00:29:24.517 "type": "rebuild", 00:29:24.517 "target": "spare", 00:29:24.517 "progress": { 00:29:24.517 "blocks": 26624, 00:29:24.517 "percent": 40 00:29:24.517 } 00:29:24.517 }, 00:29:24.517 "base_bdevs_list": [ 00:29:24.517 { 00:29:24.517 "name": "spare", 00:29:24.517 "uuid": "7d0de39a-2b8d-5670-ac98-e370828758bc", 00:29:24.517 "is_configured": true, 00:29:24.517 "data_offset": 0, 00:29:24.517 "data_size": 65536 00:29:24.517 }, 00:29:24.517 { 00:29:24.517 "name": null, 00:29:24.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:24.517 "is_configured": false, 00:29:24.517 "data_offset": 0, 00:29:24.517 "data_size": 65536 00:29:24.517 }, 00:29:24.517 { 00:29:24.517 "name": "BaseBdev3", 00:29:24.517 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:24.517 "is_configured": true, 00:29:24.517 "data_offset": 0, 00:29:24.517 "data_size": 65536 00:29:24.517 }, 00:29:24.517 { 00:29:24.517 "name": "BaseBdev4", 00:29:24.517 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:24.517 "is_configured": true, 00:29:24.517 "data_offset": 0, 00:29:24.517 "data_size": 65536 00:29:24.517 } 00:29:24.517 ] 00:29:24.517 }' 00:29:24.517 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@705 -- # local timeout=968 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:24.775 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:29:24.775 [2024-07-21 12:11:23.623310] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:24.775 [2024-07-21 12:11:23.624418] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:25.032 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:25.032 "name": "raid_bdev1", 00:29:25.032 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:25.032 "strip_size_kb": 0, 00:29:25.032 "state": "online", 00:29:25.032 "raid_level": "raid1", 00:29:25.032 "superblock": false, 00:29:25.032 "num_base_bdevs": 4, 00:29:25.032 "num_base_bdevs_discovered": 3, 00:29:25.032 "num_base_bdevs_operational": 3, 00:29:25.032 "process": { 00:29:25.032 "type": "rebuild", 00:29:25.032 "target": "spare", 00:29:25.032 "progress": { 00:29:25.032 "blocks": 32768, 00:29:25.032 "percent": 50 00:29:25.032 } 00:29:25.032 }, 00:29:25.032 "base_bdevs_list": [ 00:29:25.032 { 00:29:25.032 "name": "spare", 00:29:25.032 "uuid": "7d0de39a-2b8d-5670-ac98-e370828758bc", 00:29:25.032 "is_configured": true, 00:29:25.032 "data_offset": 0, 00:29:25.032 "data_size": 65536 00:29:25.032 }, 00:29:25.032 { 00:29:25.032 "name": null, 00:29:25.032 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:25.032 "is_configured": false, 00:29:25.032 "data_offset": 0, 00:29:25.032 "data_size": 65536 00:29:25.032 }, 00:29:25.032 { 00:29:25.032 "name": "BaseBdev3", 00:29:25.032 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:25.032 "is_configured": true, 00:29:25.033 "data_offset": 0, 00:29:25.033 "data_size": 65536 00:29:25.033 }, 00:29:25.033 { 00:29:25.033 "name": "BaseBdev4", 00:29:25.033 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:25.033 "is_configured": true, 00:29:25.033 "data_offset": 0, 00:29:25.033 "data_size": 65536 00:29:25.033 } 00:29:25.033 ] 00:29:25.033 }' 00:29:25.033 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:25.033 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:25.033 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:25.033 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:25.033 12:11:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:25.033 [2024-07-21 12:11:23.849565] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:29:25.290 [2024-07-21 12:11:24.072149] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:25.290 [2024-07-21 12:11:24.073346] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:29:25.548 [2024-07-21 12:11:24.304840] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:25.548 [2024-07-21 12:11:24.305339] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:26.114 12:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:26.114 12:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:29:26.114 12:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:26.114 12:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:26.114 12:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:26.114 12:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:26.114 12:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:26.114 12:11:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:26.372 12:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:26.372 "name": "raid_bdev1", 00:29:26.372 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:26.372 "strip_size_kb": 0, 00:29:26.372 "state": "online", 00:29:26.372 "raid_level": "raid1", 00:29:26.373 "superblock": false, 00:29:26.373 "num_base_bdevs": 4, 00:29:26.373 "num_base_bdevs_discovered": 3, 00:29:26.373 "num_base_bdevs_operational": 3, 00:29:26.373 "process": { 00:29:26.373 "type": "rebuild", 00:29:26.373 "target": "spare", 00:29:26.373 "progress": { 00:29:26.373 "blocks": 51200, 00:29:26.373 "percent": 78 00:29:26.373 } 00:29:26.373 }, 00:29:26.373 "base_bdevs_list": [ 00:29:26.373 { 00:29:26.373 "name": "spare", 00:29:26.373 "uuid": "7d0de39a-2b8d-5670-ac98-e370828758bc", 00:29:26.373 "is_configured": true, 00:29:26.373 "data_offset": 0, 00:29:26.373 "data_size": 65536 00:29:26.373 }, 00:29:26.373 { 00:29:26.373 "name": null, 00:29:26.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:26.373 "is_configured": false, 00:29:26.373 "data_offset": 0, 00:29:26.373 "data_size": 65536 00:29:26.373 }, 00:29:26.373 { 00:29:26.373 "name": "BaseBdev3", 00:29:26.373 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:26.373 "is_configured": true, 00:29:26.373 "data_offset": 0, 00:29:26.373 "data_size": 65536 00:29:26.373 }, 00:29:26.373 { 00:29:26.373 "name": "BaseBdev4", 00:29:26.373 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:26.373 "is_configured": true, 00:29:26.373 "data_offset": 0, 00:29:26.373 "data_size": 65536 00:29:26.373 } 00:29:26.373 ] 00:29:26.373 }' 00:29:26.373 12:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:26.373 12:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:26.373 12:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:26.373 12:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:26.373 12:11:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:26.630 [2024-07-21 12:11:25.313013] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:29:27.197 [2024-07-21 12:11:25.851990] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:27.197 [2024-07-21 12:11:25.957832] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:27.197 [2024-07-21 12:11:25.960501] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:27.456 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < 
timeout )) 00:29:27.456 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:27.456 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:27.456 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:27.456 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:27.456 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:27.456 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:27.456 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:27.714 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:27.714 "name": "raid_bdev1", 00:29:27.714 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:27.714 "strip_size_kb": 0, 00:29:27.714 "state": "online", 00:29:27.714 "raid_level": "raid1", 00:29:27.714 "superblock": false, 00:29:27.714 "num_base_bdevs": 4, 00:29:27.714 "num_base_bdevs_discovered": 3, 00:29:27.714 "num_base_bdevs_operational": 3, 00:29:27.714 "base_bdevs_list": [ 00:29:27.714 { 00:29:27.714 "name": "spare", 00:29:27.714 "uuid": "7d0de39a-2b8d-5670-ac98-e370828758bc", 00:29:27.714 "is_configured": true, 00:29:27.714 "data_offset": 0, 00:29:27.714 "data_size": 65536 00:29:27.714 }, 00:29:27.714 { 00:29:27.714 "name": null, 00:29:27.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.714 "is_configured": false, 00:29:27.714 "data_offset": 0, 00:29:27.714 "data_size": 65536 00:29:27.714 }, 00:29:27.714 { 00:29:27.714 "name": "BaseBdev3", 00:29:27.714 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:27.714 "is_configured": true, 00:29:27.714 "data_offset": 0, 00:29:27.714 "data_size": 65536 00:29:27.714 }, 00:29:27.714 { 00:29:27.714 "name": "BaseBdev4", 00:29:27.714 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:27.714 "is_configured": true, 00:29:27.714 "data_offset": 0, 00:29:27.714 "data_size": 65536 00:29:27.714 } 00:29:27.714 ] 00:29:27.714 }' 00:29:27.714 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:27.714 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:27.714 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:27.973 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:27.973 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # break 00:29:27.973 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:27.973 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:27.973 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:27.973 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:27.973 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:27.973 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:29:27.973 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:28.231 "name": "raid_bdev1", 00:29:28.231 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:28.231 "strip_size_kb": 0, 00:29:28.231 "state": "online", 00:29:28.231 "raid_level": "raid1", 00:29:28.231 "superblock": false, 00:29:28.231 "num_base_bdevs": 4, 00:29:28.231 "num_base_bdevs_discovered": 3, 00:29:28.231 "num_base_bdevs_operational": 3, 00:29:28.231 "base_bdevs_list": [ 00:29:28.231 { 00:29:28.231 "name": "spare", 00:29:28.231 "uuid": "7d0de39a-2b8d-5670-ac98-e370828758bc", 00:29:28.231 "is_configured": true, 00:29:28.231 "data_offset": 0, 00:29:28.231 "data_size": 65536 00:29:28.231 }, 00:29:28.231 { 00:29:28.231 "name": null, 00:29:28.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.231 "is_configured": false, 00:29:28.231 "data_offset": 0, 00:29:28.231 "data_size": 65536 00:29:28.231 }, 00:29:28.231 { 00:29:28.231 "name": "BaseBdev3", 00:29:28.231 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:28.231 "is_configured": true, 00:29:28.231 "data_offset": 0, 00:29:28.231 "data_size": 65536 00:29:28.231 }, 00:29:28.231 { 00:29:28.231 "name": "BaseBdev4", 00:29:28.231 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:28.231 "is_configured": true, 00:29:28.231 "data_offset": 0, 00:29:28.231 "data_size": 65536 00:29:28.231 } 00:29:28.231 ] 00:29:28.231 }' 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:28.231 12:11:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:28.489 12:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:28.489 "name": "raid_bdev1", 
00:29:28.489 "uuid": "b7088fbb-847a-47cb-b8f1-21ef6f2a560b", 00:29:28.489 "strip_size_kb": 0, 00:29:28.489 "state": "online", 00:29:28.489 "raid_level": "raid1", 00:29:28.489 "superblock": false, 00:29:28.489 "num_base_bdevs": 4, 00:29:28.489 "num_base_bdevs_discovered": 3, 00:29:28.489 "num_base_bdevs_operational": 3, 00:29:28.489 "base_bdevs_list": [ 00:29:28.489 { 00:29:28.489 "name": "spare", 00:29:28.489 "uuid": "7d0de39a-2b8d-5670-ac98-e370828758bc", 00:29:28.489 "is_configured": true, 00:29:28.489 "data_offset": 0, 00:29:28.489 "data_size": 65536 00:29:28.489 }, 00:29:28.489 { 00:29:28.489 "name": null, 00:29:28.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.489 "is_configured": false, 00:29:28.489 "data_offset": 0, 00:29:28.489 "data_size": 65536 00:29:28.489 }, 00:29:28.489 { 00:29:28.489 "name": "BaseBdev3", 00:29:28.489 "uuid": "6ad97cab-cc5f-5129-953f-4ffc6e55dcfa", 00:29:28.489 "is_configured": true, 00:29:28.489 "data_offset": 0, 00:29:28.489 "data_size": 65536 00:29:28.489 }, 00:29:28.489 { 00:29:28.489 "name": "BaseBdev4", 00:29:28.489 "uuid": "b322c615-9cb5-51f3-b762-fc5f03516342", 00:29:28.489 "is_configured": true, 00:29:28.489 "data_offset": 0, 00:29:28.489 "data_size": 65536 00:29:28.489 } 00:29:28.489 ] 00:29:28.489 }' 00:29:28.489 12:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:28.489 12:11:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:29.055 12:11:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:29.314 [2024-07-21 12:11:28.143046] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:29.314 [2024-07-21 12:11:28.143360] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:29.572 00:29:29.572 Latency(us) 00:29:29.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.572 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:29.572 raid_bdev1 : 11.43 97.71 293.12 0.00 0.00 14875.23 284.86 122016.12 00:29:29.572 =================================================================================================================== 00:29:29.572 Total : 97.71 293.12 0.00 0.00 14875.23 284.86 122016.12 00:29:29.572 [2024-07-21 12:11:28.247119] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:29.572 [2024-07-21 12:11:28.247317] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:29.572 0 00:29:29.572 [2024-07-21 12:11:28.247480] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:29.572 [2024-07-21 12:11:28.247498] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:29:29.572 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:29.572 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:29.829 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:29.830 
12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:29.830 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:30.086 /dev/nbd0 00:29:30.086 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:30.086 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:30.086 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:29:30.086 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:29:30.086 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:30.086 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:30.086 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.087 1+0 records in 00:29:30.087 1+0 records out 00:29:30.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045884 s, 8.9 MB/s 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:29:30.087 12:11:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # continue 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.087 12:11:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:29:30.344 /dev/nbd1 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:30.344 1+0 records in 00:29:30.344 1+0 records out 00:29:30.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000598344 s, 6.8 MB/s 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:29:30.344 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:30.345 12:11:29 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:30.345 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:30.909 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:29:31.167 /dev/nbd1 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:31.167 12:11:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@865 -- # local i 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # break 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:31.167 1+0 records in 00:29:31.167 1+0 records out 00:29:31.167 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417772 s, 9.8 MB/s 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@882 -- # size=4096 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # return 0 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.167 12:11:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:31.425 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@782 -- # killprocess 158826 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@946 -- # '[' -z 158826 ']' 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # kill -0 158826 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # uname 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 158826 00:29:31.683 killing process with pid 158826 00:29:31.683 Received shutdown signal, test time was about 13.609055 seconds 00:29:31.683 00:29:31.683 Latency(us) 00:29:31.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:31.683 =================================================================================================================== 00:29:31.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 158826' 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@965 -- # kill 158826 00:29:31.683 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # wait 158826 00:29:31.683 [2024-07-21 12:11:30.419807] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:31.683 
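Before the process is torn down, data integrity is verified the same way throughout this run: the rebuilt member and each surviving base bdev are exported as NBD block devices and compared byte for byte, which must match for a raid1 mirror. Stripped of the xtrace noise, the sequence above amounts to the following sketch (device paths and bdev names are the ones shown in the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd0        # the rebuilt target
    for bdev in BaseBdev3 BaseBdev4; do                     # survivors; BaseBdev2 was removed earlier
        "$rpc" -s "$sock" nbd_start_disk "$bdev" /dev/nbd1
        cmp -i 0 /dev/nbd0 /dev/nbd1                        # -i 0 skips nothing, since data_offset is 0 here
        "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    done
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0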
[2024-07-21 12:11:30.475527] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:31.940 ************************************ 00:29:31.940 END TEST raid_rebuild_test_io 00:29:31.940 ************************************ 00:29:31.940 12:11:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # return 0 00:29:31.940 00:29:31.940 real 0m19.109s 00:29:31.940 user 0m30.738s 00:29:31.940 sys 0m2.445s 00:29:31.940 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:29:31.940 12:11:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.198 12:11:30 bdev_raid -- bdev/bdev_raid.sh@880 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:29:32.198 12:11:30 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:29:32.198 12:11:30 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:32.198 12:11:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:32.198 ************************************ 00:29:32.198 START TEST raid_rebuild_test_sb_io 00:29:32.198 ************************************ 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 4 true true true 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local background_io=true 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local verify=true 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:29:32.198 12:11:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local strip_size 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local create_arg 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local data_offset 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # raid_pid=159339 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # waitforlisten 159339 /var/tmp/spdk-raid.sock 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@827 -- # '[' -z 159339 ']' 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@832 -- # local max_retries=100 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:29:32.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # xtrace_disable 00:29:32.198 12:11:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:32.198 [2024-07-21 12:11:30.929853] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:29:32.198 [2024-07-21 12:11:30.930410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159339 ] 00:29:32.198 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:32.198 Zero copy mechanism will not be used. 
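The sb_io variant drives I/O through bdevperf in server mode: the binary is started with -z so it only builds the bdev stack and waits, and the workload is triggered later via the perform_tests RPC. A stripped-down version of the launch recorded above is sketched below; the polling loop is an illustrative stand-in for the waitforlisten helper used by the trace, not its actual implementation.

    # 60 s of 50/50 random read/write against raid_bdev1, 3 MiB I/Os at queue depth 2,
    # with bdev_raid debug logging (-L) enabled; -z defers the run until perform_tests arrives.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
        -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Block until the RPC socket answers before issuing any bdev_* calls.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done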
00:29:32.455 [2024-07-21 12:11:31.097331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.455 [2024-07-21 12:11:31.205884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.455 [2024-07-21 12:11:31.278102] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:33.389 12:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:29:33.389 12:11:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # return 0 00:29:33.389 12:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:33.389 12:11:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:33.389 BaseBdev1_malloc 00:29:33.389 12:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:29:33.648 [2024-07-21 12:11:32.429067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:29:33.648 [2024-07-21 12:11:32.429521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:33.648 [2024-07-21 12:11:32.429619] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:29:33.648 [2024-07-21 12:11:32.429875] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:33.648 [2024-07-21 12:11:32.432598] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:33.648 [2024-07-21 12:11:32.432815] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:33.648 BaseBdev1 00:29:33.648 12:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:33.648 12:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:33.927 BaseBdev2_malloc 00:29:33.927 12:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:29:34.198 [2024-07-21 12:11:32.882844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:29:34.198 [2024-07-21 12:11:32.883231] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.198 [2024-07-21 12:11:32.883344] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:34.198 [2024-07-21 12:11:32.883628] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.198 [2024-07-21 12:11:32.886349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.198 [2024-07-21 12:11:32.886522] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:34.198 BaseBdev2 00:29:34.198 12:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:34.198 12:11:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:34.456 BaseBdev3_malloc 00:29:34.456 12:11:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:29:34.713 [2024-07-21 12:11:33.388712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:29:34.714 [2024-07-21 12:11:33.389100] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:34.714 [2024-07-21 12:11:33.389204] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:29:34.714 [2024-07-21 12:11:33.389589] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:34.714 [2024-07-21 12:11:33.392233] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:34.714 [2024-07-21 12:11:33.392413] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:34.714 BaseBdev3 00:29:34.714 12:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:29:34.714 12:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:34.972 BaseBdev4_malloc 00:29:34.972 12:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:29:35.229 [2024-07-21 12:11:33.875476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:29:35.229 [2024-07-21 12:11:33.875841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.229 [2024-07-21 12:11:33.876006] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:29:35.229 [2024-07-21 12:11:33.876178] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.229 [2024-07-21 12:11:33.878847] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:35.229 [2024-07-21 12:11:33.879046] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:35.229 BaseBdev4 00:29:35.229 12:11:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:29:35.486 spare_malloc 00:29:35.486 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:29:35.486 spare_delay 00:29:35.744 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:35.744 [2024-07-21 12:11:34.589714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:35.744 [2024-07-21 12:11:34.590062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:35.744 [2024-07-21 12:11:34.590140] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:35.744 [2024-07-21 12:11:34.590432] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:35.744 [2024-07-21 12:11:34.593117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:29:35.744 [2024-07-21 12:11:34.593345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:35.744 spare 00:29:35.744 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:29:36.002 [2024-07-21 12:11:34.809896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:36.002 [2024-07-21 12:11:34.812223] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:36.002 [2024-07-21 12:11:34.812452] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:36.002 [2024-07-21 12:11:34.812650] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:36.002 [2024-07-21 12:11:34.813126] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:29:36.002 [2024-07-21 12:11:34.813326] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:36.002 [2024-07-21 12:11:34.813531] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:29:36.002 [2024-07-21 12:11:34.814125] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:29:36.002 [2024-07-21 12:11:34.814289] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:29:36.002 [2024-07-21 12:11:34.814581] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:36.002 12:11:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:36.260 12:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:36.260 "name": "raid_bdev1", 00:29:36.260 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:36.260 "strip_size_kb": 0, 00:29:36.260 "state": "online", 00:29:36.260 "raid_level": "raid1", 00:29:36.260 "superblock": true, 00:29:36.260 "num_base_bdevs": 4, 00:29:36.260 "num_base_bdevs_discovered": 4, 00:29:36.260 
"num_base_bdevs_operational": 4, 00:29:36.260 "base_bdevs_list": [ 00:29:36.260 { 00:29:36.260 "name": "BaseBdev1", 00:29:36.260 "uuid": "a7216cfb-85cd-52e5-a911-7b1e85812d66", 00:29:36.260 "is_configured": true, 00:29:36.260 "data_offset": 2048, 00:29:36.260 "data_size": 63488 00:29:36.260 }, 00:29:36.260 { 00:29:36.260 "name": "BaseBdev2", 00:29:36.260 "uuid": "01011d84-03b0-52a6-afeb-f715242f4827", 00:29:36.260 "is_configured": true, 00:29:36.260 "data_offset": 2048, 00:29:36.260 "data_size": 63488 00:29:36.260 }, 00:29:36.260 { 00:29:36.260 "name": "BaseBdev3", 00:29:36.260 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:36.260 "is_configured": true, 00:29:36.260 "data_offset": 2048, 00:29:36.260 "data_size": 63488 00:29:36.260 }, 00:29:36.260 { 00:29:36.260 "name": "BaseBdev4", 00:29:36.260 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:36.260 "is_configured": true, 00:29:36.260 "data_offset": 2048, 00:29:36.260 "data_size": 63488 00:29:36.260 } 00:29:36.260 ] 00:29:36.260 }' 00:29:36.260 12:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:36.260 12:11:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:36.840 12:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:29:36.840 12:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:29:37.097 [2024-07-21 12:11:35.859049] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:37.097 12:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=63488 00:29:37.097 12:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:29:37.097 12:11:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.355 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:29:37.355 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@620 -- # '[' true = true ']' 00:29:37.355 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:29:37.355 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@622 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:29:37.355 [2024-07-21 12:11:36.170092] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:37.355 I/O size of 3145728 is greater than zero copy threshold (65536). 00:29:37.355 Zero copy mechanism will not be used. 00:29:37.355 Running I/O for 60 seconds... 
00:29:37.613 [2024-07-21 12:11:36.289892] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:37.613 [2024-07-21 12:11:36.298878] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:37.613 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:37.870 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:37.870 "name": "raid_bdev1", 00:29:37.870 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:37.870 "strip_size_kb": 0, 00:29:37.870 "state": "online", 00:29:37.870 "raid_level": "raid1", 00:29:37.870 "superblock": true, 00:29:37.870 "num_base_bdevs": 4, 00:29:37.870 "num_base_bdevs_discovered": 3, 00:29:37.870 "num_base_bdevs_operational": 3, 00:29:37.871 "base_bdevs_list": [ 00:29:37.871 { 00:29:37.871 "name": null, 00:29:37.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:37.871 "is_configured": false, 00:29:37.871 "data_offset": 2048, 00:29:37.871 "data_size": 63488 00:29:37.871 }, 00:29:37.871 { 00:29:37.871 "name": "BaseBdev2", 00:29:37.871 "uuid": "01011d84-03b0-52a6-afeb-f715242f4827", 00:29:37.871 "is_configured": true, 00:29:37.871 "data_offset": 2048, 00:29:37.871 "data_size": 63488 00:29:37.871 }, 00:29:37.871 { 00:29:37.871 "name": "BaseBdev3", 00:29:37.871 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:37.871 "is_configured": true, 00:29:37.871 "data_offset": 2048, 00:29:37.871 "data_size": 63488 00:29:37.871 }, 00:29:37.871 { 00:29:37.871 "name": "BaseBdev4", 00:29:37.871 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:37.871 "is_configured": true, 00:29:37.871 "data_offset": 2048, 00:29:37.871 "data_size": 63488 00:29:37.871 } 00:29:37.871 ] 00:29:37.871 }' 00:29:37.871 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:37.871 12:11:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:38.436 12:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:38.694 [2024-07-21 
12:11:37.464384] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:38.694 12:11:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # sleep 1 00:29:38.694 [2024-07-21 12:11:37.518410] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:38.694 [2024-07-21 12:11:37.521069] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:38.951 [2024-07-21 12:11:37.659023] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:38.951 [2024-07-21 12:11:37.660694] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:39.208 [2024-07-21 12:11:37.883196] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:39.208 [2024-07-21 12:11:37.883862] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:39.467 [2024-07-21 12:11:38.226522] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:39.467 [2024-07-21 12:11:38.228401] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:39.725 [2024-07-21 12:11:38.474464] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:39.725 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:39.725 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:39.725 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:39.725 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:39.725 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:39.725 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:39.725 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.983 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:39.983 "name": "raid_bdev1", 00:29:39.983 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:39.983 "strip_size_kb": 0, 00:29:39.984 "state": "online", 00:29:39.984 "raid_level": "raid1", 00:29:39.984 "superblock": true, 00:29:39.984 "num_base_bdevs": 4, 00:29:39.984 "num_base_bdevs_discovered": 4, 00:29:39.984 "num_base_bdevs_operational": 4, 00:29:39.984 "process": { 00:29:39.984 "type": "rebuild", 00:29:39.984 "target": "spare", 00:29:39.984 "progress": { 00:29:39.984 "blocks": 12288, 00:29:39.984 "percent": 19 00:29:39.984 } 00:29:39.984 }, 00:29:39.984 "base_bdevs_list": [ 00:29:39.984 { 00:29:39.984 "name": "spare", 00:29:39.984 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:39.984 "is_configured": true, 00:29:39.984 "data_offset": 2048, 00:29:39.984 "data_size": 63488 00:29:39.984 }, 00:29:39.984 { 00:29:39.984 "name": "BaseBdev2", 00:29:39.984 "uuid": "01011d84-03b0-52a6-afeb-f715242f4827", 00:29:39.984 "is_configured": true, 00:29:39.984 
"data_offset": 2048, 00:29:39.984 "data_size": 63488 00:29:39.984 }, 00:29:39.984 { 00:29:39.984 "name": "BaseBdev3", 00:29:39.984 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:39.984 "is_configured": true, 00:29:39.984 "data_offset": 2048, 00:29:39.984 "data_size": 63488 00:29:39.984 }, 00:29:39.984 { 00:29:39.984 "name": "BaseBdev4", 00:29:39.984 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:39.984 "is_configured": true, 00:29:39.984 "data_offset": 2048, 00:29:39.984 "data_size": 63488 00:29:39.984 } 00:29:39.984 ] 00:29:39.984 }' 00:29:39.984 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:40.242 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:40.242 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:40.242 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:40.242 12:11:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:40.242 [2024-07-21 12:11:39.090705] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:40.500 [2024-07-21 12:11:39.183172] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:40.500 [2024-07-21 12:11:39.185082] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:40.500 [2024-07-21 12:11:39.293573] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:40.500 [2024-07-21 12:11:39.310150] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:40.500 [2024-07-21 12:11:39.310562] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:40.500 [2024-07-21 12:11:39.310646] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:40.500 [2024-07-21 12:11:39.334138] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:40.500 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:40.758 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:40.758 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.758 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:40.758 "name": "raid_bdev1", 00:29:40.758 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:40.758 "strip_size_kb": 0, 00:29:40.758 "state": "online", 00:29:40.758 "raid_level": "raid1", 00:29:40.758 "superblock": true, 00:29:40.758 "num_base_bdevs": 4, 00:29:40.758 "num_base_bdevs_discovered": 3, 00:29:40.758 "num_base_bdevs_operational": 3, 00:29:40.758 "base_bdevs_list": [ 00:29:40.758 { 00:29:40.758 "name": null, 00:29:40.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:40.758 "is_configured": false, 00:29:40.758 "data_offset": 2048, 00:29:40.758 "data_size": 63488 00:29:40.758 }, 00:29:40.758 { 00:29:40.758 "name": "BaseBdev2", 00:29:40.758 "uuid": "01011d84-03b0-52a6-afeb-f715242f4827", 00:29:40.758 "is_configured": true, 00:29:40.758 "data_offset": 2048, 00:29:40.758 "data_size": 63488 00:29:40.758 }, 00:29:40.758 { 00:29:40.758 "name": "BaseBdev3", 00:29:40.758 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:40.758 "is_configured": true, 00:29:40.758 "data_offset": 2048, 00:29:40.758 "data_size": 63488 00:29:40.758 }, 00:29:40.758 { 00:29:40.758 "name": "BaseBdev4", 00:29:40.758 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:40.758 "is_configured": true, 00:29:40.758 "data_offset": 2048, 00:29:40.758 "data_size": 63488 00:29:40.758 } 00:29:40.758 ] 00:29:40.758 }' 00:29:40.758 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:40.758 12:11:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:41.693 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:41.693 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:41.693 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:41.693 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:41.693 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:41.693 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.693 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:41.693 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:41.693 "name": "raid_bdev1", 00:29:41.693 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:41.693 "strip_size_kb": 0, 00:29:41.693 "state": "online", 00:29:41.693 "raid_level": "raid1", 00:29:41.693 "superblock": true, 00:29:41.693 "num_base_bdevs": 4, 00:29:41.693 "num_base_bdevs_discovered": 3, 00:29:41.693 "num_base_bdevs_operational": 3, 00:29:41.693 "base_bdevs_list": [ 00:29:41.693 { 00:29:41.693 "name": null, 00:29:41.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:41.693 "is_configured": false, 00:29:41.693 "data_offset": 2048, 00:29:41.693 "data_size": 63488 00:29:41.693 }, 00:29:41.693 { 00:29:41.693 "name": "BaseBdev2", 00:29:41.693 "uuid": 
"01011d84-03b0-52a6-afeb-f715242f4827", 00:29:41.693 "is_configured": true, 00:29:41.693 "data_offset": 2048, 00:29:41.693 "data_size": 63488 00:29:41.693 }, 00:29:41.693 { 00:29:41.693 "name": "BaseBdev3", 00:29:41.693 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:41.693 "is_configured": true, 00:29:41.693 "data_offset": 2048, 00:29:41.693 "data_size": 63488 00:29:41.693 }, 00:29:41.693 { 00:29:41.693 "name": "BaseBdev4", 00:29:41.693 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:41.693 "is_configured": true, 00:29:41.693 "data_offset": 2048, 00:29:41.693 "data_size": 63488 00:29:41.693 } 00:29:41.693 ] 00:29:41.693 }' 00:29:41.693 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:41.952 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:41.952 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:41.952 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:41.952 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:42.211 [2024-07-21 12:11:40.911864] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:42.211 12:11:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # sleep 1 00:29:42.211 [2024-07-21 12:11:40.964889] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:29:42.211 [2024-07-21 12:11:40.967355] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:42.211 [2024-07-21 12:11:41.075102] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:42.211 [2024-07-21 12:11:41.076663] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:29:42.779 [2024-07-21 12:11:41.355112] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:42.779 [2024-07-21 12:11:41.355761] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:29:42.779 [2024-07-21 12:11:41.587523] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:29:43.038 [2024-07-21 12:11:41.706854] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:43.038 [2024-07-21 12:11:41.707348] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:29:43.297 12:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:43.297 12:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:43.297 12:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:43.297 12:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:43.297 12:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:43.297 12:11:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:43.297 12:11:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:43.556 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:43.556 "name": "raid_bdev1", 00:29:43.556 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:43.556 "strip_size_kb": 0, 00:29:43.556 "state": "online", 00:29:43.556 "raid_level": "raid1", 00:29:43.556 "superblock": true, 00:29:43.556 "num_base_bdevs": 4, 00:29:43.556 "num_base_bdevs_discovered": 4, 00:29:43.556 "num_base_bdevs_operational": 4, 00:29:43.556 "process": { 00:29:43.556 "type": "rebuild", 00:29:43.556 "target": "spare", 00:29:43.557 "progress": { 00:29:43.557 "blocks": 18432, 00:29:43.557 "percent": 29 00:29:43.557 } 00:29:43.557 }, 00:29:43.557 "base_bdevs_list": [ 00:29:43.557 { 00:29:43.557 "name": "spare", 00:29:43.557 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:43.557 "is_configured": true, 00:29:43.557 "data_offset": 2048, 00:29:43.557 "data_size": 63488 00:29:43.557 }, 00:29:43.557 { 00:29:43.557 "name": "BaseBdev2", 00:29:43.557 "uuid": "01011d84-03b0-52a6-afeb-f715242f4827", 00:29:43.557 "is_configured": true, 00:29:43.557 "data_offset": 2048, 00:29:43.557 "data_size": 63488 00:29:43.557 }, 00:29:43.557 { 00:29:43.557 "name": "BaseBdev3", 00:29:43.557 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:43.557 "is_configured": true, 00:29:43.557 "data_offset": 2048, 00:29:43.557 "data_size": 63488 00:29:43.557 }, 00:29:43.557 { 00:29:43.557 "name": "BaseBdev4", 00:29:43.557 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:43.557 "is_configured": true, 00:29:43.557 "data_offset": 2048, 00:29:43.557 "data_size": 63488 00:29:43.557 } 00:29:43.557 ] 00:29:43.557 }' 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:43.557 [2024-07-21 12:11:42.299470] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:43.557 [2024-07-21 12:11:42.300387] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:29:43.557 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@692 -- # '[' 4 -gt 2 ']' 00:29:43.557 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@694 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:29:43.816 [2024-07-21 
12:11:42.518984] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:29:43.816 [2024-07-21 12:11:42.590358] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:44.075 [2024-07-21 12:11:42.866348] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:29:44.075 [2024-07-21 12:11:42.866719] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006150 00:29:44.075 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@697 -- # base_bdevs[1]= 00:29:44.075 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # (( num_base_bdevs_operational-- )) 00:29:44.075 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@701 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:44.075 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.075 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:44.075 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:44.075 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.075 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.075 12:11:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.337 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.337 "name": "raid_bdev1", 00:29:44.337 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:44.337 "strip_size_kb": 0, 00:29:44.337 "state": "online", 00:29:44.337 "raid_level": "raid1", 00:29:44.337 "superblock": true, 00:29:44.337 "num_base_bdevs": 4, 00:29:44.337 "num_base_bdevs_discovered": 3, 00:29:44.337 "num_base_bdevs_operational": 3, 00:29:44.337 "process": { 00:29:44.337 "type": "rebuild", 00:29:44.337 "target": "spare", 00:29:44.337 "progress": { 00:29:44.337 "blocks": 26624, 00:29:44.337 "percent": 41 00:29:44.337 } 00:29:44.337 }, 00:29:44.337 "base_bdevs_list": [ 00:29:44.337 { 00:29:44.337 "name": "spare", 00:29:44.337 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:44.337 "is_configured": true, 00:29:44.337 "data_offset": 2048, 00:29:44.337 "data_size": 63488 00:29:44.337 }, 00:29:44.337 { 00:29:44.337 "name": null, 00:29:44.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.337 "is_configured": false, 00:29:44.337 "data_offset": 2048, 00:29:44.337 "data_size": 63488 00:29:44.337 }, 00:29:44.337 { 00:29:44.337 "name": "BaseBdev3", 00:29:44.337 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:44.337 "is_configured": true, 00:29:44.337 "data_offset": 2048, 00:29:44.337 "data_size": 63488 00:29:44.337 }, 00:29:44.337 { 00:29:44.337 "name": "BaseBdev4", 00:29:44.337 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:44.337 "is_configured": true, 00:29:44.337 "data_offset": 2048, 00:29:44.337 "data_size": 63488 00:29:44.337 } 00:29:44.337 ] 00:29:44.337 }' 00:29:44.337 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.337 [2024-07-21 12:11:43.136390] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:29:44.337 
12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:44.337 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:44.594 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.594 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@705 -- # local timeout=988 00:29:44.594 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:44.594 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:44.594 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:44.594 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:44.594 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:44.595 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:44.595 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:44.595 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:44.595 [2024-07-21 12:11:43.448646] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:29:44.852 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:44.852 "name": "raid_bdev1", 00:29:44.852 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:44.852 "strip_size_kb": 0, 00:29:44.852 "state": "online", 00:29:44.852 "raid_level": "raid1", 00:29:44.852 "superblock": true, 00:29:44.852 "num_base_bdevs": 4, 00:29:44.852 "num_base_bdevs_discovered": 3, 00:29:44.852 "num_base_bdevs_operational": 3, 00:29:44.852 "process": { 00:29:44.852 "type": "rebuild", 00:29:44.852 "target": "spare", 00:29:44.852 "progress": { 00:29:44.852 "blocks": 32768, 00:29:44.852 "percent": 51 00:29:44.852 } 00:29:44.852 }, 00:29:44.852 "base_bdevs_list": [ 00:29:44.852 { 00:29:44.852 "name": "spare", 00:29:44.852 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:44.852 "is_configured": true, 00:29:44.852 "data_offset": 2048, 00:29:44.852 "data_size": 63488 00:29:44.852 }, 00:29:44.852 { 00:29:44.852 "name": null, 00:29:44.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:44.852 "is_configured": false, 00:29:44.852 "data_offset": 2048, 00:29:44.852 "data_size": 63488 00:29:44.852 }, 00:29:44.852 { 00:29:44.852 "name": "BaseBdev3", 00:29:44.852 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:44.852 "is_configured": true, 00:29:44.852 "data_offset": 2048, 00:29:44.852 "data_size": 63488 00:29:44.853 }, 00:29:44.853 { 00:29:44.853 "name": "BaseBdev4", 00:29:44.853 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:44.853 "is_configured": true, 00:29:44.853 "data_offset": 2048, 00:29:44.853 "data_size": 63488 00:29:44.853 } 00:29:44.853 ] 00:29:44.853 }' 00:29:44.853 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:44.853 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:44.853 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- 
# jq -r '.process.target // "none"' 00:29:44.853 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:44.853 12:11:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:45.417 [2024-07-21 12:11:44.042181] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:29:45.983 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:45.983 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:45.983 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:45.984 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:45.984 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:45.984 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:45.984 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:45.984 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.984 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:45.984 "name": "raid_bdev1", 00:29:45.984 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:45.984 "strip_size_kb": 0, 00:29:45.984 "state": "online", 00:29:45.984 "raid_level": "raid1", 00:29:45.984 "superblock": true, 00:29:45.984 "num_base_bdevs": 4, 00:29:45.984 "num_base_bdevs_discovered": 3, 00:29:45.984 "num_base_bdevs_operational": 3, 00:29:45.984 "process": { 00:29:45.984 "type": "rebuild", 00:29:45.984 "target": "spare", 00:29:45.984 "progress": { 00:29:45.984 "blocks": 51200, 00:29:45.984 "percent": 80 00:29:45.984 } 00:29:45.984 }, 00:29:45.984 "base_bdevs_list": [ 00:29:45.984 { 00:29:45.984 "name": "spare", 00:29:45.984 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:45.984 "is_configured": true, 00:29:45.984 "data_offset": 2048, 00:29:45.984 "data_size": 63488 00:29:45.984 }, 00:29:45.984 { 00:29:45.984 "name": null, 00:29:45.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:45.984 "is_configured": false, 00:29:45.984 "data_offset": 2048, 00:29:45.984 "data_size": 63488 00:29:45.984 }, 00:29:45.984 { 00:29:45.984 "name": "BaseBdev3", 00:29:45.984 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:45.984 "is_configured": true, 00:29:45.984 "data_offset": 2048, 00:29:45.984 "data_size": 63488 00:29:45.984 }, 00:29:45.984 { 00:29:45.984 "name": "BaseBdev4", 00:29:45.984 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:45.984 "is_configured": true, 00:29:45.984 "data_offset": 2048, 00:29:45.984 "data_size": 63488 00:29:45.984 } 00:29:45.984 ] 00:29:45.984 }' 00:29:45.984 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:45.984 [2024-07-21 12:11:44.813119] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:29:46.242 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:46.242 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // 
"none"' 00:29:46.242 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:46.242 12:11:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@710 -- # sleep 1 00:29:46.500 [2024-07-21 12:11:45.138935] bdev_raid.c: 839:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:29:46.500 [2024-07-21 12:11:45.360048] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:29:46.757 [2024-07-21 12:11:45.455363] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:29:46.757 [2024-07-21 12:11:45.458540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.323 12:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:29:47.323 12:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:47.323 12:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.323 12:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:47.323 12:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:47.323 12:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.323 12:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.323 12:11:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.323 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.323 "name": "raid_bdev1", 00:29:47.323 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:47.323 "strip_size_kb": 0, 00:29:47.323 "state": "online", 00:29:47.323 "raid_level": "raid1", 00:29:47.323 "superblock": true, 00:29:47.323 "num_base_bdevs": 4, 00:29:47.323 "num_base_bdevs_discovered": 3, 00:29:47.323 "num_base_bdevs_operational": 3, 00:29:47.323 "base_bdevs_list": [ 00:29:47.323 { 00:29:47.323 "name": "spare", 00:29:47.323 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:47.323 "is_configured": true, 00:29:47.323 "data_offset": 2048, 00:29:47.323 "data_size": 63488 00:29:47.323 }, 00:29:47.323 { 00:29:47.323 "name": null, 00:29:47.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.323 "is_configured": false, 00:29:47.323 "data_offset": 2048, 00:29:47.323 "data_size": 63488 00:29:47.323 }, 00:29:47.323 { 00:29:47.323 "name": "BaseBdev3", 00:29:47.323 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:47.323 "is_configured": true, 00:29:47.323 "data_offset": 2048, 00:29:47.323 "data_size": 63488 00:29:47.323 }, 00:29:47.323 { 00:29:47.323 "name": "BaseBdev4", 00:29:47.323 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:47.323 "is_configured": true, 00:29:47.323 "data_offset": 2048, 00:29:47.323 "data_size": 63488 00:29:47.323 } 00:29:47.323 ] 00:29:47.323 }' 00:29:47.323 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.582 
12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # break 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:47.582 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:47.840 "name": "raid_bdev1", 00:29:47.840 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:47.840 "strip_size_kb": 0, 00:29:47.840 "state": "online", 00:29:47.840 "raid_level": "raid1", 00:29:47.840 "superblock": true, 00:29:47.840 "num_base_bdevs": 4, 00:29:47.840 "num_base_bdevs_discovered": 3, 00:29:47.840 "num_base_bdevs_operational": 3, 00:29:47.840 "base_bdevs_list": [ 00:29:47.840 { 00:29:47.840 "name": "spare", 00:29:47.840 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:47.840 "is_configured": true, 00:29:47.840 "data_offset": 2048, 00:29:47.840 "data_size": 63488 00:29:47.840 }, 00:29:47.840 { 00:29:47.840 "name": null, 00:29:47.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:47.840 "is_configured": false, 00:29:47.840 "data_offset": 2048, 00:29:47.840 "data_size": 63488 00:29:47.840 }, 00:29:47.840 { 00:29:47.840 "name": "BaseBdev3", 00:29:47.840 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:47.840 "is_configured": true, 00:29:47.840 "data_offset": 2048, 00:29:47.840 "data_size": 63488 00:29:47.840 }, 00:29:47.840 { 00:29:47.840 "name": "BaseBdev4", 00:29:47.840 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:47.840 "is_configured": true, 00:29:47.840 "data_offset": 2048, 00:29:47.840 "data_size": 63488 00:29:47.840 } 00:29:47.840 ] 00:29:47.840 }' 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:47.840 
12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.840 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:48.098 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:48.098 "name": "raid_bdev1", 00:29:48.098 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:48.098 "strip_size_kb": 0, 00:29:48.098 "state": "online", 00:29:48.098 "raid_level": "raid1", 00:29:48.098 "superblock": true, 00:29:48.098 "num_base_bdevs": 4, 00:29:48.098 "num_base_bdevs_discovered": 3, 00:29:48.098 "num_base_bdevs_operational": 3, 00:29:48.098 "base_bdevs_list": [ 00:29:48.098 { 00:29:48.098 "name": "spare", 00:29:48.098 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:48.098 "is_configured": true, 00:29:48.098 "data_offset": 2048, 00:29:48.098 "data_size": 63488 00:29:48.098 }, 00:29:48.098 { 00:29:48.098 "name": null, 00:29:48.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:48.098 "is_configured": false, 00:29:48.098 "data_offset": 2048, 00:29:48.098 "data_size": 63488 00:29:48.098 }, 00:29:48.098 { 00:29:48.098 "name": "BaseBdev3", 00:29:48.098 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:48.098 "is_configured": true, 00:29:48.098 "data_offset": 2048, 00:29:48.098 "data_size": 63488 00:29:48.098 }, 00:29:48.098 { 00:29:48.098 "name": "BaseBdev4", 00:29:48.098 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:48.098 "is_configured": true, 00:29:48.098 "data_offset": 2048, 00:29:48.098 "data_size": 63488 00:29:48.098 } 00:29:48.098 ] 00:29:48.098 }' 00:29:48.098 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:48.098 12:11:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:49.034 12:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:29:49.034 [2024-07-21 12:11:47.783758] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:49.034 [2024-07-21 12:11:47.784183] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:49.034 00:29:49.034 Latency(us) 00:29:49.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.034 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:29:49.034 raid_bdev1 : 11.68 95.36 286.07 0.00 0.00 14948.65 299.75 122969.37 00:29:49.034 =================================================================================================================== 00:29:49.034 Total : 95.36 286.07 0.00 0.00 14948.65 299.75 122969.37 00:29:49.034 [2024-07-21 12:11:47.861767] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:49.034 
[2024-07-21 12:11:47.861974] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:49.034 [2024-07-21 12:11:47.862135] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:49.034 0 00:29:49.034 [2024-07-21 12:11:47.862358] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:29:49.034 12:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:49.034 12:11:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # jq length 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@724 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:29:49.599 /dev/nbd0 00:29:49.599 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:49.857 1+0 records in 00:29:49.857 1+0 records out 00:29:49.857 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000615831 s, 6.7 MB/s 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z '' ']' 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # continue 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev3 ']' 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:49.857 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:29:49.857 /dev/nbd1 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( 
i = 1 )) 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:50.114 1+0 records in 00:29:50.114 1+0 records out 00:29:50.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418404 s, 9.8 MB/s 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.114 12:11:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # for bdev in "${base_bdevs[@]:1}" 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # '[' -z BaseBdev4 ']' 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@729 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:50.373 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:29:50.631 /dev/nbd1 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@865 -- # local i 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # break 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:50.631 1+0 records in 00:29:50.631 1+0 records out 00:29:50.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778769 s, 5.3 MB/s 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@882 -- # size=4096 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # return 0 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:50.631 12:11:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.631 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:29:50.888 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@733 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.889 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:29:51.148 12:11:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:51.415 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
spare_delay -p spare 00:29:51.691 [2024-07-21 12:11:50.435086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:51.691 [2024-07-21 12:11:50.435580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:51.691 [2024-07-21 12:11:50.435756] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:29:51.691 [2024-07-21 12:11:50.435913] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:51.691 [2024-07-21 12:11:50.438738] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:51.691 [2024-07-21 12:11:50.438923] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:51.691 [2024-07-21 12:11:50.439155] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:51.691 [2024-07-21 12:11:50.439328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:51.691 [2024-07-21 12:11:50.439731] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:51.691 [2024-07-21 12:11:50.440006] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:51.691 spare 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:51.691 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:51.691 [2024-07-21 12:11:50.540274] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:29:51.691 [2024-07-21 12:11:50.540652] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:51.691 [2024-07-21 12:11:50.541075] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:29:51.691 [2024-07-21 12:11:50.541881] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:29:51.691 [2024-07-21 12:11:50.542017] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:29:51.691 [2024-07-21 12:11:50.542306] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:51.957 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:51.957 "name": "raid_bdev1", 00:29:51.957 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:51.957 "strip_size_kb": 0, 00:29:51.957 "state": "online", 00:29:51.957 "raid_level": "raid1", 00:29:51.957 "superblock": true, 00:29:51.957 "num_base_bdevs": 4, 00:29:51.957 "num_base_bdevs_discovered": 3, 00:29:51.957 "num_base_bdevs_operational": 3, 00:29:51.957 "base_bdevs_list": [ 00:29:51.957 { 00:29:51.957 "name": "spare", 00:29:51.957 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:51.957 "is_configured": true, 00:29:51.957 "data_offset": 2048, 00:29:51.957 "data_size": 63488 00:29:51.957 }, 00:29:51.957 { 00:29:51.957 "name": null, 00:29:51.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:51.957 "is_configured": false, 00:29:51.957 "data_offset": 2048, 00:29:51.957 "data_size": 63488 00:29:51.957 }, 00:29:51.957 { 00:29:51.957 "name": "BaseBdev3", 00:29:51.957 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:51.957 "is_configured": true, 00:29:51.957 "data_offset": 2048, 00:29:51.958 "data_size": 63488 00:29:51.958 }, 00:29:51.958 { 00:29:51.958 "name": "BaseBdev4", 00:29:51.958 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:51.958 "is_configured": true, 00:29:51.958 "data_offset": 2048, 00:29:51.958 "data_size": 63488 00:29:51.958 } 00:29:51.958 ] 00:29:51.958 }' 00:29:51.958 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:51.958 12:11:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:52.524 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:52.524 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:52.524 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:52.524 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:52.524 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:52.524 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.524 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:52.781 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:52.781 "name": "raid_bdev1", 00:29:52.781 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:52.781 "strip_size_kb": 0, 00:29:52.781 "state": "online", 00:29:52.781 "raid_level": "raid1", 00:29:52.781 "superblock": true, 00:29:52.781 "num_base_bdevs": 4, 00:29:52.781 "num_base_bdevs_discovered": 3, 00:29:52.781 "num_base_bdevs_operational": 3, 00:29:52.781 "base_bdevs_list": [ 00:29:52.781 { 00:29:52.781 "name": "spare", 00:29:52.781 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:52.781 "is_configured": true, 00:29:52.781 "data_offset": 2048, 00:29:52.781 "data_size": 63488 00:29:52.781 }, 00:29:52.781 { 00:29:52.781 "name": null, 00:29:52.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:52.781 "is_configured": false, 00:29:52.781 "data_offset": 2048, 00:29:52.781 "data_size": 63488 00:29:52.781 }, 00:29:52.781 { 00:29:52.781 "name": "BaseBdev3", 00:29:52.781 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:52.781 "is_configured": true, 00:29:52.781 
"data_offset": 2048, 00:29:52.781 "data_size": 63488 00:29:52.781 }, 00:29:52.781 { 00:29:52.781 "name": "BaseBdev4", 00:29:52.781 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:52.781 "is_configured": true, 00:29:52.781 "data_offset": 2048, 00:29:52.781 "data_size": 63488 00:29:52.781 } 00:29:52.781 ] 00:29:52.782 }' 00:29:52.782 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:52.782 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:29:52.782 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:52.782 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:29:52.782 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:52.782 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:29:53.039 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:29:53.039 12:11:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:29:53.297 [2024-07-21 12:11:52.091781] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:53.297 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:53.556 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:53.556 "name": "raid_bdev1", 00:29:53.556 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:53.556 "strip_size_kb": 0, 00:29:53.556 "state": "online", 00:29:53.556 "raid_level": "raid1", 00:29:53.556 "superblock": true, 00:29:53.556 "num_base_bdevs": 4, 00:29:53.556 "num_base_bdevs_discovered": 2, 00:29:53.556 "num_base_bdevs_operational": 2, 00:29:53.556 "base_bdevs_list": [ 00:29:53.556 { 00:29:53.556 "name": null, 00:29:53.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.556 
"is_configured": false, 00:29:53.556 "data_offset": 2048, 00:29:53.556 "data_size": 63488 00:29:53.556 }, 00:29:53.556 { 00:29:53.556 "name": null, 00:29:53.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:53.556 "is_configured": false, 00:29:53.556 "data_offset": 2048, 00:29:53.556 "data_size": 63488 00:29:53.556 }, 00:29:53.556 { 00:29:53.556 "name": "BaseBdev3", 00:29:53.556 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:53.556 "is_configured": true, 00:29:53.556 "data_offset": 2048, 00:29:53.556 "data_size": 63488 00:29:53.556 }, 00:29:53.556 { 00:29:53.556 "name": "BaseBdev4", 00:29:53.556 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:53.556 "is_configured": true, 00:29:53.556 "data_offset": 2048, 00:29:53.556 "data_size": 63488 00:29:53.556 } 00:29:53.556 ] 00:29:53.556 }' 00:29:53.556 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:53.556 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:54.123 12:11:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:29:54.380 [2024-07-21 12:11:53.152744] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:54.381 [2024-07-21 12:11:53.153283] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:54.381 [2024-07-21 12:11:53.153445] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:29:54.381 [2024-07-21 12:11:53.153576] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:54.381 [2024-07-21 12:11:53.159662] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:29:54.381 [2024-07-21 12:11:53.162091] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:54.381 12:11:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # sleep 1 00:29:55.314 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:55.314 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:55.314 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:55.314 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:55.314 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:55.571 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:55.572 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:55.829 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:55.829 "name": "raid_bdev1", 00:29:55.829 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:55.829 "strip_size_kb": 0, 00:29:55.829 "state": "online", 00:29:55.829 "raid_level": "raid1", 00:29:55.829 "superblock": true, 00:29:55.829 "num_base_bdevs": 4, 00:29:55.829 "num_base_bdevs_discovered": 3, 00:29:55.829 "num_base_bdevs_operational": 3, 00:29:55.829 "process": { 00:29:55.829 "type": "rebuild", 00:29:55.829 
"target": "spare", 00:29:55.829 "progress": { 00:29:55.829 "blocks": 24576, 00:29:55.829 "percent": 38 00:29:55.829 } 00:29:55.829 }, 00:29:55.829 "base_bdevs_list": [ 00:29:55.829 { 00:29:55.829 "name": "spare", 00:29:55.829 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:55.829 "is_configured": true, 00:29:55.829 "data_offset": 2048, 00:29:55.829 "data_size": 63488 00:29:55.829 }, 00:29:55.829 { 00:29:55.829 "name": null, 00:29:55.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.829 "is_configured": false, 00:29:55.829 "data_offset": 2048, 00:29:55.829 "data_size": 63488 00:29:55.829 }, 00:29:55.829 { 00:29:55.829 "name": "BaseBdev3", 00:29:55.829 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:55.829 "is_configured": true, 00:29:55.829 "data_offset": 2048, 00:29:55.829 "data_size": 63488 00:29:55.829 }, 00:29:55.829 { 00:29:55.829 "name": "BaseBdev4", 00:29:55.829 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:55.829 "is_configured": true, 00:29:55.829 "data_offset": 2048, 00:29:55.829 "data_size": 63488 00:29:55.829 } 00:29:55.829 ] 00:29:55.829 }' 00:29:55.829 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:55.829 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:55.829 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:55.829 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:55.829 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:56.086 [2024-07-21 12:11:54.753747] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:56.086 [2024-07-21 12:11:54.772673] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:56.086 [2024-07-21 12:11:54.772936] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:56.086 [2024-07-21 12:11:54.773072] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:56.086 [2024-07-21 12:11:54.773126] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:56.086 12:11:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:56.086 12:11:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:56.344 12:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:56.344 "name": "raid_bdev1", 00:29:56.344 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:56.344 "strip_size_kb": 0, 00:29:56.344 "state": "online", 00:29:56.344 "raid_level": "raid1", 00:29:56.344 "superblock": true, 00:29:56.344 "num_base_bdevs": 4, 00:29:56.344 "num_base_bdevs_discovered": 2, 00:29:56.344 "num_base_bdevs_operational": 2, 00:29:56.344 "base_bdevs_list": [ 00:29:56.344 { 00:29:56.344 "name": null, 00:29:56.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.344 "is_configured": false, 00:29:56.344 "data_offset": 2048, 00:29:56.344 "data_size": 63488 00:29:56.344 }, 00:29:56.344 { 00:29:56.344 "name": null, 00:29:56.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.344 "is_configured": false, 00:29:56.344 "data_offset": 2048, 00:29:56.344 "data_size": 63488 00:29:56.344 }, 00:29:56.344 { 00:29:56.344 "name": "BaseBdev3", 00:29:56.344 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:56.344 "is_configured": true, 00:29:56.344 "data_offset": 2048, 00:29:56.344 "data_size": 63488 00:29:56.344 }, 00:29:56.344 { 00:29:56.344 "name": "BaseBdev4", 00:29:56.344 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:56.344 "is_configured": true, 00:29:56.344 "data_offset": 2048, 00:29:56.344 "data_size": 63488 00:29:56.344 } 00:29:56.344 ] 00:29:56.344 }' 00:29:56.344 12:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:56.344 12:11:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:56.910 12:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:29:57.168 [2024-07-21 12:11:55.971883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:29:57.168 [2024-07-21 12:11:55.972926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:57.168 [2024-07-21 12:11:55.973026] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:29:57.168 [2024-07-21 12:11:55.973168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:57.168 [2024-07-21 12:11:55.973943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:57.168 [2024-07-21 12:11:55.974129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:29:57.168 [2024-07-21 12:11:55.974393] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:29:57.168 [2024-07-21 12:11:55.974518] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:29:57.168 [2024-07-21 12:11:55.974666] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
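[Editor's note] The verify_raid_bdev_process checks that bracket the rebuild in this trace reduce to polling bdev_raid_get_bdevs and filtering the JSON with jq. A hedged stand-alone sketch of that polling step, with the socket path, bdev name and jq expressions copied from the trace (the percent fallback of 0 is an illustrative addition):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # pull the descriptor for raid_bdev1 out of the full raid bdev list
    info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # process.type is "rebuild" while a rebuild is running, otherwise absent
    echo "$info" | jq -r '.process.type // "none"'
    # process.target names the bdev being rebuilt onto (here: spare)
    echo "$info" | jq -r '.process.target // "none"'
    # progress counters, matching the blocks/percent fields shown in the trace
    echo "$info" | jq -r '.process.progress.percent // 0'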
00:29:57.168 [2024-07-21 12:11:55.974778] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:29:57.168 [2024-07-21 12:11:55.980907] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037640 00:29:57.168 spare 00:29:57.168 [2024-07-21 12:11:55.983279] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:29:57.168 12:11:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # sleep 1 00:29:58.543 12:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:29:58.543 12:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:58.543 12:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:29:58.543 12:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=spare 00:29:58.543 12:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:58.543 12:11:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:58.543 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:58.543 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:58.543 "name": "raid_bdev1", 00:29:58.543 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:58.543 "strip_size_kb": 0, 00:29:58.543 "state": "online", 00:29:58.543 "raid_level": "raid1", 00:29:58.543 "superblock": true, 00:29:58.543 "num_base_bdevs": 4, 00:29:58.543 "num_base_bdevs_discovered": 3, 00:29:58.543 "num_base_bdevs_operational": 3, 00:29:58.543 "process": { 00:29:58.543 "type": "rebuild", 00:29:58.543 "target": "spare", 00:29:58.543 "progress": { 00:29:58.543 "blocks": 24576, 00:29:58.543 "percent": 38 00:29:58.543 } 00:29:58.543 }, 00:29:58.543 "base_bdevs_list": [ 00:29:58.543 { 00:29:58.543 "name": "spare", 00:29:58.543 "uuid": "3b98c491-fb2a-5c8f-b3f8-afb87b7d4401", 00:29:58.543 "is_configured": true, 00:29:58.543 "data_offset": 2048, 00:29:58.543 "data_size": 63488 00:29:58.543 }, 00:29:58.543 { 00:29:58.543 "name": null, 00:29:58.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.543 "is_configured": false, 00:29:58.543 "data_offset": 2048, 00:29:58.543 "data_size": 63488 00:29:58.543 }, 00:29:58.543 { 00:29:58.543 "name": "BaseBdev3", 00:29:58.543 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:58.543 "is_configured": true, 00:29:58.543 "data_offset": 2048, 00:29:58.543 "data_size": 63488 00:29:58.543 }, 00:29:58.543 { 00:29:58.543 "name": "BaseBdev4", 00:29:58.543 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:58.543 "is_configured": true, 00:29:58.543 "data_offset": 2048, 00:29:58.543 "data_size": 63488 00:29:58.543 } 00:29:58.543 ] 00:29:58.543 }' 00:29:58.543 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:29:58.543 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:29:58.543 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:29:58.543 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:29:58.543 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:29:58.801 [2024-07-21 12:11:57.602214] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:59.059 [2024-07-21 12:11:57.695310] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:29:59.059 [2024-07-21 12:11:57.695623] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:59.059 [2024-07-21 12:11:57.695759] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:29:59.059 [2024-07-21 12:11:57.695804] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.059 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.317 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:29:59.317 "name": "raid_bdev1", 00:29:59.317 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:59.317 "strip_size_kb": 0, 00:29:59.317 "state": "online", 00:29:59.318 "raid_level": "raid1", 00:29:59.318 "superblock": true, 00:29:59.318 "num_base_bdevs": 4, 00:29:59.318 "num_base_bdevs_discovered": 2, 00:29:59.318 "num_base_bdevs_operational": 2, 00:29:59.318 "base_bdevs_list": [ 00:29:59.318 { 00:29:59.318 "name": null, 00:29:59.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.318 "is_configured": false, 00:29:59.318 "data_offset": 2048, 00:29:59.318 "data_size": 63488 00:29:59.318 }, 00:29:59.318 { 00:29:59.318 "name": null, 00:29:59.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.318 "is_configured": false, 00:29:59.318 "data_offset": 2048, 00:29:59.318 "data_size": 63488 00:29:59.318 }, 00:29:59.318 { 00:29:59.318 "name": "BaseBdev3", 00:29:59.318 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:59.318 "is_configured": true, 00:29:59.318 "data_offset": 2048, 00:29:59.318 "data_size": 63488 00:29:59.318 }, 00:29:59.318 { 00:29:59.318 "name": "BaseBdev4", 00:29:59.318 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:59.318 "is_configured": true, 00:29:59.318 "data_offset": 2048, 00:29:59.318 
"data_size": 63488 00:29:59.318 } 00:29:59.318 ] 00:29:59.318 }' 00:29:59.318 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:29:59.318 12:11:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:29:59.883 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:29:59.883 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:29:59.883 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:29:59.883 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:29:59.883 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:29:59.883 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:29:59.883 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:59.883 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:59.883 "name": "raid_bdev1", 00:29:59.883 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:29:59.883 "strip_size_kb": 0, 00:29:59.883 "state": "online", 00:29:59.883 "raid_level": "raid1", 00:29:59.883 "superblock": true, 00:29:59.883 "num_base_bdevs": 4, 00:29:59.883 "num_base_bdevs_discovered": 2, 00:29:59.883 "num_base_bdevs_operational": 2, 00:29:59.883 "base_bdevs_list": [ 00:29:59.883 { 00:29:59.883 "name": null, 00:29:59.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.883 "is_configured": false, 00:29:59.883 "data_offset": 2048, 00:29:59.883 "data_size": 63488 00:29:59.883 }, 00:29:59.883 { 00:29:59.883 "name": null, 00:29:59.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.883 "is_configured": false, 00:29:59.883 "data_offset": 2048, 00:29:59.883 "data_size": 63488 00:29:59.883 }, 00:29:59.883 { 00:29:59.883 "name": "BaseBdev3", 00:29:59.883 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:29:59.883 "is_configured": true, 00:29:59.883 "data_offset": 2048, 00:29:59.883 "data_size": 63488 00:29:59.883 }, 00:29:59.883 { 00:29:59.883 "name": "BaseBdev4", 00:29:59.883 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:29:59.883 "is_configured": true, 00:29:59.883 "data_offset": 2048, 00:29:59.883 "data_size": 63488 00:29:59.883 } 00:29:59.883 ] 00:29:59.883 }' 00:29:59.883 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:00.140 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:00.140 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:00.140 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:00.140 12:11:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:30:00.398 12:11:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:30:00.398 [2024-07-21 12:11:59.259205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 
00:30:00.398 [2024-07-21 12:11:59.259606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:00.398 [2024-07-21 12:11:59.259721] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:30:00.398 [2024-07-21 12:11:59.259970] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:00.398 [2024-07-21 12:11:59.260546] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:00.398 [2024-07-21 12:11:59.260719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:00.398 [2024-07-21 12:11:59.261006] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:30:00.398 [2024-07-21 12:11:59.261129] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:00.398 [2024-07-21 12:11:59.261259] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:00.398 BaseBdev1 00:30:00.656 12:11:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # sleep 1 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:01.591 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:01.849 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:01.849 "name": "raid_bdev1", 00:30:01.849 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:30:01.849 "strip_size_kb": 0, 00:30:01.849 "state": "online", 00:30:01.849 "raid_level": "raid1", 00:30:01.849 "superblock": true, 00:30:01.849 "num_base_bdevs": 4, 00:30:01.849 "num_base_bdevs_discovered": 2, 00:30:01.849 "num_base_bdevs_operational": 2, 00:30:01.849 "base_bdevs_list": [ 00:30:01.849 { 00:30:01.849 "name": null, 00:30:01.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.849 "is_configured": false, 00:30:01.849 "data_offset": 2048, 00:30:01.849 "data_size": 63488 00:30:01.849 }, 00:30:01.849 { 00:30:01.849 "name": null, 00:30:01.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.849 "is_configured": false, 00:30:01.849 "data_offset": 2048, 00:30:01.849 "data_size": 63488 
00:30:01.849 }, 00:30:01.849 { 00:30:01.849 "name": "BaseBdev3", 00:30:01.849 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:30:01.849 "is_configured": true, 00:30:01.849 "data_offset": 2048, 00:30:01.849 "data_size": 63488 00:30:01.849 }, 00:30:01.849 { 00:30:01.849 "name": "BaseBdev4", 00:30:01.849 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:30:01.849 "is_configured": true, 00:30:01.849 "data_offset": 2048, 00:30:01.849 "data_size": 63488 00:30:01.849 } 00:30:01.849 ] 00:30:01.849 }' 00:30:01.849 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:01.849 12:12:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:02.416 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:02.416 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:02.416 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:02.416 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:02.416 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:02.416 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:02.416 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:02.674 "name": "raid_bdev1", 00:30:02.674 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:30:02.674 "strip_size_kb": 0, 00:30:02.674 "state": "online", 00:30:02.674 "raid_level": "raid1", 00:30:02.674 "superblock": true, 00:30:02.674 "num_base_bdevs": 4, 00:30:02.674 "num_base_bdevs_discovered": 2, 00:30:02.674 "num_base_bdevs_operational": 2, 00:30:02.674 "base_bdevs_list": [ 00:30:02.674 { 00:30:02.674 "name": null, 00:30:02.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.674 "is_configured": false, 00:30:02.674 "data_offset": 2048, 00:30:02.674 "data_size": 63488 00:30:02.674 }, 00:30:02.674 { 00:30:02.674 "name": null, 00:30:02.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.674 "is_configured": false, 00:30:02.674 "data_offset": 2048, 00:30:02.674 "data_size": 63488 00:30:02.674 }, 00:30:02.674 { 00:30:02.674 "name": "BaseBdev3", 00:30:02.674 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:30:02.674 "is_configured": true, 00:30:02.674 "data_offset": 2048, 00:30:02.674 "data_size": 63488 00:30:02.674 }, 00:30:02.674 { 00:30:02.674 "name": "BaseBdev4", 00:30:02.674 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:30:02.674 "is_configured": true, 00:30:02.674 "data_offset": 2048, 00:30:02.674 "data_size": 63488 00:30:02.674 } 00:30:02.674 ] 00:30:02.674 }' 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@648 -- # local es=0 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:30:02.674 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:30:02.931 [2024-07-21 12:12:01.655890] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:02.931 [2024-07-21 12:12:01.656263] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:30:02.931 [2024-07-21 12:12:01.656387] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:30:02.931 request: 00:30:02.931 { 00:30:02.931 "raid_bdev": "raid_bdev1", 00:30:02.931 "base_bdev": "BaseBdev1", 00:30:02.931 "method": "bdev_raid_add_base_bdev", 00:30:02.931 "req_id": 1 00:30:02.931 } 00:30:02.931 Got JSON-RPC error response 00:30:02.931 response: 00:30:02.931 { 00:30:02.931 "code": -22, 00:30:02.931 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:30:02.931 } 00:30:02.931 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@651 -- # es=1 00:30:02.931 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:02.931 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:02.931 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:02.931 12:12:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # sleep 1 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:03.866 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:04.125 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:04.125 "name": "raid_bdev1", 00:30:04.125 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:30:04.125 "strip_size_kb": 0, 00:30:04.125 "state": "online", 00:30:04.125 "raid_level": "raid1", 00:30:04.125 "superblock": true, 00:30:04.125 "num_base_bdevs": 4, 00:30:04.125 "num_base_bdevs_discovered": 2, 00:30:04.125 "num_base_bdevs_operational": 2, 00:30:04.125 "base_bdevs_list": [ 00:30:04.125 { 00:30:04.125 "name": null, 00:30:04.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.125 "is_configured": false, 00:30:04.125 "data_offset": 2048, 00:30:04.125 "data_size": 63488 00:30:04.125 }, 00:30:04.125 { 00:30:04.125 "name": null, 00:30:04.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:04.125 "is_configured": false, 00:30:04.125 "data_offset": 2048, 00:30:04.125 "data_size": 63488 00:30:04.125 }, 00:30:04.125 { 00:30:04.125 "name": "BaseBdev3", 00:30:04.125 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:30:04.125 "is_configured": true, 00:30:04.125 "data_offset": 2048, 00:30:04.125 "data_size": 63488 00:30:04.125 }, 00:30:04.125 { 00:30:04.125 "name": "BaseBdev4", 00:30:04.125 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:30:04.125 "is_configured": true, 00:30:04.125 "data_offset": 2048, 00:30:04.125 "data_size": 63488 00:30:04.125 } 00:30:04.125 ] 00:30:04.125 }' 00:30:04.125 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:04.125 12:12:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@184 -- # local target=none 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:05.058 12:12:03 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:05.058 "name": "raid_bdev1", 00:30:05.058 "uuid": "916368ea-b529-4543-bccf-11a5c90668e1", 00:30:05.058 "strip_size_kb": 0, 00:30:05.058 "state": "online", 00:30:05.058 "raid_level": "raid1", 00:30:05.058 "superblock": true, 00:30:05.058 "num_base_bdevs": 4, 00:30:05.058 "num_base_bdevs_discovered": 2, 00:30:05.058 "num_base_bdevs_operational": 2, 00:30:05.058 "base_bdevs_list": [ 00:30:05.058 { 00:30:05.058 "name": null, 00:30:05.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.058 "is_configured": false, 00:30:05.058 "data_offset": 2048, 00:30:05.058 "data_size": 63488 00:30:05.058 }, 00:30:05.058 { 00:30:05.058 "name": null, 00:30:05.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:05.058 "is_configured": false, 00:30:05.058 "data_offset": 2048, 00:30:05.058 "data_size": 63488 00:30:05.058 }, 00:30:05.058 { 00:30:05.058 "name": "BaseBdev3", 00:30:05.058 "uuid": "4e5a9294-b0b1-5f35-bd0c-fa61f550ccb6", 00:30:05.058 "is_configured": true, 00:30:05.058 "data_offset": 2048, 00:30:05.058 "data_size": 63488 00:30:05.058 }, 00:30:05.058 { 00:30:05.058 "name": "BaseBdev4", 00:30:05.058 "uuid": "0dd6d4cd-25f8-5287-829a-3967b4d821c2", 00:30:05.058 "is_configured": true, 00:30:05.058 "data_offset": 2048, 00:30:05.058 "data_size": 63488 00:30:05.058 } 00:30:05.058 ] 00:30:05.058 }' 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:30:05.058 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@782 -- # killprocess 159339 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@946 -- # '[' -z 159339 ']' 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # kill -0 159339 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # uname 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 159339 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # echo 'killing process with pid 159339' 00:30:05.316 killing process with pid 159339 00:30:05.316 12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@965 -- # kill 159339 00:30:05.316 Received shutdown signal, test time was about 27.780335 seconds 00:30:05.316 00:30:05.316 Latency(us) 00:30:05.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.316 =================================================================================================================== 00:30:05.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:05.316 [2024-07-21 12:12:03.953290] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:05.316 
12:12:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # wait 159339 00:30:05.316 [2024-07-21 12:12:03.953463] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:05.316 [2024-07-21 12:12:03.953564] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:05.316 [2024-07-21 12:12:03.953577] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:30:05.316 [2024-07-21 12:12:04.010087] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:05.574 12:12:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # return 0 00:30:05.574 00:30:05.574 real 0m33.483s 00:30:05.574 user 0m54.574s 00:30:05.574 sys 0m3.618s 00:30:05.574 12:12:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:05.574 12:12:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:30:05.574 ************************************ 00:30:05.574 END TEST raid_rebuild_test_sb_io 00:30:05.574 ************************************ 00:30:05.574 12:12:04 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' y == y ']' 00:30:05.574 12:12:04 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:30:05.574 12:12:04 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:30:05.574 12:12:04 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:30:05.574 12:12:04 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:05.574 12:12:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:05.574 ************************************ 00:30:05.575 START TEST raid5f_state_function_test 00:30:05.575 ************************************ 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 3 false 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=160247 00:30:05.575 Process raid pid: 160247 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 160247' 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 160247 /var/tmp/spdk-raid.sock 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 160247 ']' 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:05.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:05.575 12:12:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.833 [2024-07-21 12:12:04.492013] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
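(Editor's note: the trace above shows raid5f_state_function_test launching a bare bdev_svc application and then driving it over the RPC socket. As a rough standalone sketch of the RPC sequence exercised later in this log — the paths, socket name, malloc sizes, and strip size are copied from the trace; the ordering is simplified, since the real test first issues bdev_raid_create against not-yet-existing base bdevs on purpose — the same flow looks roughly like this:)

    #!/usr/bin/env bash
    # Minimal sketch of the RPC sequence visible in this trace; not the test script itself.
    # Assumes an SPDK checkout at the CI job's path; adjust SPDK/SOCK for your environment.
    set -euo pipefail

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    RPC="$SPDK/scripts/rpc.py -s $SOCK"

    # Start the bare bdev application with the same flags used by the test.
    "$SPDK/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
    svc_pid=$!

    # Crude stand-in for the harness's waitforlisten: wait for the RPC socket to appear.
    until [ -S "$SOCK" ]; do sleep 0.1; done

    # Three 32 MiB / 512-byte-block malloc bdevs serve as the base devices.
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $RPC bdev_malloc_create 32 512 -b "$b"
    done

    # Assemble them into a raid5f volume with a 64 KiB strip size (no superblock).
    $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # Inspect the resulting state the same way verify_raid_bdev_state does in the log.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    # Tear down.
    $RPC bdev_raid_delete Existed_Raid
    kill "$svc_pid"

(With all three base bdevs present the jq filter should report "state": "online" and num_base_bdevs_discovered equal to 3, which is exactly the condition the state-function test checks below.)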
00:30:05.833 [2024-07-21 12:12:04.492377] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.833 [2024-07-21 12:12:04.682574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.092 [2024-07-21 12:12:04.774386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.092 [2024-07-21 12:12:04.849338] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:06.658 12:12:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:06.658 12:12:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:30:06.658 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:06.916 [2024-07-21 12:12:05.685162] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:06.916 [2024-07-21 12:12:05.685256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:06.916 [2024-07-21 12:12:05.685270] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:06.916 [2024-07-21 12:12:05.685294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:06.916 [2024-07-21 12:12:05.685301] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:06.916 [2024-07-21 12:12:05.685341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:06.916 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:07.174 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:07.174 "name": "Existed_Raid", 00:30:07.174 "uuid": "00000000-0000-0000-0000-000000000000", 
00:30:07.174 "strip_size_kb": 64, 00:30:07.174 "state": "configuring", 00:30:07.174 "raid_level": "raid5f", 00:30:07.174 "superblock": false, 00:30:07.174 "num_base_bdevs": 3, 00:30:07.174 "num_base_bdevs_discovered": 0, 00:30:07.174 "num_base_bdevs_operational": 3, 00:30:07.174 "base_bdevs_list": [ 00:30:07.174 { 00:30:07.174 "name": "BaseBdev1", 00:30:07.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.174 "is_configured": false, 00:30:07.174 "data_offset": 0, 00:30:07.174 "data_size": 0 00:30:07.174 }, 00:30:07.174 { 00:30:07.174 "name": "BaseBdev2", 00:30:07.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.174 "is_configured": false, 00:30:07.174 "data_offset": 0, 00:30:07.174 "data_size": 0 00:30:07.174 }, 00:30:07.174 { 00:30:07.174 "name": "BaseBdev3", 00:30:07.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.174 "is_configured": false, 00:30:07.174 "data_offset": 0, 00:30:07.174 "data_size": 0 00:30:07.174 } 00:30:07.174 ] 00:30:07.174 }' 00:30:07.174 12:12:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:07.174 12:12:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:07.740 12:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:07.998 [2024-07-21 12:12:06.841343] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:07.998 [2024-07-21 12:12:06.841408] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:30:07.998 12:12:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:08.257 [2024-07-21 12:12:07.093302] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:08.257 [2024-07-21 12:12:07.093385] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:08.257 [2024-07-21 12:12:07.093398] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:08.257 [2024-07-21 12:12:07.093419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:08.257 [2024-07-21 12:12:07.093427] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:08.257 [2024-07-21 12:12:07.093452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:08.257 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:08.515 [2024-07-21 12:12:07.307751] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:08.515 BaseBdev1 00:30:08.515 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:08.515 12:12:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:08.515 12:12:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:08.515 12:12:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:08.515 12:12:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:08.515 12:12:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:08.515 12:12:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:08.773 12:12:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:09.031 [ 00:30:09.031 { 00:30:09.031 "name": "BaseBdev1", 00:30:09.031 "aliases": [ 00:30:09.031 "076bf47b-0cf7-4029-afa9-ac10da0ce866" 00:30:09.031 ], 00:30:09.031 "product_name": "Malloc disk", 00:30:09.031 "block_size": 512, 00:30:09.031 "num_blocks": 65536, 00:30:09.031 "uuid": "076bf47b-0cf7-4029-afa9-ac10da0ce866", 00:30:09.031 "assigned_rate_limits": { 00:30:09.031 "rw_ios_per_sec": 0, 00:30:09.031 "rw_mbytes_per_sec": 0, 00:30:09.031 "r_mbytes_per_sec": 0, 00:30:09.031 "w_mbytes_per_sec": 0 00:30:09.032 }, 00:30:09.032 "claimed": true, 00:30:09.032 "claim_type": "exclusive_write", 00:30:09.032 "zoned": false, 00:30:09.032 "supported_io_types": { 00:30:09.032 "read": true, 00:30:09.032 "write": true, 00:30:09.032 "unmap": true, 00:30:09.032 "write_zeroes": true, 00:30:09.032 "flush": true, 00:30:09.032 "reset": true, 00:30:09.032 "compare": false, 00:30:09.032 "compare_and_write": false, 00:30:09.032 "abort": true, 00:30:09.032 "nvme_admin": false, 00:30:09.032 "nvme_io": false 00:30:09.032 }, 00:30:09.032 "memory_domains": [ 00:30:09.032 { 00:30:09.032 "dma_device_id": "system", 00:30:09.032 "dma_device_type": 1 00:30:09.032 }, 00:30:09.032 { 00:30:09.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.032 "dma_device_type": 2 00:30:09.032 } 00:30:09.032 ], 00:30:09.032 "driver_specific": {} 00:30:09.032 } 00:30:09.032 ] 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:09.032 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:30:09.290 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:09.290 "name": "Existed_Raid", 00:30:09.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.290 "strip_size_kb": 64, 00:30:09.290 "state": "configuring", 00:30:09.291 "raid_level": "raid5f", 00:30:09.291 "superblock": false, 00:30:09.291 "num_base_bdevs": 3, 00:30:09.291 "num_base_bdevs_discovered": 1, 00:30:09.291 "num_base_bdevs_operational": 3, 00:30:09.291 "base_bdevs_list": [ 00:30:09.291 { 00:30:09.291 "name": "BaseBdev1", 00:30:09.291 "uuid": "076bf47b-0cf7-4029-afa9-ac10da0ce866", 00:30:09.291 "is_configured": true, 00:30:09.291 "data_offset": 0, 00:30:09.291 "data_size": 65536 00:30:09.291 }, 00:30:09.291 { 00:30:09.291 "name": "BaseBdev2", 00:30:09.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.291 "is_configured": false, 00:30:09.291 "data_offset": 0, 00:30:09.291 "data_size": 0 00:30:09.291 }, 00:30:09.291 { 00:30:09.291 "name": "BaseBdev3", 00:30:09.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:09.291 "is_configured": false, 00:30:09.291 "data_offset": 0, 00:30:09.291 "data_size": 0 00:30:09.291 } 00:30:09.291 ] 00:30:09.291 }' 00:30:09.291 12:12:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:09.291 12:12:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:09.916 12:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:10.175 [2024-07-21 12:12:08.792056] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:10.175 [2024-07-21 12:12:08.792117] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:30:10.175 12:12:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:10.175 [2024-07-21 12:12:08.996113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:10.175 [2024-07-21 12:12:08.998243] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:10.175 [2024-07-21 12:12:08.998300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:10.175 [2024-07-21 12:12:08.998311] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:10.175 [2024-07-21 12:12:08.998354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:10.175 12:12:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:10.175 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:10.434 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:10.434 "name": "Existed_Raid", 00:30:10.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.434 "strip_size_kb": 64, 00:30:10.434 "state": "configuring", 00:30:10.434 "raid_level": "raid5f", 00:30:10.434 "superblock": false, 00:30:10.434 "num_base_bdevs": 3, 00:30:10.434 "num_base_bdevs_discovered": 1, 00:30:10.434 "num_base_bdevs_operational": 3, 00:30:10.434 "base_bdevs_list": [ 00:30:10.434 { 00:30:10.434 "name": "BaseBdev1", 00:30:10.434 "uuid": "076bf47b-0cf7-4029-afa9-ac10da0ce866", 00:30:10.435 "is_configured": true, 00:30:10.435 "data_offset": 0, 00:30:10.435 "data_size": 65536 00:30:10.435 }, 00:30:10.435 { 00:30:10.435 "name": "BaseBdev2", 00:30:10.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.435 "is_configured": false, 00:30:10.435 "data_offset": 0, 00:30:10.435 "data_size": 0 00:30:10.435 }, 00:30:10.435 { 00:30:10.435 "name": "BaseBdev3", 00:30:10.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.435 "is_configured": false, 00:30:10.435 "data_offset": 0, 00:30:10.435 "data_size": 0 00:30:10.435 } 00:30:10.435 ] 00:30:10.435 }' 00:30:10.435 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:10.435 12:12:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:11.007 12:12:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:11.266 [2024-07-21 12:12:10.071842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:11.266 BaseBdev2 00:30:11.266 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:11.266 12:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:11.266 12:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:11.266 12:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:11.266 12:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:11.266 12:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:11.266 12:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:11.524 12:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:11.782 [ 00:30:11.782 { 00:30:11.782 "name": "BaseBdev2", 00:30:11.782 "aliases": [ 00:30:11.782 "00e5e3d4-c23e-4c22-ae49-29ce487ed6cf" 00:30:11.782 ], 00:30:11.782 "product_name": "Malloc disk", 00:30:11.782 "block_size": 512, 00:30:11.782 "num_blocks": 65536, 00:30:11.782 "uuid": "00e5e3d4-c23e-4c22-ae49-29ce487ed6cf", 00:30:11.782 "assigned_rate_limits": { 00:30:11.782 "rw_ios_per_sec": 0, 00:30:11.782 "rw_mbytes_per_sec": 0, 00:30:11.782 "r_mbytes_per_sec": 0, 00:30:11.782 "w_mbytes_per_sec": 0 00:30:11.782 }, 00:30:11.782 "claimed": true, 00:30:11.782 "claim_type": "exclusive_write", 00:30:11.782 "zoned": false, 00:30:11.782 "supported_io_types": { 00:30:11.782 "read": true, 00:30:11.782 "write": true, 00:30:11.782 "unmap": true, 00:30:11.782 "write_zeroes": true, 00:30:11.782 "flush": true, 00:30:11.782 "reset": true, 00:30:11.782 "compare": false, 00:30:11.782 "compare_and_write": false, 00:30:11.782 "abort": true, 00:30:11.782 "nvme_admin": false, 00:30:11.782 "nvme_io": false 00:30:11.782 }, 00:30:11.782 "memory_domains": [ 00:30:11.782 { 00:30:11.782 "dma_device_id": "system", 00:30:11.782 "dma_device_type": 1 00:30:11.782 }, 00:30:11.782 { 00:30:11.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.782 "dma_device_type": 2 00:30:11.783 } 00:30:11.783 ], 00:30:11.783 "driver_specific": {} 00:30:11.783 } 00:30:11.783 ] 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:11.783 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:12.041 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:12.041 
"name": "Existed_Raid", 00:30:12.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.041 "strip_size_kb": 64, 00:30:12.041 "state": "configuring", 00:30:12.041 "raid_level": "raid5f", 00:30:12.041 "superblock": false, 00:30:12.041 "num_base_bdevs": 3, 00:30:12.041 "num_base_bdevs_discovered": 2, 00:30:12.041 "num_base_bdevs_operational": 3, 00:30:12.041 "base_bdevs_list": [ 00:30:12.041 { 00:30:12.041 "name": "BaseBdev1", 00:30:12.041 "uuid": "076bf47b-0cf7-4029-afa9-ac10da0ce866", 00:30:12.041 "is_configured": true, 00:30:12.041 "data_offset": 0, 00:30:12.041 "data_size": 65536 00:30:12.041 }, 00:30:12.041 { 00:30:12.041 "name": "BaseBdev2", 00:30:12.041 "uuid": "00e5e3d4-c23e-4c22-ae49-29ce487ed6cf", 00:30:12.041 "is_configured": true, 00:30:12.041 "data_offset": 0, 00:30:12.041 "data_size": 65536 00:30:12.041 }, 00:30:12.041 { 00:30:12.041 "name": "BaseBdev3", 00:30:12.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:12.041 "is_configured": false, 00:30:12.041 "data_offset": 0, 00:30:12.041 "data_size": 0 00:30:12.041 } 00:30:12.041 ] 00:30:12.041 }' 00:30:12.041 12:12:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:12.041 12:12:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:12.606 12:12:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:12.864 [2024-07-21 12:12:11.595966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:12.864 [2024-07-21 12:12:11.596062] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:30:12.864 [2024-07-21 12:12:11.596075] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:12.864 [2024-07-21 12:12:11.596207] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:30:12.864 [2024-07-21 12:12:11.597121] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:30:12.864 [2024-07-21 12:12:11.597147] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:30:12.864 [2024-07-21 12:12:11.597418] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:12.864 BaseBdev3 00:30:12.864 12:12:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:30:12.864 12:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:12.864 12:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:12.864 12:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:12.864 12:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:12.864 12:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:12.864 12:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:13.123 12:12:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:13.381 [ 00:30:13.381 { 00:30:13.381 "name": 
"BaseBdev3", 00:30:13.381 "aliases": [ 00:30:13.381 "700a1dd1-3b79-425e-9540-160291f13b4d" 00:30:13.381 ], 00:30:13.381 "product_name": "Malloc disk", 00:30:13.381 "block_size": 512, 00:30:13.381 "num_blocks": 65536, 00:30:13.381 "uuid": "700a1dd1-3b79-425e-9540-160291f13b4d", 00:30:13.381 "assigned_rate_limits": { 00:30:13.381 "rw_ios_per_sec": 0, 00:30:13.381 "rw_mbytes_per_sec": 0, 00:30:13.381 "r_mbytes_per_sec": 0, 00:30:13.381 "w_mbytes_per_sec": 0 00:30:13.381 }, 00:30:13.381 "claimed": true, 00:30:13.381 "claim_type": "exclusive_write", 00:30:13.381 "zoned": false, 00:30:13.381 "supported_io_types": { 00:30:13.381 "read": true, 00:30:13.381 "write": true, 00:30:13.381 "unmap": true, 00:30:13.381 "write_zeroes": true, 00:30:13.381 "flush": true, 00:30:13.381 "reset": true, 00:30:13.381 "compare": false, 00:30:13.381 "compare_and_write": false, 00:30:13.381 "abort": true, 00:30:13.381 "nvme_admin": false, 00:30:13.381 "nvme_io": false 00:30:13.381 }, 00:30:13.381 "memory_domains": [ 00:30:13.381 { 00:30:13.381 "dma_device_id": "system", 00:30:13.381 "dma_device_type": 1 00:30:13.381 }, 00:30:13.381 { 00:30:13.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:13.381 "dma_device_type": 2 00:30:13.381 } 00:30:13.381 ], 00:30:13.381 "driver_specific": {} 00:30:13.381 } 00:30:13.381 ] 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:13.381 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:13.639 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:13.639 "name": "Existed_Raid", 00:30:13.639 "uuid": "7bca259a-52ce-4036-ba49-2029f05d48ee", 00:30:13.639 "strip_size_kb": 64, 00:30:13.639 "state": "online", 00:30:13.639 "raid_level": "raid5f", 00:30:13.639 "superblock": false, 00:30:13.639 "num_base_bdevs": 3, 00:30:13.639 "num_base_bdevs_discovered": 3, 00:30:13.639 
"num_base_bdevs_operational": 3, 00:30:13.639 "base_bdevs_list": [ 00:30:13.639 { 00:30:13.639 "name": "BaseBdev1", 00:30:13.639 "uuid": "076bf47b-0cf7-4029-afa9-ac10da0ce866", 00:30:13.639 "is_configured": true, 00:30:13.639 "data_offset": 0, 00:30:13.639 "data_size": 65536 00:30:13.639 }, 00:30:13.639 { 00:30:13.639 "name": "BaseBdev2", 00:30:13.639 "uuid": "00e5e3d4-c23e-4c22-ae49-29ce487ed6cf", 00:30:13.639 "is_configured": true, 00:30:13.639 "data_offset": 0, 00:30:13.639 "data_size": 65536 00:30:13.639 }, 00:30:13.639 { 00:30:13.639 "name": "BaseBdev3", 00:30:13.639 "uuid": "700a1dd1-3b79-425e-9540-160291f13b4d", 00:30:13.639 "is_configured": true, 00:30:13.639 "data_offset": 0, 00:30:13.639 "data_size": 65536 00:30:13.639 } 00:30:13.639 ] 00:30:13.639 }' 00:30:13.639 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:13.639 12:12:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:14.203 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:14.203 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:14.203 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:14.203 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:14.203 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:14.203 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:14.203 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:14.203 12:12:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:14.461 [2024-07-21 12:12:13.256612] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:14.461 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:14.461 "name": "Existed_Raid", 00:30:14.461 "aliases": [ 00:30:14.461 "7bca259a-52ce-4036-ba49-2029f05d48ee" 00:30:14.461 ], 00:30:14.461 "product_name": "Raid Volume", 00:30:14.461 "block_size": 512, 00:30:14.461 "num_blocks": 131072, 00:30:14.461 "uuid": "7bca259a-52ce-4036-ba49-2029f05d48ee", 00:30:14.461 "assigned_rate_limits": { 00:30:14.461 "rw_ios_per_sec": 0, 00:30:14.461 "rw_mbytes_per_sec": 0, 00:30:14.461 "r_mbytes_per_sec": 0, 00:30:14.461 "w_mbytes_per_sec": 0 00:30:14.461 }, 00:30:14.461 "claimed": false, 00:30:14.461 "zoned": false, 00:30:14.461 "supported_io_types": { 00:30:14.461 "read": true, 00:30:14.461 "write": true, 00:30:14.461 "unmap": false, 00:30:14.461 "write_zeroes": true, 00:30:14.461 "flush": false, 00:30:14.461 "reset": true, 00:30:14.461 "compare": false, 00:30:14.461 "compare_and_write": false, 00:30:14.461 "abort": false, 00:30:14.461 "nvme_admin": false, 00:30:14.461 "nvme_io": false 00:30:14.461 }, 00:30:14.461 "driver_specific": { 00:30:14.461 "raid": { 00:30:14.461 "uuid": "7bca259a-52ce-4036-ba49-2029f05d48ee", 00:30:14.461 "strip_size_kb": 64, 00:30:14.461 "state": "online", 00:30:14.461 "raid_level": "raid5f", 00:30:14.461 "superblock": false, 00:30:14.461 "num_base_bdevs": 3, 00:30:14.461 "num_base_bdevs_discovered": 3, 00:30:14.461 "num_base_bdevs_operational": 3, 00:30:14.461 "base_bdevs_list": [ 
00:30:14.461 { 00:30:14.461 "name": "BaseBdev1", 00:30:14.461 "uuid": "076bf47b-0cf7-4029-afa9-ac10da0ce866", 00:30:14.461 "is_configured": true, 00:30:14.461 "data_offset": 0, 00:30:14.461 "data_size": 65536 00:30:14.461 }, 00:30:14.461 { 00:30:14.461 "name": "BaseBdev2", 00:30:14.461 "uuid": "00e5e3d4-c23e-4c22-ae49-29ce487ed6cf", 00:30:14.461 "is_configured": true, 00:30:14.461 "data_offset": 0, 00:30:14.461 "data_size": 65536 00:30:14.461 }, 00:30:14.461 { 00:30:14.461 "name": "BaseBdev3", 00:30:14.461 "uuid": "700a1dd1-3b79-425e-9540-160291f13b4d", 00:30:14.461 "is_configured": true, 00:30:14.461 "data_offset": 0, 00:30:14.461 "data_size": 65536 00:30:14.461 } 00:30:14.461 ] 00:30:14.461 } 00:30:14.461 } 00:30:14.461 }' 00:30:14.461 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:14.461 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:14.461 BaseBdev2 00:30:14.461 BaseBdev3' 00:30:14.461 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:14.718 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:14.718 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:14.975 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:14.975 "name": "BaseBdev1", 00:30:14.975 "aliases": [ 00:30:14.975 "076bf47b-0cf7-4029-afa9-ac10da0ce866" 00:30:14.975 ], 00:30:14.976 "product_name": "Malloc disk", 00:30:14.976 "block_size": 512, 00:30:14.976 "num_blocks": 65536, 00:30:14.976 "uuid": "076bf47b-0cf7-4029-afa9-ac10da0ce866", 00:30:14.976 "assigned_rate_limits": { 00:30:14.976 "rw_ios_per_sec": 0, 00:30:14.976 "rw_mbytes_per_sec": 0, 00:30:14.976 "r_mbytes_per_sec": 0, 00:30:14.976 "w_mbytes_per_sec": 0 00:30:14.976 }, 00:30:14.976 "claimed": true, 00:30:14.976 "claim_type": "exclusive_write", 00:30:14.976 "zoned": false, 00:30:14.976 "supported_io_types": { 00:30:14.976 "read": true, 00:30:14.976 "write": true, 00:30:14.976 "unmap": true, 00:30:14.976 "write_zeroes": true, 00:30:14.976 "flush": true, 00:30:14.976 "reset": true, 00:30:14.976 "compare": false, 00:30:14.976 "compare_and_write": false, 00:30:14.976 "abort": true, 00:30:14.976 "nvme_admin": false, 00:30:14.976 "nvme_io": false 00:30:14.976 }, 00:30:14.976 "memory_domains": [ 00:30:14.976 { 00:30:14.976 "dma_device_id": "system", 00:30:14.976 "dma_device_type": 1 00:30:14.976 }, 00:30:14.976 { 00:30:14.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.976 "dma_device_type": 2 00:30:14.976 } 00:30:14.976 ], 00:30:14.976 "driver_specific": {} 00:30:14.976 }' 00:30:14.976 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:14.976 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:14.976 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:14.976 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:14.976 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:14.976 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:14.976 12:12:13 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:15.233 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:15.233 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:15.233 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:15.233 12:12:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:15.233 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:15.233 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:15.233 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:15.233 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:15.489 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:15.489 "name": "BaseBdev2", 00:30:15.489 "aliases": [ 00:30:15.489 "00e5e3d4-c23e-4c22-ae49-29ce487ed6cf" 00:30:15.489 ], 00:30:15.489 "product_name": "Malloc disk", 00:30:15.489 "block_size": 512, 00:30:15.489 "num_blocks": 65536, 00:30:15.489 "uuid": "00e5e3d4-c23e-4c22-ae49-29ce487ed6cf", 00:30:15.489 "assigned_rate_limits": { 00:30:15.489 "rw_ios_per_sec": 0, 00:30:15.489 "rw_mbytes_per_sec": 0, 00:30:15.489 "r_mbytes_per_sec": 0, 00:30:15.489 "w_mbytes_per_sec": 0 00:30:15.489 }, 00:30:15.489 "claimed": true, 00:30:15.489 "claim_type": "exclusive_write", 00:30:15.489 "zoned": false, 00:30:15.490 "supported_io_types": { 00:30:15.490 "read": true, 00:30:15.490 "write": true, 00:30:15.490 "unmap": true, 00:30:15.490 "write_zeroes": true, 00:30:15.490 "flush": true, 00:30:15.490 "reset": true, 00:30:15.490 "compare": false, 00:30:15.490 "compare_and_write": false, 00:30:15.490 "abort": true, 00:30:15.490 "nvme_admin": false, 00:30:15.490 "nvme_io": false 00:30:15.490 }, 00:30:15.490 "memory_domains": [ 00:30:15.490 { 00:30:15.490 "dma_device_id": "system", 00:30:15.490 "dma_device_type": 1 00:30:15.490 }, 00:30:15.490 { 00:30:15.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:15.490 "dma_device_type": 2 00:30:15.490 } 00:30:15.490 ], 00:30:15.490 "driver_specific": {} 00:30:15.490 }' 00:30:15.490 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:15.747 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:15.747 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:15.747 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:15.747 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:15.747 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:15.747 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:15.747 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:15.747 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:15.747 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:16.005 12:12:14 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:16.005 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:16.005 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:16.005 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:16.005 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:16.263 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:16.263 "name": "BaseBdev3", 00:30:16.263 "aliases": [ 00:30:16.263 "700a1dd1-3b79-425e-9540-160291f13b4d" 00:30:16.263 ], 00:30:16.263 "product_name": "Malloc disk", 00:30:16.263 "block_size": 512, 00:30:16.263 "num_blocks": 65536, 00:30:16.263 "uuid": "700a1dd1-3b79-425e-9540-160291f13b4d", 00:30:16.263 "assigned_rate_limits": { 00:30:16.263 "rw_ios_per_sec": 0, 00:30:16.263 "rw_mbytes_per_sec": 0, 00:30:16.263 "r_mbytes_per_sec": 0, 00:30:16.263 "w_mbytes_per_sec": 0 00:30:16.263 }, 00:30:16.263 "claimed": true, 00:30:16.263 "claim_type": "exclusive_write", 00:30:16.263 "zoned": false, 00:30:16.263 "supported_io_types": { 00:30:16.263 "read": true, 00:30:16.263 "write": true, 00:30:16.263 "unmap": true, 00:30:16.263 "write_zeroes": true, 00:30:16.263 "flush": true, 00:30:16.263 "reset": true, 00:30:16.263 "compare": false, 00:30:16.263 "compare_and_write": false, 00:30:16.263 "abort": true, 00:30:16.263 "nvme_admin": false, 00:30:16.263 "nvme_io": false 00:30:16.263 }, 00:30:16.263 "memory_domains": [ 00:30:16.263 { 00:30:16.263 "dma_device_id": "system", 00:30:16.263 "dma_device_type": 1 00:30:16.263 }, 00:30:16.263 { 00:30:16.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:16.263 "dma_device_type": 2 00:30:16.263 } 00:30:16.263 ], 00:30:16.263 "driver_specific": {} 00:30:16.263 }' 00:30:16.263 12:12:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:16.263 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:16.263 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:16.263 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:16.263 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:16.522 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:16.522 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:16.522 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:16.522 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:16.522 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:16.522 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:16.522 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:16.522 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:16.780 [2024-07-21 12:12:15.555478] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:16.780 12:12:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:16.780 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:17.039 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:17.039 "name": "Existed_Raid", 00:30:17.039 "uuid": "7bca259a-52ce-4036-ba49-2029f05d48ee", 00:30:17.039 "strip_size_kb": 64, 00:30:17.039 "state": "online", 00:30:17.039 "raid_level": "raid5f", 00:30:17.039 "superblock": false, 00:30:17.039 "num_base_bdevs": 3, 00:30:17.039 "num_base_bdevs_discovered": 2, 00:30:17.039 "num_base_bdevs_operational": 2, 00:30:17.039 "base_bdevs_list": [ 00:30:17.039 { 00:30:17.039 "name": null, 00:30:17.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:17.039 "is_configured": false, 00:30:17.039 "data_offset": 0, 00:30:17.039 "data_size": 65536 00:30:17.039 }, 00:30:17.039 { 00:30:17.039 "name": "BaseBdev2", 00:30:17.039 "uuid": "00e5e3d4-c23e-4c22-ae49-29ce487ed6cf", 00:30:17.039 "is_configured": true, 00:30:17.039 "data_offset": 0, 00:30:17.039 "data_size": 65536 00:30:17.039 }, 00:30:17.039 { 00:30:17.039 "name": "BaseBdev3", 00:30:17.039 "uuid": "700a1dd1-3b79-425e-9540-160291f13b4d", 00:30:17.039 "is_configured": true, 00:30:17.039 "data_offset": 0, 00:30:17.039 "data_size": 65536 00:30:17.039 } 00:30:17.039 ] 00:30:17.039 }' 00:30:17.039 12:12:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:17.039 12:12:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.620 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:17.620 12:12:16 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:17.620 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:17.620 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:17.877 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:17.877 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:17.877 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:18.135 [2024-07-21 12:12:16.961640] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:18.135 [2024-07-21 12:12:16.961744] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:18.135 [2024-07-21 12:12:16.971318] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:18.135 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:18.135 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:18.135 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.135 12:12:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:18.393 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:18.393 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:18.393 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:18.651 [2024-07-21 12:12:17.437925] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:18.651 [2024-07-21 12:12:17.438004] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:30:18.651 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:18.651 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:18.651 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:18.651 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:18.908 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:18.908 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:18.908 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:30:18.908 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:30:18.908 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:18.908 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:19.166 BaseBdev2 00:30:19.166 12:12:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:30:19.166 12:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:19.166 12:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:19.166 12:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:19.166 12:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:19.166 12:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:19.166 12:12:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:19.423 12:12:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:19.681 [ 00:30:19.681 { 00:30:19.681 "name": "BaseBdev2", 00:30:19.681 "aliases": [ 00:30:19.681 "c32ece75-9543-45bc-9df3-43a6ae9d9e2b" 00:30:19.681 ], 00:30:19.681 "product_name": "Malloc disk", 00:30:19.681 "block_size": 512, 00:30:19.681 "num_blocks": 65536, 00:30:19.681 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:19.681 "assigned_rate_limits": { 00:30:19.681 "rw_ios_per_sec": 0, 00:30:19.681 "rw_mbytes_per_sec": 0, 00:30:19.681 "r_mbytes_per_sec": 0, 00:30:19.681 "w_mbytes_per_sec": 0 00:30:19.681 }, 00:30:19.681 "claimed": false, 00:30:19.681 "zoned": false, 00:30:19.681 "supported_io_types": { 00:30:19.681 "read": true, 00:30:19.681 "write": true, 00:30:19.681 "unmap": true, 00:30:19.681 "write_zeroes": true, 00:30:19.681 "flush": true, 00:30:19.681 "reset": true, 00:30:19.681 "compare": false, 00:30:19.681 "compare_and_write": false, 00:30:19.681 "abort": true, 00:30:19.681 "nvme_admin": false, 00:30:19.681 "nvme_io": false 00:30:19.681 }, 00:30:19.681 "memory_domains": [ 00:30:19.681 { 00:30:19.681 "dma_device_id": "system", 00:30:19.681 "dma_device_type": 1 00:30:19.681 }, 00:30:19.681 { 00:30:19.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:19.681 "dma_device_type": 2 00:30:19.681 } 00:30:19.681 ], 00:30:19.681 "driver_specific": {} 00:30:19.681 } 00:30:19.681 ] 00:30:19.681 12:12:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:19.681 12:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:19.681 12:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:19.681 12:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:19.938 BaseBdev3 00:30:19.938 12:12:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:30:19.938 12:12:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:19.938 12:12:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:19.938 12:12:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:19.938 12:12:18 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:19.938 12:12:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:19.938 12:12:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:19.939 12:12:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:20.196 [ 00:30:20.196 { 00:30:20.196 "name": "BaseBdev3", 00:30:20.196 "aliases": [ 00:30:20.196 "2e5e876a-ebea-4c27-971c-cb86082c7b63" 00:30:20.196 ], 00:30:20.196 "product_name": "Malloc disk", 00:30:20.196 "block_size": 512, 00:30:20.196 "num_blocks": 65536, 00:30:20.196 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:20.196 "assigned_rate_limits": { 00:30:20.196 "rw_ios_per_sec": 0, 00:30:20.196 "rw_mbytes_per_sec": 0, 00:30:20.196 "r_mbytes_per_sec": 0, 00:30:20.196 "w_mbytes_per_sec": 0 00:30:20.196 }, 00:30:20.196 "claimed": false, 00:30:20.196 "zoned": false, 00:30:20.196 "supported_io_types": { 00:30:20.196 "read": true, 00:30:20.196 "write": true, 00:30:20.196 "unmap": true, 00:30:20.196 "write_zeroes": true, 00:30:20.196 "flush": true, 00:30:20.196 "reset": true, 00:30:20.196 "compare": false, 00:30:20.196 "compare_and_write": false, 00:30:20.196 "abort": true, 00:30:20.196 "nvme_admin": false, 00:30:20.196 "nvme_io": false 00:30:20.196 }, 00:30:20.196 "memory_domains": [ 00:30:20.196 { 00:30:20.196 "dma_device_id": "system", 00:30:20.196 "dma_device_type": 1 00:30:20.196 }, 00:30:20.196 { 00:30:20.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:20.196 "dma_device_type": 2 00:30:20.196 } 00:30:20.196 ], 00:30:20.196 "driver_specific": {} 00:30:20.196 } 00:30:20.196 ] 00:30:20.196 12:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:20.196 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:20.196 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:20.196 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:20.454 [2024-07-21 12:12:19.210712] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:20.454 [2024-07-21 12:12:19.211311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:20.454 [2024-07-21 12:12:19.211374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:20.454 [2024-07-21 12:12:19.213304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:20.454 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:20.712 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:20.712 "name": "Existed_Raid", 00:30:20.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.712 "strip_size_kb": 64, 00:30:20.712 "state": "configuring", 00:30:20.712 "raid_level": "raid5f", 00:30:20.712 "superblock": false, 00:30:20.712 "num_base_bdevs": 3, 00:30:20.712 "num_base_bdevs_discovered": 2, 00:30:20.712 "num_base_bdevs_operational": 3, 00:30:20.712 "base_bdevs_list": [ 00:30:20.712 { 00:30:20.712 "name": "BaseBdev1", 00:30:20.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:20.712 "is_configured": false, 00:30:20.712 "data_offset": 0, 00:30:20.712 "data_size": 0 00:30:20.712 }, 00:30:20.712 { 00:30:20.712 "name": "BaseBdev2", 00:30:20.712 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:20.712 "is_configured": true, 00:30:20.712 "data_offset": 0, 00:30:20.712 "data_size": 65536 00:30:20.712 }, 00:30:20.712 { 00:30:20.712 "name": "BaseBdev3", 00:30:20.712 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:20.712 "is_configured": true, 00:30:20.712 "data_offset": 0, 00:30:20.712 "data_size": 65536 00:30:20.712 } 00:30:20.712 ] 00:30:20.712 }' 00:30:20.712 12:12:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:20.712 12:12:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:21.279 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:21.539 [2024-07-21 12:12:20.275058] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:21.539 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:21.797 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:21.797 "name": "Existed_Raid", 00:30:21.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.797 "strip_size_kb": 64, 00:30:21.797 "state": "configuring", 00:30:21.797 "raid_level": "raid5f", 00:30:21.797 "superblock": false, 00:30:21.797 "num_base_bdevs": 3, 00:30:21.797 "num_base_bdevs_discovered": 1, 00:30:21.797 "num_base_bdevs_operational": 3, 00:30:21.797 "base_bdevs_list": [ 00:30:21.797 { 00:30:21.797 "name": "BaseBdev1", 00:30:21.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:21.797 "is_configured": false, 00:30:21.797 "data_offset": 0, 00:30:21.797 "data_size": 0 00:30:21.797 }, 00:30:21.797 { 00:30:21.797 "name": null, 00:30:21.797 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:21.797 "is_configured": false, 00:30:21.797 "data_offset": 0, 00:30:21.797 "data_size": 65536 00:30:21.797 }, 00:30:21.797 { 00:30:21.797 "name": "BaseBdev3", 00:30:21.797 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:21.797 "is_configured": true, 00:30:21.797 "data_offset": 0, 00:30:21.797 "data_size": 65536 00:30:21.797 } 00:30:21.797 ] 00:30:21.797 }' 00:30:21.797 12:12:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:21.797 12:12:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:22.363 12:12:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:22.363 12:12:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:22.621 12:12:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:30:22.621 12:12:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:22.880 [2024-07-21 12:12:21.632327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:22.880 BaseBdev1 00:30:22.880 12:12:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:30:22.880 12:12:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:22.880 12:12:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:22.880 12:12:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:22.880 12:12:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:22.880 12:12:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:22.880 12:12:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:23.138 12:12:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:23.396 [ 00:30:23.396 { 00:30:23.396 "name": "BaseBdev1", 00:30:23.396 "aliases": [ 00:30:23.396 "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9" 00:30:23.396 ], 00:30:23.396 "product_name": "Malloc disk", 00:30:23.396 "block_size": 512, 00:30:23.396 "num_blocks": 65536, 00:30:23.396 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:23.396 "assigned_rate_limits": { 00:30:23.396 "rw_ios_per_sec": 0, 00:30:23.396 "rw_mbytes_per_sec": 0, 00:30:23.396 "r_mbytes_per_sec": 0, 00:30:23.396 "w_mbytes_per_sec": 0 00:30:23.396 }, 00:30:23.396 "claimed": true, 00:30:23.396 "claim_type": "exclusive_write", 00:30:23.396 "zoned": false, 00:30:23.396 "supported_io_types": { 00:30:23.396 "read": true, 00:30:23.396 "write": true, 00:30:23.396 "unmap": true, 00:30:23.396 "write_zeroes": true, 00:30:23.396 "flush": true, 00:30:23.396 "reset": true, 00:30:23.396 "compare": false, 00:30:23.396 "compare_and_write": false, 00:30:23.397 "abort": true, 00:30:23.397 "nvme_admin": false, 00:30:23.397 "nvme_io": false 00:30:23.397 }, 00:30:23.397 "memory_domains": [ 00:30:23.397 { 00:30:23.397 "dma_device_id": "system", 00:30:23.397 "dma_device_type": 1 00:30:23.397 }, 00:30:23.397 { 00:30:23.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:23.397 "dma_device_type": 2 00:30:23.397 } 00:30:23.397 ], 00:30:23.397 "driver_specific": {} 00:30:23.397 } 00:30:23.397 ] 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:23.397 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:23.655 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:23.655 "name": "Existed_Raid", 00:30:23.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:23.655 "strip_size_kb": 64, 00:30:23.655 "state": "configuring", 00:30:23.655 
"raid_level": "raid5f", 00:30:23.655 "superblock": false, 00:30:23.655 "num_base_bdevs": 3, 00:30:23.655 "num_base_bdevs_discovered": 2, 00:30:23.655 "num_base_bdevs_operational": 3, 00:30:23.655 "base_bdevs_list": [ 00:30:23.655 { 00:30:23.655 "name": "BaseBdev1", 00:30:23.655 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:23.655 "is_configured": true, 00:30:23.655 "data_offset": 0, 00:30:23.655 "data_size": 65536 00:30:23.655 }, 00:30:23.655 { 00:30:23.655 "name": null, 00:30:23.655 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:23.655 "is_configured": false, 00:30:23.655 "data_offset": 0, 00:30:23.655 "data_size": 65536 00:30:23.655 }, 00:30:23.655 { 00:30:23.655 "name": "BaseBdev3", 00:30:23.655 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:23.655 "is_configured": true, 00:30:23.655 "data_offset": 0, 00:30:23.655 "data_size": 65536 00:30:23.655 } 00:30:23.655 ] 00:30:23.655 }' 00:30:23.655 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:23.655 12:12:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.222 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.222 12:12:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:24.479 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:30:24.479 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:24.479 [2024-07-21 12:12:23.326797] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:24.479 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:24.479 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:24.479 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:24.479 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:24.480 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:24.480 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:24.480 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:24.480 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:24.480 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:24.480 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:24.737 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:24.737 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:24.995 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:24.995 "name": "Existed_Raid", 00:30:24.995 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:24.995 "strip_size_kb": 64, 00:30:24.995 "state": "configuring", 00:30:24.995 "raid_level": "raid5f", 00:30:24.995 "superblock": false, 00:30:24.995 "num_base_bdevs": 3, 00:30:24.995 "num_base_bdevs_discovered": 1, 00:30:24.995 "num_base_bdevs_operational": 3, 00:30:24.995 "base_bdevs_list": [ 00:30:24.995 { 00:30:24.995 "name": "BaseBdev1", 00:30:24.995 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:24.995 "is_configured": true, 00:30:24.995 "data_offset": 0, 00:30:24.995 "data_size": 65536 00:30:24.995 }, 00:30:24.995 { 00:30:24.995 "name": null, 00:30:24.995 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:24.995 "is_configured": false, 00:30:24.995 "data_offset": 0, 00:30:24.995 "data_size": 65536 00:30:24.995 }, 00:30:24.995 { 00:30:24.995 "name": null, 00:30:24.995 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:24.995 "is_configured": false, 00:30:24.995 "data_offset": 0, 00:30:24.995 "data_size": 65536 00:30:24.995 } 00:30:24.995 ] 00:30:24.995 }' 00:30:24.995 12:12:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:24.995 12:12:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.558 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:25.558 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:25.815 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:30:25.815 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:25.815 [2024-07-21 12:12:24.679151] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.072 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:26.328 12:12:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:26.328 "name": "Existed_Raid", 00:30:26.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:26.328 "strip_size_kb": 64, 00:30:26.328 "state": "configuring", 00:30:26.328 "raid_level": "raid5f", 00:30:26.328 "superblock": false, 00:30:26.328 "num_base_bdevs": 3, 00:30:26.328 "num_base_bdevs_discovered": 2, 00:30:26.328 "num_base_bdevs_operational": 3, 00:30:26.328 "base_bdevs_list": [ 00:30:26.328 { 00:30:26.328 "name": "BaseBdev1", 00:30:26.328 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:26.328 "is_configured": true, 00:30:26.328 "data_offset": 0, 00:30:26.328 "data_size": 65536 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "name": null, 00:30:26.328 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:26.328 "is_configured": false, 00:30:26.328 "data_offset": 0, 00:30:26.328 "data_size": 65536 00:30:26.328 }, 00:30:26.328 { 00:30:26.328 "name": "BaseBdev3", 00:30:26.328 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:26.328 "is_configured": true, 00:30:26.328 "data_offset": 0, 00:30:26.328 "data_size": 65536 00:30:26.328 } 00:30:26.328 ] 00:30:26.328 }' 00:30:26.328 12:12:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:26.328 12:12:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.891 12:12:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:26.891 12:12:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:27.147 12:12:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:30:27.147 12:12:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:27.147 [2024-07-21 12:12:25.999404] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:27.405 12:12:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:27.663 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:27.663 "name": "Existed_Raid", 00:30:27.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:27.663 "strip_size_kb": 64, 00:30:27.663 "state": "configuring", 00:30:27.663 "raid_level": "raid5f", 00:30:27.663 "superblock": false, 00:30:27.663 "num_base_bdevs": 3, 00:30:27.663 "num_base_bdevs_discovered": 1, 00:30:27.663 "num_base_bdevs_operational": 3, 00:30:27.663 "base_bdevs_list": [ 00:30:27.663 { 00:30:27.663 "name": null, 00:30:27.663 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:27.663 "is_configured": false, 00:30:27.663 "data_offset": 0, 00:30:27.663 "data_size": 65536 00:30:27.663 }, 00:30:27.663 { 00:30:27.663 "name": null, 00:30:27.663 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:27.663 "is_configured": false, 00:30:27.663 "data_offset": 0, 00:30:27.663 "data_size": 65536 00:30:27.663 }, 00:30:27.663 { 00:30:27.663 "name": "BaseBdev3", 00:30:27.663 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:27.663 "is_configured": true, 00:30:27.663 "data_offset": 0, 00:30:27.663 "data_size": 65536 00:30:27.663 } 00:30:27.663 ] 00:30:27.663 }' 00:30:27.663 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:27.663 12:12:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.227 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:28.227 12:12:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.485 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:30:28.485 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:28.744 [2024-07-21 12:12:27.371604] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:28.744 "name": "Existed_Raid", 00:30:28.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:28.744 "strip_size_kb": 64, 00:30:28.744 "state": "configuring", 00:30:28.744 "raid_level": "raid5f", 00:30:28.744 "superblock": false, 00:30:28.744 "num_base_bdevs": 3, 00:30:28.744 "num_base_bdevs_discovered": 2, 00:30:28.744 "num_base_bdevs_operational": 3, 00:30:28.744 "base_bdevs_list": [ 00:30:28.744 { 00:30:28.744 "name": null, 00:30:28.744 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:28.744 "is_configured": false, 00:30:28.744 "data_offset": 0, 00:30:28.744 "data_size": 65536 00:30:28.744 }, 00:30:28.744 { 00:30:28.744 "name": "BaseBdev2", 00:30:28.744 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:28.744 "is_configured": true, 00:30:28.744 "data_offset": 0, 00:30:28.744 "data_size": 65536 00:30:28.744 }, 00:30:28.744 { 00:30:28.744 "name": "BaseBdev3", 00:30:28.744 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:28.744 "is_configured": true, 00:30:28.744 "data_offset": 0, 00:30:28.744 "data_size": 65536 00:30:28.744 } 00:30:28.744 ] 00:30:28.744 }' 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:28.744 12:12:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.376 12:12:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.376 12:12:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:29.635 12:12:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:30:29.635 12:12:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:29.635 12:12:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:29.893 12:12:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9 00:30:29.893 [2024-07-21 12:12:28.727125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:29.893 [2024-07-21 12:12:28.727178] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:30:29.893 [2024-07-21 12:12:28.727188] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:30:29.893 [2024-07-21 12:12:28.727273] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:30:29.893 [2024-07-21 12:12:28.727976] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:30:29.893 [2024-07-21 12:12:28.728000] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008780 00:30:29.893 [2024-07-21 12:12:28.728213] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:29.893 NewBaseBdev 00:30:29.893 12:12:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:30:29.893 12:12:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:30:29.893 12:12:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:29.893 12:12:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:30:29.893 12:12:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:29.893 12:12:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:29.893 12:12:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:30.152 12:12:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:30.411 [ 00:30:30.411 { 00:30:30.411 "name": "NewBaseBdev", 00:30:30.411 "aliases": [ 00:30:30.411 "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9" 00:30:30.411 ], 00:30:30.411 "product_name": "Malloc disk", 00:30:30.411 "block_size": 512, 00:30:30.411 "num_blocks": 65536, 00:30:30.411 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:30.411 "assigned_rate_limits": { 00:30:30.411 "rw_ios_per_sec": 0, 00:30:30.411 "rw_mbytes_per_sec": 0, 00:30:30.411 "r_mbytes_per_sec": 0, 00:30:30.411 "w_mbytes_per_sec": 0 00:30:30.411 }, 00:30:30.411 "claimed": true, 00:30:30.411 "claim_type": "exclusive_write", 00:30:30.411 "zoned": false, 00:30:30.411 "supported_io_types": { 00:30:30.411 "read": true, 00:30:30.411 "write": true, 00:30:30.411 "unmap": true, 00:30:30.411 "write_zeroes": true, 00:30:30.411 "flush": true, 00:30:30.411 "reset": true, 00:30:30.411 "compare": false, 00:30:30.411 "compare_and_write": false, 00:30:30.411 "abort": true, 00:30:30.411 "nvme_admin": false, 00:30:30.411 "nvme_io": false 00:30:30.411 }, 00:30:30.411 "memory_domains": [ 00:30:30.411 { 00:30:30.411 "dma_device_id": "system", 00:30:30.411 "dma_device_type": 1 00:30:30.411 }, 00:30:30.411 { 00:30:30.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:30.411 "dma_device_type": 2 00:30:30.411 } 00:30:30.411 ], 00:30:30.411 "driver_specific": {} 00:30:30.411 } 00:30:30.411 ] 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:30.411 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:30.669 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:30.669 "name": "Existed_Raid", 00:30:30.669 "uuid": "14e49376-351e-460c-99b0-4e6963b35044", 00:30:30.669 "strip_size_kb": 64, 00:30:30.669 "state": "online", 00:30:30.669 "raid_level": "raid5f", 00:30:30.669 "superblock": false, 00:30:30.669 "num_base_bdevs": 3, 00:30:30.669 "num_base_bdevs_discovered": 3, 00:30:30.669 "num_base_bdevs_operational": 3, 00:30:30.669 "base_bdevs_list": [ 00:30:30.669 { 00:30:30.669 "name": "NewBaseBdev", 00:30:30.669 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:30.670 "is_configured": true, 00:30:30.670 "data_offset": 0, 00:30:30.670 "data_size": 65536 00:30:30.670 }, 00:30:30.670 { 00:30:30.670 "name": "BaseBdev2", 00:30:30.670 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:30.670 "is_configured": true, 00:30:30.670 "data_offset": 0, 00:30:30.670 "data_size": 65536 00:30:30.670 }, 00:30:30.670 { 00:30:30.670 "name": "BaseBdev3", 00:30:30.670 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:30.670 "is_configured": true, 00:30:30.670 "data_offset": 0, 00:30:30.670 "data_size": 65536 00:30:30.670 } 00:30:30.670 ] 00:30:30.670 }' 00:30:30.670 12:12:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:30.670 12:12:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:31.236 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:30:31.236 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:31.236 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:31.236 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:31.236 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:31.236 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:30:31.236 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:31.236 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:31.495 [2024-07-21 12:12:30.227591] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:31.495 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:31.495 "name": "Existed_Raid", 00:30:31.495 "aliases": [ 00:30:31.495 "14e49376-351e-460c-99b0-4e6963b35044" 00:30:31.495 ], 00:30:31.495 "product_name": "Raid Volume", 00:30:31.495 "block_size": 512, 00:30:31.495 "num_blocks": 131072, 00:30:31.495 "uuid": "14e49376-351e-460c-99b0-4e6963b35044", 00:30:31.495 "assigned_rate_limits": { 00:30:31.495 "rw_ios_per_sec": 0, 00:30:31.495 "rw_mbytes_per_sec": 0, 00:30:31.495 "r_mbytes_per_sec": 0, 00:30:31.495 
"w_mbytes_per_sec": 0 00:30:31.495 }, 00:30:31.495 "claimed": false, 00:30:31.495 "zoned": false, 00:30:31.495 "supported_io_types": { 00:30:31.495 "read": true, 00:30:31.495 "write": true, 00:30:31.495 "unmap": false, 00:30:31.495 "write_zeroes": true, 00:30:31.495 "flush": false, 00:30:31.495 "reset": true, 00:30:31.495 "compare": false, 00:30:31.495 "compare_and_write": false, 00:30:31.495 "abort": false, 00:30:31.495 "nvme_admin": false, 00:30:31.495 "nvme_io": false 00:30:31.495 }, 00:30:31.495 "driver_specific": { 00:30:31.495 "raid": { 00:30:31.495 "uuid": "14e49376-351e-460c-99b0-4e6963b35044", 00:30:31.495 "strip_size_kb": 64, 00:30:31.495 "state": "online", 00:30:31.495 "raid_level": "raid5f", 00:30:31.495 "superblock": false, 00:30:31.495 "num_base_bdevs": 3, 00:30:31.495 "num_base_bdevs_discovered": 3, 00:30:31.495 "num_base_bdevs_operational": 3, 00:30:31.495 "base_bdevs_list": [ 00:30:31.495 { 00:30:31.495 "name": "NewBaseBdev", 00:30:31.495 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:31.495 "is_configured": true, 00:30:31.495 "data_offset": 0, 00:30:31.495 "data_size": 65536 00:30:31.495 }, 00:30:31.495 { 00:30:31.495 "name": "BaseBdev2", 00:30:31.495 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:31.495 "is_configured": true, 00:30:31.495 "data_offset": 0, 00:30:31.495 "data_size": 65536 00:30:31.495 }, 00:30:31.495 { 00:30:31.495 "name": "BaseBdev3", 00:30:31.495 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:31.495 "is_configured": true, 00:30:31.495 "data_offset": 0, 00:30:31.495 "data_size": 65536 00:30:31.495 } 00:30:31.495 ] 00:30:31.495 } 00:30:31.495 } 00:30:31.495 }' 00:30:31.495 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:31.495 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:30:31.495 BaseBdev2 00:30:31.495 BaseBdev3' 00:30:31.495 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:31.495 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:30:31.495 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:31.754 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:31.754 "name": "NewBaseBdev", 00:30:31.754 "aliases": [ 00:30:31.754 "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9" 00:30:31.754 ], 00:30:31.754 "product_name": "Malloc disk", 00:30:31.754 "block_size": 512, 00:30:31.754 "num_blocks": 65536, 00:30:31.754 "uuid": "b2af11f0-aa7e-4ffe-9304-7dae9e39f3a9", 00:30:31.754 "assigned_rate_limits": { 00:30:31.754 "rw_ios_per_sec": 0, 00:30:31.754 "rw_mbytes_per_sec": 0, 00:30:31.754 "r_mbytes_per_sec": 0, 00:30:31.754 "w_mbytes_per_sec": 0 00:30:31.754 }, 00:30:31.754 "claimed": true, 00:30:31.754 "claim_type": "exclusive_write", 00:30:31.754 "zoned": false, 00:30:31.754 "supported_io_types": { 00:30:31.754 "read": true, 00:30:31.754 "write": true, 00:30:31.754 "unmap": true, 00:30:31.754 "write_zeroes": true, 00:30:31.754 "flush": true, 00:30:31.754 "reset": true, 00:30:31.754 "compare": false, 00:30:31.754 "compare_and_write": false, 00:30:31.754 "abort": true, 00:30:31.754 "nvme_admin": false, 00:30:31.754 "nvme_io": false 00:30:31.754 }, 00:30:31.754 "memory_domains": [ 00:30:31.754 { 00:30:31.754 
"dma_device_id": "system", 00:30:31.754 "dma_device_type": 1 00:30:31.754 }, 00:30:31.754 { 00:30:31.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:31.754 "dma_device_type": 2 00:30:31.754 } 00:30:31.754 ], 00:30:31.754 "driver_specific": {} 00:30:31.754 }' 00:30:31.754 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:31.754 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:32.012 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:32.012 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:32.012 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:32.012 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:32.012 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:32.012 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:32.012 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:32.012 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:32.271 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:32.271 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:32.271 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:32.271 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:32.271 12:12:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:32.529 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:32.529 "name": "BaseBdev2", 00:30:32.529 "aliases": [ 00:30:32.529 "c32ece75-9543-45bc-9df3-43a6ae9d9e2b" 00:30:32.529 ], 00:30:32.529 "product_name": "Malloc disk", 00:30:32.529 "block_size": 512, 00:30:32.529 "num_blocks": 65536, 00:30:32.529 "uuid": "c32ece75-9543-45bc-9df3-43a6ae9d9e2b", 00:30:32.529 "assigned_rate_limits": { 00:30:32.529 "rw_ios_per_sec": 0, 00:30:32.529 "rw_mbytes_per_sec": 0, 00:30:32.529 "r_mbytes_per_sec": 0, 00:30:32.529 "w_mbytes_per_sec": 0 00:30:32.529 }, 00:30:32.529 "claimed": true, 00:30:32.529 "claim_type": "exclusive_write", 00:30:32.529 "zoned": false, 00:30:32.529 "supported_io_types": { 00:30:32.529 "read": true, 00:30:32.529 "write": true, 00:30:32.529 "unmap": true, 00:30:32.529 "write_zeroes": true, 00:30:32.529 "flush": true, 00:30:32.529 "reset": true, 00:30:32.529 "compare": false, 00:30:32.529 "compare_and_write": false, 00:30:32.529 "abort": true, 00:30:32.529 "nvme_admin": false, 00:30:32.529 "nvme_io": false 00:30:32.529 }, 00:30:32.529 "memory_domains": [ 00:30:32.529 { 00:30:32.529 "dma_device_id": "system", 00:30:32.529 "dma_device_type": 1 00:30:32.529 }, 00:30:32.529 { 00:30:32.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:32.529 "dma_device_type": 2 00:30:32.529 } 00:30:32.529 ], 00:30:32.529 "driver_specific": {} 00:30:32.529 }' 00:30:32.529 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:32.529 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:30:32.529 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:32.529 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:32.529 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:32.529 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:32.529 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:32.787 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:32.787 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:32.787 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:32.787 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:32.787 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:32.787 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:32.787 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:32.787 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:33.045 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:33.045 "name": "BaseBdev3", 00:30:33.045 "aliases": [ 00:30:33.045 "2e5e876a-ebea-4c27-971c-cb86082c7b63" 00:30:33.045 ], 00:30:33.045 "product_name": "Malloc disk", 00:30:33.045 "block_size": 512, 00:30:33.045 "num_blocks": 65536, 00:30:33.045 "uuid": "2e5e876a-ebea-4c27-971c-cb86082c7b63", 00:30:33.045 "assigned_rate_limits": { 00:30:33.045 "rw_ios_per_sec": 0, 00:30:33.045 "rw_mbytes_per_sec": 0, 00:30:33.045 "r_mbytes_per_sec": 0, 00:30:33.045 "w_mbytes_per_sec": 0 00:30:33.045 }, 00:30:33.045 "claimed": true, 00:30:33.045 "claim_type": "exclusive_write", 00:30:33.045 "zoned": false, 00:30:33.045 "supported_io_types": { 00:30:33.045 "read": true, 00:30:33.045 "write": true, 00:30:33.045 "unmap": true, 00:30:33.045 "write_zeroes": true, 00:30:33.045 "flush": true, 00:30:33.045 "reset": true, 00:30:33.045 "compare": false, 00:30:33.045 "compare_and_write": false, 00:30:33.045 "abort": true, 00:30:33.045 "nvme_admin": false, 00:30:33.045 "nvme_io": false 00:30:33.045 }, 00:30:33.045 "memory_domains": [ 00:30:33.045 { 00:30:33.045 "dma_device_id": "system", 00:30:33.045 "dma_device_type": 1 00:30:33.045 }, 00:30:33.045 { 00:30:33.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:33.045 "dma_device_type": 2 00:30:33.045 } 00:30:33.045 ], 00:30:33.045 "driver_specific": {} 00:30:33.045 }' 00:30:33.045 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.045 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:33.303 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:33.303 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:33.303 12:12:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:33.303 12:12:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:33.303 12:12:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:33.303 12:12:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:33.303 12:12:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:33.303 12:12:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:33.560 12:12:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:33.561 12:12:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:33.561 12:12:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:33.561 [2024-07-21 12:12:32.411902] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:33.561 [2024-07-21 12:12:32.412051] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:33.561 [2024-07-21 12:12:32.412249] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:33.561 [2024-07-21 12:12:32.412693] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:33.561 [2024-07-21 12:12:32.412817] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name Existed_Raid, state offline 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 160247 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 160247 ']' 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # kill -0 160247 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # uname 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 160247 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 160247' 00:30:33.819 killing process with pid 160247 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@965 -- # kill 160247 00:30:33.819 [2024-07-21 12:12:32.447477] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:33.819 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # wait 160247 00:30:33.819 [2024-07-21 12:12:32.483722] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:34.077 12:12:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:30:34.077 00:30:34.077 real 0m28.375s 00:30:34.077 user 0m53.690s 00:30:34.077 sys 0m3.593s 00:30:34.077 12:12:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:30:34.077 ************************************ 00:30:34.077 END TEST raid5f_state_function_test 00:30:34.077 ************************************ 00:30:34.077 12:12:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:34.077 12:12:32 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:30:34.077 12:12:32 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:30:34.078 12:12:32 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:34.078 12:12:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:34.078 ************************************ 00:30:34.078 START TEST raid5f_state_function_test_sb 00:30:34.078 ************************************ 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 3 true 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = 
true ']' 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=161204 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 161204' 00:30:34.078 Process raid pid: 161204 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 161204 /var/tmp/spdk-raid.sock 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 161204 ']' 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:30:34.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:30:34.078 12:12:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:34.078 [2024-07-21 12:12:32.911648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:30:34.078 [2024-07-21 12:12:32.912104] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.335 [2024-07-21 12:12:33.078002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:34.335 [2024-07-21 12:12:33.148045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:34.592 [2024-07-21 12:12:33.219168] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:35.158 12:12:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:30:35.158 12:12:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:30:35.159 12:12:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:35.416 [2024-07-21 12:12:34.152708] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:35.416 [2024-07-21 12:12:34.152956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:35.416 [2024-07-21 12:12:34.153120] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:35.416 [2024-07-21 12:12:34.153204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:35.416 [2024-07-21 12:12:34.153348] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:35.416 [2024-07-21 12:12:34.153439] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:35.416 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:35.674 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:35.674 "name": "Existed_Raid", 00:30:35.674 "uuid": "42ad7c43-e3a1-433c-95a8-88c4d02404ca", 00:30:35.674 "strip_size_kb": 64, 00:30:35.674 "state": "configuring", 00:30:35.674 "raid_level": "raid5f", 00:30:35.674 "superblock": true, 00:30:35.674 "num_base_bdevs": 3, 00:30:35.674 "num_base_bdevs_discovered": 0, 00:30:35.674 "num_base_bdevs_operational": 3, 00:30:35.674 "base_bdevs_list": [ 00:30:35.674 { 00:30:35.674 "name": "BaseBdev1", 00:30:35.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.674 "is_configured": false, 00:30:35.674 "data_offset": 0, 00:30:35.674 "data_size": 0 00:30:35.674 }, 00:30:35.674 { 00:30:35.674 "name": "BaseBdev2", 00:30:35.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.674 "is_configured": false, 00:30:35.674 "data_offset": 0, 00:30:35.674 "data_size": 0 00:30:35.674 }, 00:30:35.674 { 00:30:35.674 "name": "BaseBdev3", 00:30:35.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.674 "is_configured": false, 00:30:35.674 "data_offset": 0, 00:30:35.674 "data_size": 0 00:30:35.674 } 00:30:35.674 ] 00:30:35.674 }' 00:30:35.674 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:35.674 12:12:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:36.237 12:12:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:36.494 [2024-07-21 12:12:35.228729] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:36.494 [2024-07-21 12:12:35.228942] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:30:36.494 12:12:35 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:36.752 [2024-07-21 12:12:35.496792] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:36.752 [2024-07-21 12:12:35.497040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:36.752 [2024-07-21 12:12:35.497160] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:36.752 [2024-07-21 12:12:35.497219] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:36.752 [2024-07-21 12:12:35.497315] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:36.752 [2024-07-21 12:12:35.497377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:36.752 12:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:37.010 [2024-07-21 12:12:35.718743] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:37.010 BaseBdev1 00:30:37.010 12:12:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:30:37.010 12:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:37.010 12:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:37.010 12:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:37.010 12:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:37.010 12:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:37.010 12:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:37.268 12:12:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:37.526 [ 00:30:37.526 { 00:30:37.526 "name": "BaseBdev1", 00:30:37.526 "aliases": [ 00:30:37.526 "0fd44856-2ab7-4d18-b393-a6504a42f275" 00:30:37.526 ], 00:30:37.526 "product_name": "Malloc disk", 00:30:37.526 "block_size": 512, 00:30:37.526 "num_blocks": 65536, 00:30:37.526 "uuid": "0fd44856-2ab7-4d18-b393-a6504a42f275", 00:30:37.526 "assigned_rate_limits": { 00:30:37.526 "rw_ios_per_sec": 0, 00:30:37.526 "rw_mbytes_per_sec": 0, 00:30:37.526 "r_mbytes_per_sec": 0, 00:30:37.526 "w_mbytes_per_sec": 0 00:30:37.526 }, 00:30:37.526 "claimed": true, 00:30:37.526 "claim_type": "exclusive_write", 00:30:37.526 "zoned": false, 00:30:37.526 "supported_io_types": { 00:30:37.526 "read": true, 00:30:37.526 "write": true, 00:30:37.526 "unmap": true, 00:30:37.526 "write_zeroes": true, 00:30:37.526 "flush": true, 00:30:37.526 "reset": true, 00:30:37.526 "compare": false, 00:30:37.526 "compare_and_write": false, 00:30:37.526 "abort": true, 00:30:37.526 "nvme_admin": false, 00:30:37.526 "nvme_io": false 00:30:37.526 }, 00:30:37.526 "memory_domains": [ 00:30:37.526 { 00:30:37.526 "dma_device_id": "system", 00:30:37.526 
"dma_device_type": 1 00:30:37.526 }, 00:30:37.526 { 00:30:37.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:37.526 "dma_device_type": 2 00:30:37.526 } 00:30:37.526 ], 00:30:37.526 "driver_specific": {} 00:30:37.526 } 00:30:37.526 ] 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:37.526 "name": "Existed_Raid", 00:30:37.526 "uuid": "05a56c27-e214-4d30-b6ad-f01fbf445757", 00:30:37.526 "strip_size_kb": 64, 00:30:37.526 "state": "configuring", 00:30:37.526 "raid_level": "raid5f", 00:30:37.526 "superblock": true, 00:30:37.526 "num_base_bdevs": 3, 00:30:37.526 "num_base_bdevs_discovered": 1, 00:30:37.526 "num_base_bdevs_operational": 3, 00:30:37.526 "base_bdevs_list": [ 00:30:37.526 { 00:30:37.526 "name": "BaseBdev1", 00:30:37.526 "uuid": "0fd44856-2ab7-4d18-b393-a6504a42f275", 00:30:37.526 "is_configured": true, 00:30:37.526 "data_offset": 2048, 00:30:37.526 "data_size": 63488 00:30:37.526 }, 00:30:37.526 { 00:30:37.526 "name": "BaseBdev2", 00:30:37.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.526 "is_configured": false, 00:30:37.526 "data_offset": 0, 00:30:37.526 "data_size": 0 00:30:37.526 }, 00:30:37.526 { 00:30:37.526 "name": "BaseBdev3", 00:30:37.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.526 "is_configured": false, 00:30:37.526 "data_offset": 0, 00:30:37.526 "data_size": 0 00:30:37.526 } 00:30:37.526 ] 00:30:37.526 }' 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:37.526 12:12:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:38.458 12:12:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:30:38.458 [2024-07-21 12:12:37.211072] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:38.458 [2024-07-21 12:12:37.211261] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:30:38.458 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:38.716 [2024-07-21 12:12:37.451187] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:38.716 [2024-07-21 12:12:37.453334] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:38.716 [2024-07-21 12:12:37.453526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:38.716 [2024-07-21 12:12:37.453635] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:38.716 [2024-07-21 12:12:37.453719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:38.716 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.973 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:38.974 "name": "Existed_Raid", 00:30:38.974 "uuid": "804c11f0-4631-45b9-99db-e3e6fed820c3", 00:30:38.974 "strip_size_kb": 64, 00:30:38.974 "state": "configuring", 00:30:38.974 "raid_level": "raid5f", 00:30:38.974 "superblock": true, 00:30:38.974 "num_base_bdevs": 3, 00:30:38.974 "num_base_bdevs_discovered": 1, 00:30:38.974 "num_base_bdevs_operational": 3, 00:30:38.974 "base_bdevs_list": [ 00:30:38.974 { 00:30:38.974 "name": "BaseBdev1", 00:30:38.974 "uuid": "0fd44856-2ab7-4d18-b393-a6504a42f275", 00:30:38.974 "is_configured": true, 00:30:38.974 
"data_offset": 2048, 00:30:38.974 "data_size": 63488 00:30:38.974 }, 00:30:38.974 { 00:30:38.974 "name": "BaseBdev2", 00:30:38.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.974 "is_configured": false, 00:30:38.974 "data_offset": 0, 00:30:38.974 "data_size": 0 00:30:38.974 }, 00:30:38.974 { 00:30:38.974 "name": "BaseBdev3", 00:30:38.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.974 "is_configured": false, 00:30:38.974 "data_offset": 0, 00:30:38.974 "data_size": 0 00:30:38.974 } 00:30:38.974 ] 00:30:38.974 }' 00:30:38.974 12:12:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:38.974 12:12:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:39.540 12:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:39.798 [2024-07-21 12:12:38.479218] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:39.798 BaseBdev2 00:30:39.798 12:12:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:30:39.798 12:12:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:39.798 12:12:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:39.798 12:12:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:39.798 12:12:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:39.798 12:12:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:39.798 12:12:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:40.057 12:12:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:40.316 [ 00:30:40.316 { 00:30:40.316 "name": "BaseBdev2", 00:30:40.316 "aliases": [ 00:30:40.316 "da8cdd7a-6123-4a37-9836-8ff863d0d3a6" 00:30:40.316 ], 00:30:40.316 "product_name": "Malloc disk", 00:30:40.316 "block_size": 512, 00:30:40.316 "num_blocks": 65536, 00:30:40.316 "uuid": "da8cdd7a-6123-4a37-9836-8ff863d0d3a6", 00:30:40.316 "assigned_rate_limits": { 00:30:40.316 "rw_ios_per_sec": 0, 00:30:40.316 "rw_mbytes_per_sec": 0, 00:30:40.316 "r_mbytes_per_sec": 0, 00:30:40.316 "w_mbytes_per_sec": 0 00:30:40.316 }, 00:30:40.316 "claimed": true, 00:30:40.316 "claim_type": "exclusive_write", 00:30:40.316 "zoned": false, 00:30:40.316 "supported_io_types": { 00:30:40.316 "read": true, 00:30:40.316 "write": true, 00:30:40.316 "unmap": true, 00:30:40.316 "write_zeroes": true, 00:30:40.316 "flush": true, 00:30:40.316 "reset": true, 00:30:40.316 "compare": false, 00:30:40.316 "compare_and_write": false, 00:30:40.316 "abort": true, 00:30:40.316 "nvme_admin": false, 00:30:40.316 "nvme_io": false 00:30:40.316 }, 00:30:40.316 "memory_domains": [ 00:30:40.316 { 00:30:40.316 "dma_device_id": "system", 00:30:40.316 "dma_device_type": 1 00:30:40.316 }, 00:30:40.316 { 00:30:40.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:40.316 "dma_device_type": 2 00:30:40.316 } 00:30:40.316 ], 00:30:40.316 "driver_specific": {} 00:30:40.316 } 
00:30:40.316 ] 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.316 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:40.574 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:40.574 "name": "Existed_Raid", 00:30:40.575 "uuid": "804c11f0-4631-45b9-99db-e3e6fed820c3", 00:30:40.575 "strip_size_kb": 64, 00:30:40.575 "state": "configuring", 00:30:40.575 "raid_level": "raid5f", 00:30:40.575 "superblock": true, 00:30:40.575 "num_base_bdevs": 3, 00:30:40.575 "num_base_bdevs_discovered": 2, 00:30:40.575 "num_base_bdevs_operational": 3, 00:30:40.575 "base_bdevs_list": [ 00:30:40.575 { 00:30:40.575 "name": "BaseBdev1", 00:30:40.575 "uuid": "0fd44856-2ab7-4d18-b393-a6504a42f275", 00:30:40.575 "is_configured": true, 00:30:40.575 "data_offset": 2048, 00:30:40.575 "data_size": 63488 00:30:40.575 }, 00:30:40.575 { 00:30:40.575 "name": "BaseBdev2", 00:30:40.575 "uuid": "da8cdd7a-6123-4a37-9836-8ff863d0d3a6", 00:30:40.575 "is_configured": true, 00:30:40.575 "data_offset": 2048, 00:30:40.575 "data_size": 63488 00:30:40.575 }, 00:30:40.575 { 00:30:40.575 "name": "BaseBdev3", 00:30:40.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.575 "is_configured": false, 00:30:40.575 "data_offset": 0, 00:30:40.575 "data_size": 0 00:30:40.575 } 00:30:40.575 ] 00:30:40.575 }' 00:30:40.575 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:40.575 12:12:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:41.142 12:12:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:41.401 [2024-07-21 12:12:40.059293] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:41.401 [2024-07-21 12:12:40.059821] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:30:41.401 [2024-07-21 12:12:40.059988] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:41.401 [2024-07-21 12:12:40.060163] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:30:41.401 BaseBdev3 00:30:41.401 [2024-07-21 12:12:40.061033] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:30:41.401 [2024-07-21 12:12:40.061228] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:30:41.401 [2024-07-21 12:12:40.061510] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.401 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:30:41.401 12:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:41.401 12:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:41.401 12:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:41.401 12:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:41.401 12:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:41.401 12:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:41.660 12:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:41.660 [ 00:30:41.660 { 00:30:41.660 "name": "BaseBdev3", 00:30:41.660 "aliases": [ 00:30:41.660 "550b4561-4c4e-4949-8c47-b7bd0b43527c" 00:30:41.660 ], 00:30:41.660 "product_name": "Malloc disk", 00:30:41.660 "block_size": 512, 00:30:41.660 "num_blocks": 65536, 00:30:41.660 "uuid": "550b4561-4c4e-4949-8c47-b7bd0b43527c", 00:30:41.660 "assigned_rate_limits": { 00:30:41.660 "rw_ios_per_sec": 0, 00:30:41.660 "rw_mbytes_per_sec": 0, 00:30:41.660 "r_mbytes_per_sec": 0, 00:30:41.660 "w_mbytes_per_sec": 0 00:30:41.660 }, 00:30:41.660 "claimed": true, 00:30:41.660 "claim_type": "exclusive_write", 00:30:41.660 "zoned": false, 00:30:41.660 "supported_io_types": { 00:30:41.660 "read": true, 00:30:41.660 "write": true, 00:30:41.660 "unmap": true, 00:30:41.660 "write_zeroes": true, 00:30:41.660 "flush": true, 00:30:41.660 "reset": true, 00:30:41.660 "compare": false, 00:30:41.660 "compare_and_write": false, 00:30:41.660 "abort": true, 00:30:41.660 "nvme_admin": false, 00:30:41.660 "nvme_io": false 00:30:41.660 }, 00:30:41.660 "memory_domains": [ 00:30:41.660 { 00:30:41.660 "dma_device_id": "system", 00:30:41.660 "dma_device_type": 1 00:30:41.660 }, 00:30:41.660 { 00:30:41.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.660 "dma_device_type": 2 00:30:41.660 } 00:30:41.660 ], 00:30:41.660 "driver_specific": {} 00:30:41.660 } 00:30:41.660 ] 00:30:41.919 12:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:41.920 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:42.179 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:42.179 "name": "Existed_Raid", 00:30:42.179 "uuid": "804c11f0-4631-45b9-99db-e3e6fed820c3", 00:30:42.179 "strip_size_kb": 64, 00:30:42.179 "state": "online", 00:30:42.179 "raid_level": "raid5f", 00:30:42.179 "superblock": true, 00:30:42.179 "num_base_bdevs": 3, 00:30:42.179 "num_base_bdevs_discovered": 3, 00:30:42.179 "num_base_bdevs_operational": 3, 00:30:42.179 "base_bdevs_list": [ 00:30:42.179 { 00:30:42.179 "name": "BaseBdev1", 00:30:42.179 "uuid": "0fd44856-2ab7-4d18-b393-a6504a42f275", 00:30:42.179 "is_configured": true, 00:30:42.179 "data_offset": 2048, 00:30:42.179 "data_size": 63488 00:30:42.179 }, 00:30:42.179 { 00:30:42.179 "name": "BaseBdev2", 00:30:42.179 "uuid": "da8cdd7a-6123-4a37-9836-8ff863d0d3a6", 00:30:42.179 "is_configured": true, 00:30:42.179 "data_offset": 2048, 00:30:42.179 "data_size": 63488 00:30:42.179 }, 00:30:42.179 { 00:30:42.179 "name": "BaseBdev3", 00:30:42.179 "uuid": "550b4561-4c4e-4949-8c47-b7bd0b43527c", 00:30:42.179 "is_configured": true, 00:30:42.179 "data_offset": 2048, 00:30:42.179 "data_size": 63488 00:30:42.179 } 00:30:42.179 ] 00:30:42.179 }' 00:30:42.179 12:12:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:42.179 12:12:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:42.748 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:30:42.748 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:42.748 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:42.748 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local 
base_bdev_info 00:30:42.748 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:42.748 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:30:42.748 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:42.748 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:43.007 [2024-07-21 12:12:41.696038] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:43.007 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:43.007 "name": "Existed_Raid", 00:30:43.007 "aliases": [ 00:30:43.007 "804c11f0-4631-45b9-99db-e3e6fed820c3" 00:30:43.007 ], 00:30:43.007 "product_name": "Raid Volume", 00:30:43.007 "block_size": 512, 00:30:43.007 "num_blocks": 126976, 00:30:43.007 "uuid": "804c11f0-4631-45b9-99db-e3e6fed820c3", 00:30:43.007 "assigned_rate_limits": { 00:30:43.007 "rw_ios_per_sec": 0, 00:30:43.007 "rw_mbytes_per_sec": 0, 00:30:43.007 "r_mbytes_per_sec": 0, 00:30:43.007 "w_mbytes_per_sec": 0 00:30:43.007 }, 00:30:43.007 "claimed": false, 00:30:43.007 "zoned": false, 00:30:43.007 "supported_io_types": { 00:30:43.007 "read": true, 00:30:43.007 "write": true, 00:30:43.007 "unmap": false, 00:30:43.007 "write_zeroes": true, 00:30:43.007 "flush": false, 00:30:43.007 "reset": true, 00:30:43.007 "compare": false, 00:30:43.007 "compare_and_write": false, 00:30:43.007 "abort": false, 00:30:43.007 "nvme_admin": false, 00:30:43.008 "nvme_io": false 00:30:43.008 }, 00:30:43.008 "driver_specific": { 00:30:43.008 "raid": { 00:30:43.008 "uuid": "804c11f0-4631-45b9-99db-e3e6fed820c3", 00:30:43.008 "strip_size_kb": 64, 00:30:43.008 "state": "online", 00:30:43.008 "raid_level": "raid5f", 00:30:43.008 "superblock": true, 00:30:43.008 "num_base_bdevs": 3, 00:30:43.008 "num_base_bdevs_discovered": 3, 00:30:43.008 "num_base_bdevs_operational": 3, 00:30:43.008 "base_bdevs_list": [ 00:30:43.008 { 00:30:43.008 "name": "BaseBdev1", 00:30:43.008 "uuid": "0fd44856-2ab7-4d18-b393-a6504a42f275", 00:30:43.008 "is_configured": true, 00:30:43.008 "data_offset": 2048, 00:30:43.008 "data_size": 63488 00:30:43.008 }, 00:30:43.008 { 00:30:43.008 "name": "BaseBdev2", 00:30:43.008 "uuid": "da8cdd7a-6123-4a37-9836-8ff863d0d3a6", 00:30:43.008 "is_configured": true, 00:30:43.008 "data_offset": 2048, 00:30:43.008 "data_size": 63488 00:30:43.008 }, 00:30:43.008 { 00:30:43.008 "name": "BaseBdev3", 00:30:43.008 "uuid": "550b4561-4c4e-4949-8c47-b7bd0b43527c", 00:30:43.008 "is_configured": true, 00:30:43.008 "data_offset": 2048, 00:30:43.008 "data_size": 63488 00:30:43.008 } 00:30:43.008 ] 00:30:43.008 } 00:30:43.008 } 00:30:43.008 }' 00:30:43.008 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:43.008 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:30:43.008 BaseBdev2 00:30:43.008 BaseBdev3' 00:30:43.008 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:43.008 12:12:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:30:43.008 12:12:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:43.267 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:43.267 "name": "BaseBdev1", 00:30:43.267 "aliases": [ 00:30:43.267 "0fd44856-2ab7-4d18-b393-a6504a42f275" 00:30:43.267 ], 00:30:43.267 "product_name": "Malloc disk", 00:30:43.267 "block_size": 512, 00:30:43.267 "num_blocks": 65536, 00:30:43.267 "uuid": "0fd44856-2ab7-4d18-b393-a6504a42f275", 00:30:43.267 "assigned_rate_limits": { 00:30:43.267 "rw_ios_per_sec": 0, 00:30:43.267 "rw_mbytes_per_sec": 0, 00:30:43.267 "r_mbytes_per_sec": 0, 00:30:43.267 "w_mbytes_per_sec": 0 00:30:43.267 }, 00:30:43.267 "claimed": true, 00:30:43.267 "claim_type": "exclusive_write", 00:30:43.267 "zoned": false, 00:30:43.267 "supported_io_types": { 00:30:43.267 "read": true, 00:30:43.267 "write": true, 00:30:43.267 "unmap": true, 00:30:43.267 "write_zeroes": true, 00:30:43.267 "flush": true, 00:30:43.267 "reset": true, 00:30:43.267 "compare": false, 00:30:43.267 "compare_and_write": false, 00:30:43.267 "abort": true, 00:30:43.267 "nvme_admin": false, 00:30:43.267 "nvme_io": false 00:30:43.267 }, 00:30:43.267 "memory_domains": [ 00:30:43.267 { 00:30:43.267 "dma_device_id": "system", 00:30:43.267 "dma_device_type": 1 00:30:43.267 }, 00:30:43.267 { 00:30:43.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:43.267 "dma_device_type": 2 00:30:43.267 } 00:30:43.267 ], 00:30:43.267 "driver_specific": {} 00:30:43.267 }' 00:30:43.267 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:43.267 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:43.267 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:43.267 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:43.526 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:43.526 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:43.526 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:43.526 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:43.526 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:43.526 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:43.526 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:43.785 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:43.785 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:43.785 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:30:43.785 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:43.785 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:43.785 "name": "BaseBdev2", 00:30:43.785 "aliases": [ 00:30:43.785 "da8cdd7a-6123-4a37-9836-8ff863d0d3a6" 00:30:43.785 ], 00:30:43.785 "product_name": "Malloc disk", 00:30:43.785 "block_size": 512, 00:30:43.785 "num_blocks": 
65536, 00:30:43.785 "uuid": "da8cdd7a-6123-4a37-9836-8ff863d0d3a6", 00:30:43.785 "assigned_rate_limits": { 00:30:43.785 "rw_ios_per_sec": 0, 00:30:43.785 "rw_mbytes_per_sec": 0, 00:30:43.785 "r_mbytes_per_sec": 0, 00:30:43.785 "w_mbytes_per_sec": 0 00:30:43.785 }, 00:30:43.785 "claimed": true, 00:30:43.785 "claim_type": "exclusive_write", 00:30:43.785 "zoned": false, 00:30:43.785 "supported_io_types": { 00:30:43.785 "read": true, 00:30:43.785 "write": true, 00:30:43.785 "unmap": true, 00:30:43.785 "write_zeroes": true, 00:30:43.785 "flush": true, 00:30:43.785 "reset": true, 00:30:43.785 "compare": false, 00:30:43.785 "compare_and_write": false, 00:30:43.785 "abort": true, 00:30:43.785 "nvme_admin": false, 00:30:43.785 "nvme_io": false 00:30:43.785 }, 00:30:43.785 "memory_domains": [ 00:30:43.785 { 00:30:43.785 "dma_device_id": "system", 00:30:43.785 "dma_device_type": 1 00:30:43.785 }, 00:30:43.785 { 00:30:43.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:43.785 "dma_device_type": 2 00:30:43.785 } 00:30:43.785 ], 00:30:43.785 "driver_specific": {} 00:30:43.785 }' 00:30:43.785 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:43.785 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:44.044 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:44.044 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:44.044 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:44.044 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:44.044 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:44.044 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:44.044 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:44.044 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:44.301 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:44.301 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:44.301 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:44.301 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:30:44.301 12:12:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:30:44.558 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:30:44.558 "name": "BaseBdev3", 00:30:44.558 "aliases": [ 00:30:44.558 "550b4561-4c4e-4949-8c47-b7bd0b43527c" 00:30:44.558 ], 00:30:44.558 "product_name": "Malloc disk", 00:30:44.558 "block_size": 512, 00:30:44.558 "num_blocks": 65536, 00:30:44.558 "uuid": "550b4561-4c4e-4949-8c47-b7bd0b43527c", 00:30:44.558 "assigned_rate_limits": { 00:30:44.558 "rw_ios_per_sec": 0, 00:30:44.558 "rw_mbytes_per_sec": 0, 00:30:44.558 "r_mbytes_per_sec": 0, 00:30:44.558 "w_mbytes_per_sec": 0 00:30:44.558 }, 00:30:44.558 "claimed": true, 00:30:44.558 "claim_type": "exclusive_write", 00:30:44.558 "zoned": false, 00:30:44.558 "supported_io_types": { 00:30:44.558 
"read": true, 00:30:44.558 "write": true, 00:30:44.558 "unmap": true, 00:30:44.558 "write_zeroes": true, 00:30:44.558 "flush": true, 00:30:44.558 "reset": true, 00:30:44.558 "compare": false, 00:30:44.558 "compare_and_write": false, 00:30:44.558 "abort": true, 00:30:44.558 "nvme_admin": false, 00:30:44.558 "nvme_io": false 00:30:44.558 }, 00:30:44.558 "memory_domains": [ 00:30:44.558 { 00:30:44.558 "dma_device_id": "system", 00:30:44.558 "dma_device_type": 1 00:30:44.558 }, 00:30:44.558 { 00:30:44.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:44.558 "dma_device_type": 2 00:30:44.558 } 00:30:44.558 ], 00:30:44.558 "driver_specific": {} 00:30:44.558 }' 00:30:44.558 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:44.558 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:30:44.558 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:30:44.558 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:44.558 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:30:44.815 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:30:44.815 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:44.815 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:30:44.815 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:30:44.815 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:44.815 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:30:44.815 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:30:44.815 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:45.071 [2024-07-21 12:12:43.892403] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:45.071 12:12:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:45.328 12:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:45.328 "name": "Existed_Raid", 00:30:45.328 "uuid": "804c11f0-4631-45b9-99db-e3e6fed820c3", 00:30:45.328 "strip_size_kb": 64, 00:30:45.328 "state": "online", 00:30:45.328 "raid_level": "raid5f", 00:30:45.328 "superblock": true, 00:30:45.328 "num_base_bdevs": 3, 00:30:45.328 "num_base_bdevs_discovered": 2, 00:30:45.328 "num_base_bdevs_operational": 2, 00:30:45.328 "base_bdevs_list": [ 00:30:45.328 { 00:30:45.328 "name": null, 00:30:45.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.328 "is_configured": false, 00:30:45.328 "data_offset": 2048, 00:30:45.328 "data_size": 63488 00:30:45.328 }, 00:30:45.328 { 00:30:45.328 "name": "BaseBdev2", 00:30:45.328 "uuid": "da8cdd7a-6123-4a37-9836-8ff863d0d3a6", 00:30:45.328 "is_configured": true, 00:30:45.328 "data_offset": 2048, 00:30:45.328 "data_size": 63488 00:30:45.328 }, 00:30:45.328 { 00:30:45.328 "name": "BaseBdev3", 00:30:45.328 "uuid": "550b4561-4c4e-4949-8c47-b7bd0b43527c", 00:30:45.328 "is_configured": true, 00:30:45.328 "data_offset": 2048, 00:30:45.328 "data_size": 63488 00:30:45.328 } 00:30:45.328 ] 00:30:45.328 }' 00:30:45.328 12:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:45.328 12:12:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.893 12:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:30:45.893 12:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:45.893 12:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:45.893 12:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.151 12:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:46.151 12:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:46.151 12:12:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:30:46.408 [2024-07-21 12:12:45.214199] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:46.408 [2024-07-21 12:12:45.214506] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:46.408 [2024-07-21 12:12:45.226965] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:46.408 12:12:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:46.408 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:46.408 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.408 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:30:46.665 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:30:46.665 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:46.665 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:30:46.923 [2024-07-21 12:12:45.635142] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:46.923 [2024-07-21 12:12:45.635339] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:30:46.923 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:30:46.923 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:30:46.923 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:46.923 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:30:47.180 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:30:47.180 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:30:47.180 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:30:47.180 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:30:47.180 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:47.180 12:12:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:30:47.438 BaseBdev2 00:30:47.438 12:12:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:30:47.438 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:30:47.438 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:47.438 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:47.438 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:47.438 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:47.438 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:47.696 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:47.955 [ 00:30:47.955 { 00:30:47.955 "name": "BaseBdev2", 00:30:47.955 "aliases": [ 00:30:47.955 "509290e0-3b61-458b-a365-ca61be923aac" 00:30:47.955 ], 00:30:47.955 "product_name": "Malloc disk", 00:30:47.955 "block_size": 512, 00:30:47.955 "num_blocks": 65536, 00:30:47.955 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:47.955 "assigned_rate_limits": { 00:30:47.955 "rw_ios_per_sec": 0, 00:30:47.955 "rw_mbytes_per_sec": 0, 00:30:47.955 "r_mbytes_per_sec": 0, 00:30:47.955 "w_mbytes_per_sec": 0 00:30:47.955 }, 00:30:47.955 "claimed": false, 00:30:47.955 "zoned": false, 00:30:47.955 "supported_io_types": { 00:30:47.955 "read": true, 00:30:47.955 "write": true, 00:30:47.955 "unmap": true, 00:30:47.955 "write_zeroes": true, 00:30:47.955 "flush": true, 00:30:47.955 "reset": true, 00:30:47.955 "compare": false, 00:30:47.955 "compare_and_write": false, 00:30:47.955 "abort": true, 00:30:47.955 "nvme_admin": false, 00:30:47.955 "nvme_io": false 00:30:47.955 }, 00:30:47.955 "memory_domains": [ 00:30:47.955 { 00:30:47.955 "dma_device_id": "system", 00:30:47.955 "dma_device_type": 1 00:30:47.955 }, 00:30:47.955 { 00:30:47.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:47.955 "dma_device_type": 2 00:30:47.955 } 00:30:47.955 ], 00:30:47.955 "driver_specific": {} 00:30:47.955 } 00:30:47.955 ] 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:30:47.955 BaseBdev3 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:47.955 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:48.213 12:12:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:48.472 [ 00:30:48.472 { 00:30:48.472 "name": "BaseBdev3", 00:30:48.472 "aliases": [ 00:30:48.472 "ebb1ba0b-25f9-487c-a89e-5da024d905df" 00:30:48.472 ], 00:30:48.472 "product_name": "Malloc disk", 00:30:48.472 "block_size": 512, 00:30:48.472 "num_blocks": 65536, 00:30:48.472 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:48.472 "assigned_rate_limits": { 00:30:48.472 "rw_ios_per_sec": 0, 00:30:48.472 "rw_mbytes_per_sec": 0, 00:30:48.472 "r_mbytes_per_sec": 0, 00:30:48.472 "w_mbytes_per_sec": 0 00:30:48.472 }, 00:30:48.472 "claimed": 
false, 00:30:48.472 "zoned": false, 00:30:48.472 "supported_io_types": { 00:30:48.472 "read": true, 00:30:48.472 "write": true, 00:30:48.472 "unmap": true, 00:30:48.472 "write_zeroes": true, 00:30:48.472 "flush": true, 00:30:48.472 "reset": true, 00:30:48.472 "compare": false, 00:30:48.472 "compare_and_write": false, 00:30:48.472 "abort": true, 00:30:48.472 "nvme_admin": false, 00:30:48.472 "nvme_io": false 00:30:48.472 }, 00:30:48.472 "memory_domains": [ 00:30:48.472 { 00:30:48.472 "dma_device_id": "system", 00:30:48.472 "dma_device_type": 1 00:30:48.472 }, 00:30:48.472 { 00:30:48.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.472 "dma_device_type": 2 00:30:48.472 } 00:30:48.472 ], 00:30:48.472 "driver_specific": {} 00:30:48.472 } 00:30:48.472 ] 00:30:48.472 12:12:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:48.472 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:30:48.472 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:30:48.472 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:30:48.730 [2024-07-21 12:12:47.379328] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:48.730 [2024-07-21 12:12:47.379706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:48.730 [2024-07-21 12:12:47.379927] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:48.730 [2024-07-21 12:12:47.382163] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:48.730 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:48.989 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:48.989 "name": "Existed_Raid", 00:30:48.989 
"uuid": "e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:48.989 "strip_size_kb": 64, 00:30:48.989 "state": "configuring", 00:30:48.989 "raid_level": "raid5f", 00:30:48.989 "superblock": true, 00:30:48.989 "num_base_bdevs": 3, 00:30:48.989 "num_base_bdevs_discovered": 2, 00:30:48.989 "num_base_bdevs_operational": 3, 00:30:48.989 "base_bdevs_list": [ 00:30:48.989 { 00:30:48.989 "name": "BaseBdev1", 00:30:48.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.989 "is_configured": false, 00:30:48.989 "data_offset": 0, 00:30:48.989 "data_size": 0 00:30:48.989 }, 00:30:48.989 { 00:30:48.989 "name": "BaseBdev2", 00:30:48.989 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:48.989 "is_configured": true, 00:30:48.989 "data_offset": 2048, 00:30:48.989 "data_size": 63488 00:30:48.989 }, 00:30:48.989 { 00:30:48.989 "name": "BaseBdev3", 00:30:48.989 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:48.989 "is_configured": true, 00:30:48.989 "data_offset": 2048, 00:30:48.989 "data_size": 63488 00:30:48.989 } 00:30:48.989 ] 00:30:48.989 }' 00:30:48.989 12:12:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:48.989 12:12:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:30:49.557 [2024-07-21 12:12:48.396358] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:49.557 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:49.822 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:49.822 "name": "Existed_Raid", 00:30:49.822 "uuid": "e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:49.822 "strip_size_kb": 64, 00:30:49.822 "state": "configuring", 00:30:49.822 "raid_level": "raid5f", 00:30:49.822 "superblock": true, 00:30:49.822 "num_base_bdevs": 3, 00:30:49.822 "num_base_bdevs_discovered": 1, 00:30:49.822 
"num_base_bdevs_operational": 3, 00:30:49.822 "base_bdevs_list": [ 00:30:49.822 { 00:30:49.822 "name": "BaseBdev1", 00:30:49.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.822 "is_configured": false, 00:30:49.822 "data_offset": 0, 00:30:49.822 "data_size": 0 00:30:49.822 }, 00:30:49.822 { 00:30:49.822 "name": null, 00:30:49.822 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:49.822 "is_configured": false, 00:30:49.822 "data_offset": 2048, 00:30:49.822 "data_size": 63488 00:30:49.822 }, 00:30:49.822 { 00:30:49.822 "name": "BaseBdev3", 00:30:49.822 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:49.822 "is_configured": true, 00:30:49.822 "data_offset": 2048, 00:30:49.822 "data_size": 63488 00:30:49.822 } 00:30:49.822 ] 00:30:49.822 }' 00:30:49.822 12:12:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:49.822 12:12:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.435 12:12:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:50.435 12:12:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:50.697 12:12:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:30:50.697 12:12:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:30:50.957 [2024-07-21 12:12:49.704187] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:50.957 BaseBdev1 00:30:50.957 12:12:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:30:50.957 12:12:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:30:50.957 12:12:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:50.957 12:12:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:50.957 12:12:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:50.957 12:12:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:50.957 12:12:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:51.214 12:12:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:51.472 [ 00:30:51.472 { 00:30:51.472 "name": "BaseBdev1", 00:30:51.472 "aliases": [ 00:30:51.472 "2cb378c9-1547-481b-b6f6-ec3c249eed1c" 00:30:51.472 ], 00:30:51.472 "product_name": "Malloc disk", 00:30:51.472 "block_size": 512, 00:30:51.472 "num_blocks": 65536, 00:30:51.472 "uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:30:51.472 "assigned_rate_limits": { 00:30:51.472 "rw_ios_per_sec": 0, 00:30:51.472 "rw_mbytes_per_sec": 0, 00:30:51.472 "r_mbytes_per_sec": 0, 00:30:51.472 "w_mbytes_per_sec": 0 00:30:51.472 }, 00:30:51.472 "claimed": true, 00:30:51.472 "claim_type": "exclusive_write", 00:30:51.472 "zoned": false, 00:30:51.472 "supported_io_types": { 00:30:51.472 
"read": true, 00:30:51.472 "write": true, 00:30:51.472 "unmap": true, 00:30:51.472 "write_zeroes": true, 00:30:51.472 "flush": true, 00:30:51.472 "reset": true, 00:30:51.472 "compare": false, 00:30:51.472 "compare_and_write": false, 00:30:51.472 "abort": true, 00:30:51.472 "nvme_admin": false, 00:30:51.472 "nvme_io": false 00:30:51.472 }, 00:30:51.472 "memory_domains": [ 00:30:51.472 { 00:30:51.472 "dma_device_id": "system", 00:30:51.472 "dma_device_type": 1 00:30:51.472 }, 00:30:51.472 { 00:30:51.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:51.472 "dma_device_type": 2 00:30:51.472 } 00:30:51.472 ], 00:30:51.472 "driver_specific": {} 00:30:51.472 } 00:30:51.472 ] 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:51.472 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:51.730 12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:51.730 "name": "Existed_Raid", 00:30:51.730 "uuid": "e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:51.730 "strip_size_kb": 64, 00:30:51.730 "state": "configuring", 00:30:51.730 "raid_level": "raid5f", 00:30:51.730 "superblock": true, 00:30:51.730 "num_base_bdevs": 3, 00:30:51.730 "num_base_bdevs_discovered": 2, 00:30:51.730 "num_base_bdevs_operational": 3, 00:30:51.730 "base_bdevs_list": [ 00:30:51.730 { 00:30:51.730 "name": "BaseBdev1", 00:30:51.730 "uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:30:51.730 "is_configured": true, 00:30:51.730 "data_offset": 2048, 00:30:51.730 "data_size": 63488 00:30:51.730 }, 00:30:51.730 { 00:30:51.730 "name": null, 00:30:51.730 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:51.730 "is_configured": false, 00:30:51.730 "data_offset": 2048, 00:30:51.730 "data_size": 63488 00:30:51.730 }, 00:30:51.730 { 00:30:51.730 "name": "BaseBdev3", 00:30:51.730 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:51.730 "is_configured": true, 00:30:51.730 "data_offset": 2048, 00:30:51.730 "data_size": 63488 00:30:51.730 } 00:30:51.730 ] 00:30:51.730 }' 00:30:51.730 
12:12:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:51.730 12:12:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.296 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:52.296 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.554 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:30:52.554 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:30:52.812 [2024-07-21 12:12:51.468586] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:52.812 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:53.070 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:53.070 "name": "Existed_Raid", 00:30:53.070 "uuid": "e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:53.070 "strip_size_kb": 64, 00:30:53.070 "state": "configuring", 00:30:53.070 "raid_level": "raid5f", 00:30:53.070 "superblock": true, 00:30:53.070 "num_base_bdevs": 3, 00:30:53.070 "num_base_bdevs_discovered": 1, 00:30:53.070 "num_base_bdevs_operational": 3, 00:30:53.070 "base_bdevs_list": [ 00:30:53.070 { 00:30:53.070 "name": "BaseBdev1", 00:30:53.070 "uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:30:53.070 "is_configured": true, 00:30:53.070 "data_offset": 2048, 00:30:53.070 "data_size": 63488 00:30:53.070 }, 00:30:53.070 { 00:30:53.070 "name": null, 00:30:53.070 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:53.070 "is_configured": false, 00:30:53.070 "data_offset": 2048, 00:30:53.070 "data_size": 63488 00:30:53.070 }, 00:30:53.070 { 00:30:53.070 "name": null, 00:30:53.070 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:53.070 
"is_configured": false, 00:30:53.070 "data_offset": 2048, 00:30:53.070 "data_size": 63488 00:30:53.070 } 00:30:53.070 ] 00:30:53.071 }' 00:30:53.071 12:12:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:53.071 12:12:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.637 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:53.637 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:53.896 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:30:53.896 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:54.154 [2024-07-21 12:12:52.776840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:54.154 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:54.154 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:54.154 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:54.154 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:54.155 "name": "Existed_Raid", 00:30:54.155 "uuid": "e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:54.155 "strip_size_kb": 64, 00:30:54.155 "state": "configuring", 00:30:54.155 "raid_level": "raid5f", 00:30:54.155 "superblock": true, 00:30:54.155 "num_base_bdevs": 3, 00:30:54.155 "num_base_bdevs_discovered": 2, 00:30:54.155 "num_base_bdevs_operational": 3, 00:30:54.155 "base_bdevs_list": [ 00:30:54.155 { 00:30:54.155 "name": "BaseBdev1", 00:30:54.155 "uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:30:54.155 "is_configured": true, 00:30:54.155 "data_offset": 2048, 00:30:54.155 "data_size": 63488 00:30:54.155 }, 00:30:54.155 { 00:30:54.155 "name": null, 00:30:54.155 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:54.155 "is_configured": false, 00:30:54.155 
"data_offset": 2048, 00:30:54.155 "data_size": 63488 00:30:54.155 }, 00:30:54.155 { 00:30:54.155 "name": "BaseBdev3", 00:30:54.155 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:54.155 "is_configured": true, 00:30:54.155 "data_offset": 2048, 00:30:54.155 "data_size": 63488 00:30:54.155 } 00:30:54.155 ] 00:30:54.155 }' 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:54.155 12:12:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.088 12:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.088 12:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:55.088 12:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:30:55.088 12:12:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:30:55.347 [2024-07-21 12:12:54.085153] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:55.347 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:55.605 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:55.605 "name": "Existed_Raid", 00:30:55.605 "uuid": "e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:55.605 "strip_size_kb": 64, 00:30:55.605 "state": "configuring", 00:30:55.605 "raid_level": "raid5f", 00:30:55.605 "superblock": true, 00:30:55.605 "num_base_bdevs": 3, 00:30:55.605 "num_base_bdevs_discovered": 1, 00:30:55.605 "num_base_bdevs_operational": 3, 00:30:55.605 "base_bdevs_list": [ 00:30:55.605 { 00:30:55.605 "name": null, 00:30:55.605 "uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:30:55.605 "is_configured": false, 00:30:55.605 "data_offset": 2048, 00:30:55.605 "data_size": 63488 00:30:55.605 }, 00:30:55.605 
{ 00:30:55.605 "name": null, 00:30:55.605 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:55.605 "is_configured": false, 00:30:55.605 "data_offset": 2048, 00:30:55.605 "data_size": 63488 00:30:55.605 }, 00:30:55.605 { 00:30:55.605 "name": "BaseBdev3", 00:30:55.605 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:55.605 "is_configured": true, 00:30:55.605 "data_offset": 2048, 00:30:55.605 "data_size": 63488 00:30:55.605 } 00:30:55.605 ] 00:30:55.605 }' 00:30:55.605 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:55.605 12:12:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:56.170 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.170 12:12:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:56.427 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:30:56.427 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:56.684 [2024-07-21 12:12:55.341450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:56.684 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:56.942 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:56.942 "name": "Existed_Raid", 00:30:56.942 "uuid": "e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:56.942 "strip_size_kb": 64, 00:30:56.942 "state": "configuring", 00:30:56.942 "raid_level": "raid5f", 00:30:56.942 "superblock": true, 00:30:56.942 "num_base_bdevs": 3, 00:30:56.942 "num_base_bdevs_discovered": 2, 00:30:56.942 "num_base_bdevs_operational": 3, 00:30:56.942 "base_bdevs_list": [ 00:30:56.942 { 00:30:56.942 "name": null, 00:30:56.942 
"uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:30:56.942 "is_configured": false, 00:30:56.942 "data_offset": 2048, 00:30:56.942 "data_size": 63488 00:30:56.942 }, 00:30:56.942 { 00:30:56.942 "name": "BaseBdev2", 00:30:56.942 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:56.942 "is_configured": true, 00:30:56.942 "data_offset": 2048, 00:30:56.942 "data_size": 63488 00:30:56.942 }, 00:30:56.942 { 00:30:56.942 "name": "BaseBdev3", 00:30:56.942 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:56.942 "is_configured": true, 00:30:56.942 "data_offset": 2048, 00:30:56.942 "data_size": 63488 00:30:56.942 } 00:30:56.942 ] 00:30:56.942 }' 00:30:56.942 12:12:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:56.942 12:12:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:57.507 12:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.507 12:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:57.764 12:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:30:57.764 12:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:57.764 12:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:58.022 12:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2cb378c9-1547-481b-b6f6-ec3c249eed1c 00:30:58.281 [2024-07-21 12:12:56.956900] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:58.281 [2024-07-21 12:12:56.957348] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:30:58.281 [2024-07-21 12:12:56.957471] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:30:58.281 [2024-07-21 12:12:56.957594] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:30:58.281 NewBaseBdev 00:30:58.281 [2024-07-21 12:12:56.958304] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:30:58.281 [2024-07-21 12:12:56.958478] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000008780 00:30:58.281 [2024-07-21 12:12:56.958741] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.281 12:12:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:30:58.281 12:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:30:58.281 12:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:30:58.281 12:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:30:58.281 12:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:30:58.281 12:12:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:30:58.281 12:12:56 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:30:58.538 12:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:58.538 [ 00:30:58.538 { 00:30:58.538 "name": "NewBaseBdev", 00:30:58.538 "aliases": [ 00:30:58.538 "2cb378c9-1547-481b-b6f6-ec3c249eed1c" 00:30:58.538 ], 00:30:58.538 "product_name": "Malloc disk", 00:30:58.538 "block_size": 512, 00:30:58.538 "num_blocks": 65536, 00:30:58.538 "uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:30:58.538 "assigned_rate_limits": { 00:30:58.538 "rw_ios_per_sec": 0, 00:30:58.538 "rw_mbytes_per_sec": 0, 00:30:58.538 "r_mbytes_per_sec": 0, 00:30:58.538 "w_mbytes_per_sec": 0 00:30:58.538 }, 00:30:58.538 "claimed": true, 00:30:58.538 "claim_type": "exclusive_write", 00:30:58.538 "zoned": false, 00:30:58.538 "supported_io_types": { 00:30:58.538 "read": true, 00:30:58.538 "write": true, 00:30:58.538 "unmap": true, 00:30:58.538 "write_zeroes": true, 00:30:58.538 "flush": true, 00:30:58.538 "reset": true, 00:30:58.538 "compare": false, 00:30:58.538 "compare_and_write": false, 00:30:58.538 "abort": true, 00:30:58.538 "nvme_admin": false, 00:30:58.538 "nvme_io": false 00:30:58.538 }, 00:30:58.538 "memory_domains": [ 00:30:58.538 { 00:30:58.538 "dma_device_id": "system", 00:30:58.538 "dma_device_type": 1 00:30:58.538 }, 00:30:58.538 { 00:30:58.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:58.538 "dma_device_type": 2 00:30:58.538 } 00:30:58.538 ], 00:30:58.538 "driver_specific": {} 00:30:58.538 } 00:30:58.538 ] 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:30:58.795 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:30:58.796 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:59.054 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:30:59.054 "name": "Existed_Raid", 00:30:59.054 "uuid": 
"e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:59.054 "strip_size_kb": 64, 00:30:59.054 "state": "online", 00:30:59.054 "raid_level": "raid5f", 00:30:59.054 "superblock": true, 00:30:59.054 "num_base_bdevs": 3, 00:30:59.054 "num_base_bdevs_discovered": 3, 00:30:59.054 "num_base_bdevs_operational": 3, 00:30:59.054 "base_bdevs_list": [ 00:30:59.054 { 00:30:59.054 "name": "NewBaseBdev", 00:30:59.054 "uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:30:59.054 "is_configured": true, 00:30:59.054 "data_offset": 2048, 00:30:59.054 "data_size": 63488 00:30:59.054 }, 00:30:59.054 { 00:30:59.054 "name": "BaseBdev2", 00:30:59.054 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:59.054 "is_configured": true, 00:30:59.054 "data_offset": 2048, 00:30:59.054 "data_size": 63488 00:30:59.054 }, 00:30:59.054 { 00:30:59.054 "name": "BaseBdev3", 00:30:59.054 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:59.054 "is_configured": true, 00:30:59.054 "data_offset": 2048, 00:30:59.054 "data_size": 63488 00:30:59.054 } 00:30:59.054 ] 00:30:59.054 }' 00:30:59.054 12:12:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:30:59.054 12:12:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:30:59.620 [2024-07-21 12:12:58.465467] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:30:59.620 "name": "Existed_Raid", 00:30:59.620 "aliases": [ 00:30:59.620 "e2362d03-b076-4973-a9b3-fba38ec605ae" 00:30:59.620 ], 00:30:59.620 "product_name": "Raid Volume", 00:30:59.620 "block_size": 512, 00:30:59.620 "num_blocks": 126976, 00:30:59.620 "uuid": "e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:59.620 "assigned_rate_limits": { 00:30:59.620 "rw_ios_per_sec": 0, 00:30:59.620 "rw_mbytes_per_sec": 0, 00:30:59.620 "r_mbytes_per_sec": 0, 00:30:59.620 "w_mbytes_per_sec": 0 00:30:59.620 }, 00:30:59.620 "claimed": false, 00:30:59.620 "zoned": false, 00:30:59.620 "supported_io_types": { 00:30:59.620 "read": true, 00:30:59.620 "write": true, 00:30:59.620 "unmap": false, 00:30:59.620 "write_zeroes": true, 00:30:59.620 "flush": false, 00:30:59.620 "reset": true, 00:30:59.620 "compare": false, 00:30:59.620 "compare_and_write": false, 00:30:59.620 "abort": false, 00:30:59.620 "nvme_admin": false, 00:30:59.620 "nvme_io": false 00:30:59.620 }, 00:30:59.620 "driver_specific": { 00:30:59.620 "raid": { 00:30:59.620 "uuid": 
"e2362d03-b076-4973-a9b3-fba38ec605ae", 00:30:59.620 "strip_size_kb": 64, 00:30:59.620 "state": "online", 00:30:59.620 "raid_level": "raid5f", 00:30:59.620 "superblock": true, 00:30:59.620 "num_base_bdevs": 3, 00:30:59.620 "num_base_bdevs_discovered": 3, 00:30:59.620 "num_base_bdevs_operational": 3, 00:30:59.620 "base_bdevs_list": [ 00:30:59.620 { 00:30:59.620 "name": "NewBaseBdev", 00:30:59.620 "uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:30:59.620 "is_configured": true, 00:30:59.620 "data_offset": 2048, 00:30:59.620 "data_size": 63488 00:30:59.620 }, 00:30:59.620 { 00:30:59.620 "name": "BaseBdev2", 00:30:59.620 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:30:59.620 "is_configured": true, 00:30:59.620 "data_offset": 2048, 00:30:59.620 "data_size": 63488 00:30:59.620 }, 00:30:59.620 { 00:30:59.620 "name": "BaseBdev3", 00:30:59.620 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:30:59.620 "is_configured": true, 00:30:59.620 "data_offset": 2048, 00:30:59.620 "data_size": 63488 00:30:59.620 } 00:30:59.620 ] 00:30:59.620 } 00:30:59.620 } 00:30:59.620 }' 00:30:59.620 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:59.878 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:30:59.878 BaseBdev2 00:30:59.878 BaseBdev3' 00:30:59.878 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:30:59.878 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:30:59.878 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:00.137 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:00.137 "name": "NewBaseBdev", 00:31:00.137 "aliases": [ 00:31:00.137 "2cb378c9-1547-481b-b6f6-ec3c249eed1c" 00:31:00.137 ], 00:31:00.137 "product_name": "Malloc disk", 00:31:00.137 "block_size": 512, 00:31:00.137 "num_blocks": 65536, 00:31:00.137 "uuid": "2cb378c9-1547-481b-b6f6-ec3c249eed1c", 00:31:00.137 "assigned_rate_limits": { 00:31:00.137 "rw_ios_per_sec": 0, 00:31:00.137 "rw_mbytes_per_sec": 0, 00:31:00.137 "r_mbytes_per_sec": 0, 00:31:00.137 "w_mbytes_per_sec": 0 00:31:00.137 }, 00:31:00.137 "claimed": true, 00:31:00.137 "claim_type": "exclusive_write", 00:31:00.137 "zoned": false, 00:31:00.137 "supported_io_types": { 00:31:00.137 "read": true, 00:31:00.137 "write": true, 00:31:00.137 "unmap": true, 00:31:00.137 "write_zeroes": true, 00:31:00.137 "flush": true, 00:31:00.137 "reset": true, 00:31:00.137 "compare": false, 00:31:00.137 "compare_and_write": false, 00:31:00.137 "abort": true, 00:31:00.137 "nvme_admin": false, 00:31:00.137 "nvme_io": false 00:31:00.137 }, 00:31:00.137 "memory_domains": [ 00:31:00.137 { 00:31:00.137 "dma_device_id": "system", 00:31:00.137 "dma_device_type": 1 00:31:00.137 }, 00:31:00.137 { 00:31:00.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:00.137 "dma_device_type": 2 00:31:00.137 } 00:31:00.137 ], 00:31:00.137 "driver_specific": {} 00:31:00.137 }' 00:31:00.137 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:00.137 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:00.137 12:12:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:00.137 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:00.137 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:00.137 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:00.137 12:12:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:00.395 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:00.395 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:00.395 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:00.395 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:00.395 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:00.395 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:00.395 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:31:00.395 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:00.654 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:00.654 "name": "BaseBdev2", 00:31:00.654 "aliases": [ 00:31:00.654 "509290e0-3b61-458b-a365-ca61be923aac" 00:31:00.654 ], 00:31:00.654 "product_name": "Malloc disk", 00:31:00.654 "block_size": 512, 00:31:00.654 "num_blocks": 65536, 00:31:00.654 "uuid": "509290e0-3b61-458b-a365-ca61be923aac", 00:31:00.654 "assigned_rate_limits": { 00:31:00.654 "rw_ios_per_sec": 0, 00:31:00.654 "rw_mbytes_per_sec": 0, 00:31:00.654 "r_mbytes_per_sec": 0, 00:31:00.654 "w_mbytes_per_sec": 0 00:31:00.654 }, 00:31:00.654 "claimed": true, 00:31:00.654 "claim_type": "exclusive_write", 00:31:00.654 "zoned": false, 00:31:00.654 "supported_io_types": { 00:31:00.654 "read": true, 00:31:00.654 "write": true, 00:31:00.654 "unmap": true, 00:31:00.654 "write_zeroes": true, 00:31:00.654 "flush": true, 00:31:00.654 "reset": true, 00:31:00.654 "compare": false, 00:31:00.654 "compare_and_write": false, 00:31:00.654 "abort": true, 00:31:00.654 "nvme_admin": false, 00:31:00.654 "nvme_io": false 00:31:00.654 }, 00:31:00.654 "memory_domains": [ 00:31:00.654 { 00:31:00.654 "dma_device_id": "system", 00:31:00.654 "dma_device_type": 1 00:31:00.654 }, 00:31:00.654 { 00:31:00.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:00.654 "dma_device_type": 2 00:31:00.654 } 00:31:00.654 ], 00:31:00.654 "driver_specific": {} 00:31:00.654 }' 00:31:00.654 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:00.654 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:00.654 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:00.654 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:00.912 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:00.912 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:00.912 12:12:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:00.912 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:00.912 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:00.912 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:00.912 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:01.170 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:01.170 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:01.170 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:31:01.170 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:01.170 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:01.170 "name": "BaseBdev3", 00:31:01.170 "aliases": [ 00:31:01.170 "ebb1ba0b-25f9-487c-a89e-5da024d905df" 00:31:01.170 ], 00:31:01.170 "product_name": "Malloc disk", 00:31:01.170 "block_size": 512, 00:31:01.170 "num_blocks": 65536, 00:31:01.170 "uuid": "ebb1ba0b-25f9-487c-a89e-5da024d905df", 00:31:01.170 "assigned_rate_limits": { 00:31:01.170 "rw_ios_per_sec": 0, 00:31:01.170 "rw_mbytes_per_sec": 0, 00:31:01.170 "r_mbytes_per_sec": 0, 00:31:01.170 "w_mbytes_per_sec": 0 00:31:01.170 }, 00:31:01.170 "claimed": true, 00:31:01.170 "claim_type": "exclusive_write", 00:31:01.170 "zoned": false, 00:31:01.170 "supported_io_types": { 00:31:01.170 "read": true, 00:31:01.170 "write": true, 00:31:01.170 "unmap": true, 00:31:01.170 "write_zeroes": true, 00:31:01.170 "flush": true, 00:31:01.170 "reset": true, 00:31:01.170 "compare": false, 00:31:01.170 "compare_and_write": false, 00:31:01.170 "abort": true, 00:31:01.170 "nvme_admin": false, 00:31:01.170 "nvme_io": false 00:31:01.170 }, 00:31:01.170 "memory_domains": [ 00:31:01.170 { 00:31:01.170 "dma_device_id": "system", 00:31:01.170 "dma_device_type": 1 00:31:01.170 }, 00:31:01.170 { 00:31:01.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:01.170 "dma_device_type": 2 00:31:01.170 } 00:31:01.170 ], 00:31:01.170 "driver_specific": {} 00:31:01.170 }' 00:31:01.170 12:12:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:01.428 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:01.428 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:01.428 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:01.428 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:01.428 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:01.428 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:01.428 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:01.685 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:01.685 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:31:01.685 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:01.685 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:01.685 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:31:01.943 [2024-07-21 12:13:00.581726] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:31:01.943 [2024-07-21 12:13:00.581866] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:01.943 [2024-07-21 12:13:00.582032] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:01.943 [2024-07-21 12:13:00.582417] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:01.943 [2024-07-21 12:13:00.582531] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name Existed_Raid, state offline 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 161204 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 161204 ']' 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 161204 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 161204 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 161204' 00:31:01.943 killing process with pid 161204 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 161204 00:31:01.943 [2024-07-21 12:13:00.624034] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:01.943 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 161204 00:31:01.943 [2024-07-21 12:13:00.658213] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:02.200 12:13:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:31:02.200 00:31:02.200 real 0m28.093s 00:31:02.200 user 0m53.221s 00:31:02.200 sys 0m3.561s 00:31:02.200 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:02.200 12:13:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:02.200 ************************************ 00:31:02.200 END TEST raid5f_state_function_test_sb 00:31:02.200 ************************************ 00:31:02.200 12:13:00 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:31:02.200 12:13:00 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:31:02.201 12:13:00 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:02.201 12:13:00 
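The END TEST banner above closes raid5f_state_function_test_sb. Condensed by hand, the hot-remove/re-add cycle it walked through is the RPC sequence below (bdev names, sizes and the NewBaseBdev UUID are exactly the ones visible in the trace; the raid state is expected to read "configuring" until the last slot is filled and "online" afterwards):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_raid_remove_base_bdev BaseBdev2                     # sh@308
$rpc bdev_malloc_create 32 512 -b BaseBdev1                   # sh@312: 65536 blocks of 512 B
$rpc bdev_raid_remove_base_bdev BaseBdev3                     # sh@317
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev3           # sh@321
$rpc bdev_malloc_delete BaseBdev1                             # sh@325
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2           # sh@329
$rpc bdev_malloc_create 32 512 -b NewBaseBdev -u 2cb378c9-1547-481b-b6f6-ec3c249eed1c   # sh@333
$rpc bdev_raid_delete Existed_Raid                            # sh@338: final teardown before killprocess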
bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:02.201 ************************************ 00:31:02.201 START TEST raid5f_superblock_test 00:31:02.201 ************************************ 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid5f 3 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=162146 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 162146 /var/tmp/spdk-raid.sock 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 162146 ']' 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:02.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:02.201 12:13:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.201 [2024-07-21 12:13:01.056876] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
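raid5f_superblock_test drives a dedicated bdev_svc application on its own RPC socket rather than a full SPDK target. Its startup is equivalent to the lines below; the binary path, socket and -L flag are the ones shown above, while the polling loop is a simplified stand-in for waitforlisten:

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!   # 162146 in this run
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
done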
00:31:02.201 [2024-07-21 12:13:01.057270] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162146 ] 00:31:02.458 [2024-07-21 12:13:01.224316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:02.458 [2024-07-21 12:13:01.300469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:02.716 [2024-07-21 12:13:01.373025] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:03.280 12:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:03.280 12:13:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:31:03.280 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:31:03.280 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:03.280 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:31:03.280 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:31:03.281 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:31:03.281 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:03.281 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:03.281 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:03.281 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:31:03.538 malloc1 00:31:03.538 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:03.795 [2024-07-21 12:13:02.513318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:03.795 [2024-07-21 12:13:02.513579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:03.795 [2024-07-21 12:13:02.513765] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:31:03.795 [2024-07-21 12:13:02.513921] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:03.795 [2024-07-21 12:13:02.516437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:03.795 [2024-07-21 12:13:02.516624] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:03.795 pt1 00:31:03.795 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:03.796 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:03.796 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:31:03.796 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:31:03.796 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:31:03.796 12:13:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:03.796 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:03.796 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:03.796 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:31:04.065 malloc2 00:31:04.065 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:04.322 [2024-07-21 12:13:02.982653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:04.322 [2024-07-21 12:13:02.982867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:04.322 [2024-07-21 12:13:02.982965] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:04.322 [2024-07-21 12:13:02.983368] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:04.322 [2024-07-21 12:13:02.985909] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:04.322 [2024-07-21 12:13:02.986077] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:04.322 pt2 00:31:04.322 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:04.322 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:04.322 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:31:04.322 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:31:04.322 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:31:04.322 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:31:04.322 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:31:04.322 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:31:04.322 12:13:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:31:04.580 malloc3 00:31:04.580 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:04.580 [2024-07-21 12:13:03.402393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:04.580 [2024-07-21 12:13:03.402608] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:04.580 [2024-07-21 12:13:03.402693] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:31:04.580 [2024-07-21 12:13:03.403015] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:04.580 [2024-07-21 12:13:03.405577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:04.580 [2024-07-21 12:13:03.405774] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt3 00:31:04.580 pt3 00:31:04.580 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:31:04.580 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:31:04.580 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:31:04.838 [2024-07-21 12:13:03.654554] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:04.838 [2024-07-21 12:13:03.656828] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:04.838 [2024-07-21 12:13:03.657071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:04.838 [2024-07-21 12:13:03.657418] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008780 00:31:04.838 [2024-07-21 12:13:03.657550] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:04.838 [2024-07-21 12:13:03.657799] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:31:04.838 [2024-07-21 12:13:03.658748] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008780 00:31:04.838 [2024-07-21 12:13:03.658897] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008780 00:31:04.838 [2024-07-21 12:13:03.659176] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:04.838 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.096 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:05.096 "name": "raid_bdev1", 00:31:05.096 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:05.096 "strip_size_kb": 64, 00:31:05.096 "state": "online", 00:31:05.096 "raid_level": "raid5f", 00:31:05.096 "superblock": true, 00:31:05.096 "num_base_bdevs": 3, 00:31:05.096 "num_base_bdevs_discovered": 3, 00:31:05.096 "num_base_bdevs_operational": 3, 00:31:05.096 "base_bdevs_list": [ 00:31:05.096 { 00:31:05.096 
"name": "pt1", 00:31:05.096 "uuid": "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7", 00:31:05.096 "is_configured": true, 00:31:05.096 "data_offset": 2048, 00:31:05.096 "data_size": 63488 00:31:05.096 }, 00:31:05.096 { 00:31:05.096 "name": "pt2", 00:31:05.096 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:05.096 "is_configured": true, 00:31:05.096 "data_offset": 2048, 00:31:05.096 "data_size": 63488 00:31:05.096 }, 00:31:05.096 { 00:31:05.096 "name": "pt3", 00:31:05.096 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:05.096 "is_configured": true, 00:31:05.096 "data_offset": 2048, 00:31:05.096 "data_size": 63488 00:31:05.096 } 00:31:05.096 ] 00:31:05.096 }' 00:31:05.096 12:13:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:05.096 12:13:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.663 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:31:05.663 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:05.663 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:05.663 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:05.663 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:05.663 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:05.663 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:05.663 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:05.921 [2024-07-21 12:13:04.767444] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:05.921 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:05.921 "name": "raid_bdev1", 00:31:05.921 "aliases": [ 00:31:05.921 "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946" 00:31:05.921 ], 00:31:05.921 "product_name": "Raid Volume", 00:31:05.921 "block_size": 512, 00:31:05.921 "num_blocks": 126976, 00:31:05.921 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:05.921 "assigned_rate_limits": { 00:31:05.921 "rw_ios_per_sec": 0, 00:31:05.921 "rw_mbytes_per_sec": 0, 00:31:05.921 "r_mbytes_per_sec": 0, 00:31:05.921 "w_mbytes_per_sec": 0 00:31:05.921 }, 00:31:05.921 "claimed": false, 00:31:05.921 "zoned": false, 00:31:05.921 "supported_io_types": { 00:31:05.921 "read": true, 00:31:05.921 "write": true, 00:31:05.921 "unmap": false, 00:31:05.921 "write_zeroes": true, 00:31:05.921 "flush": false, 00:31:05.921 "reset": true, 00:31:05.921 "compare": false, 00:31:05.921 "compare_and_write": false, 00:31:05.921 "abort": false, 00:31:05.921 "nvme_admin": false, 00:31:05.921 "nvme_io": false 00:31:05.921 }, 00:31:05.921 "driver_specific": { 00:31:05.921 "raid": { 00:31:05.921 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:05.921 "strip_size_kb": 64, 00:31:05.921 "state": "online", 00:31:05.921 "raid_level": "raid5f", 00:31:05.921 "superblock": true, 00:31:05.921 "num_base_bdevs": 3, 00:31:05.921 "num_base_bdevs_discovered": 3, 00:31:05.921 "num_base_bdevs_operational": 3, 00:31:05.921 "base_bdevs_list": [ 00:31:05.921 { 00:31:05.921 "name": "pt1", 00:31:05.921 "uuid": "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7", 00:31:05.921 "is_configured": true, 00:31:05.921 
"data_offset": 2048, 00:31:05.921 "data_size": 63488 00:31:05.921 }, 00:31:05.921 { 00:31:05.921 "name": "pt2", 00:31:05.921 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:05.921 "is_configured": true, 00:31:05.921 "data_offset": 2048, 00:31:05.921 "data_size": 63488 00:31:05.921 }, 00:31:05.921 { 00:31:05.921 "name": "pt3", 00:31:05.921 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:05.921 "is_configured": true, 00:31:05.921 "data_offset": 2048, 00:31:05.921 "data_size": 63488 00:31:05.921 } 00:31:05.921 ] 00:31:05.921 } 00:31:05.921 } 00:31:05.921 }' 00:31:06.179 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:06.179 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:06.179 pt2 00:31:06.179 pt3' 00:31:06.179 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:06.179 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:06.179 12:13:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:06.437 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:06.437 "name": "pt1", 00:31:06.438 "aliases": [ 00:31:06.438 "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7" 00:31:06.438 ], 00:31:06.438 "product_name": "passthru", 00:31:06.438 "block_size": 512, 00:31:06.438 "num_blocks": 65536, 00:31:06.438 "uuid": "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7", 00:31:06.438 "assigned_rate_limits": { 00:31:06.438 "rw_ios_per_sec": 0, 00:31:06.438 "rw_mbytes_per_sec": 0, 00:31:06.438 "r_mbytes_per_sec": 0, 00:31:06.438 "w_mbytes_per_sec": 0 00:31:06.438 }, 00:31:06.438 "claimed": true, 00:31:06.438 "claim_type": "exclusive_write", 00:31:06.438 "zoned": false, 00:31:06.438 "supported_io_types": { 00:31:06.438 "read": true, 00:31:06.438 "write": true, 00:31:06.438 "unmap": true, 00:31:06.438 "write_zeroes": true, 00:31:06.438 "flush": true, 00:31:06.438 "reset": true, 00:31:06.438 "compare": false, 00:31:06.438 "compare_and_write": false, 00:31:06.438 "abort": true, 00:31:06.438 "nvme_admin": false, 00:31:06.438 "nvme_io": false 00:31:06.438 }, 00:31:06.438 "memory_domains": [ 00:31:06.438 { 00:31:06.438 "dma_device_id": "system", 00:31:06.438 "dma_device_type": 1 00:31:06.438 }, 00:31:06.438 { 00:31:06.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:06.438 "dma_device_type": 2 00:31:06.438 } 00:31:06.438 ], 00:31:06.438 "driver_specific": { 00:31:06.438 "passthru": { 00:31:06.438 "name": "pt1", 00:31:06.438 "base_bdev_name": "malloc1" 00:31:06.438 } 00:31:06.438 } 00:31:06.438 }' 00:31:06.438 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:06.438 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:06.438 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:06.438 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:06.438 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:06.438 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:06.438 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:06.696 12:13:05 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:06.696 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:06.696 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:06.696 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:06.696 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:06.696 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:06.696 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:06.696 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:06.955 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:06.955 "name": "pt2", 00:31:06.955 "aliases": [ 00:31:06.955 "eba1ef94-af13-5eb7-b609-b656c013664c" 00:31:06.955 ], 00:31:06.955 "product_name": "passthru", 00:31:06.955 "block_size": 512, 00:31:06.955 "num_blocks": 65536, 00:31:06.955 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:06.955 "assigned_rate_limits": { 00:31:06.955 "rw_ios_per_sec": 0, 00:31:06.955 "rw_mbytes_per_sec": 0, 00:31:06.955 "r_mbytes_per_sec": 0, 00:31:06.955 "w_mbytes_per_sec": 0 00:31:06.955 }, 00:31:06.955 "claimed": true, 00:31:06.955 "claim_type": "exclusive_write", 00:31:06.955 "zoned": false, 00:31:06.955 "supported_io_types": { 00:31:06.955 "read": true, 00:31:06.955 "write": true, 00:31:06.955 "unmap": true, 00:31:06.955 "write_zeroes": true, 00:31:06.955 "flush": true, 00:31:06.955 "reset": true, 00:31:06.955 "compare": false, 00:31:06.955 "compare_and_write": false, 00:31:06.955 "abort": true, 00:31:06.955 "nvme_admin": false, 00:31:06.955 "nvme_io": false 00:31:06.955 }, 00:31:06.955 "memory_domains": [ 00:31:06.955 { 00:31:06.955 "dma_device_id": "system", 00:31:06.955 "dma_device_type": 1 00:31:06.955 }, 00:31:06.955 { 00:31:06.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:06.955 "dma_device_type": 2 00:31:06.955 } 00:31:06.955 ], 00:31:06.955 "driver_specific": { 00:31:06.955 "passthru": { 00:31:06.955 "name": "pt2", 00:31:06.955 "base_bdev_name": "malloc2" 00:31:06.955 } 00:31:06.955 } 00:31:06.955 }' 00:31:06.955 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:06.955 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:07.214 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:07.214 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:07.214 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:07.214 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:07.214 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:07.214 12:13:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:07.214 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:07.214 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:07.214 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:07.472 12:13:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:07.472 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:07.472 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:07.472 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:31:07.731 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:07.731 "name": "pt3", 00:31:07.731 "aliases": [ 00:31:07.731 "c544a070-d418-52f6-8021-b2388e0dcae8" 00:31:07.731 ], 00:31:07.731 "product_name": "passthru", 00:31:07.731 "block_size": 512, 00:31:07.731 "num_blocks": 65536, 00:31:07.731 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:07.731 "assigned_rate_limits": { 00:31:07.731 "rw_ios_per_sec": 0, 00:31:07.731 "rw_mbytes_per_sec": 0, 00:31:07.731 "r_mbytes_per_sec": 0, 00:31:07.731 "w_mbytes_per_sec": 0 00:31:07.731 }, 00:31:07.731 "claimed": true, 00:31:07.731 "claim_type": "exclusive_write", 00:31:07.731 "zoned": false, 00:31:07.731 "supported_io_types": { 00:31:07.731 "read": true, 00:31:07.731 "write": true, 00:31:07.731 "unmap": true, 00:31:07.731 "write_zeroes": true, 00:31:07.731 "flush": true, 00:31:07.731 "reset": true, 00:31:07.731 "compare": false, 00:31:07.731 "compare_and_write": false, 00:31:07.731 "abort": true, 00:31:07.731 "nvme_admin": false, 00:31:07.731 "nvme_io": false 00:31:07.731 }, 00:31:07.731 "memory_domains": [ 00:31:07.731 { 00:31:07.731 "dma_device_id": "system", 00:31:07.731 "dma_device_type": 1 00:31:07.731 }, 00:31:07.731 { 00:31:07.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:07.731 "dma_device_type": 2 00:31:07.731 } 00:31:07.731 ], 00:31:07.731 "driver_specific": { 00:31:07.731 "passthru": { 00:31:07.731 "name": "pt3", 00:31:07.731 "base_bdev_name": "malloc3" 00:31:07.731 } 00:31:07.731 } 00:31:07.731 }' 00:31:07.731 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:07.731 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:07.731 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:07.731 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:07.731 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:07.731 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:07.731 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:07.990 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:07.990 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:07.990 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:07.990 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:07.990 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:07.990 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:07.990 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:31:08.248 [2024-07-21 12:13:06.971984] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:08.248 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f3fbda6f-1a3c-4de8-8aa4-88326b3fa946 00:31:08.248 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f3fbda6f-1a3c-4de8-8aa4-88326b3fa946 ']' 00:31:08.248 12:13:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:08.507 [2024-07-21 12:13:07.227892] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:08.507 [2024-07-21 12:13:07.228282] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:08.507 [2024-07-21 12:13:07.228550] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:08.507 [2024-07-21 12:13:07.228755] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:08.507 [2024-07-21 12:13:07.228861] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008780 name raid_bdev1, state offline 00:31:08.507 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:08.507 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:31:08.765 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:31:08.765 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:31:08.765 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:08.765 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:09.024 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:09.024 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:09.024 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:31:09.024 12:13:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:09.283 12:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:31:09.283 12:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 
malloc2 malloc3' -n raid_bdev1 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:31:09.542 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:31:09.801 [2024-07-21 12:13:08.547991] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:31:09.801 [2024-07-21 12:13:08.550283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:31:09.801 [2024-07-21 12:13:08.550470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:31:09.801 [2024-07-21 12:13:08.550569] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:31:09.801 [2024-07-21 12:13:08.550955] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:31:09.801 [2024-07-21 12:13:08.551162] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:31:09.801 [2024-07-21 12:13:08.551322] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:09.801 [2024-07-21 12:13:08.551364] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state configuring 00:31:09.801 request: 00:31:09.801 { 00:31:09.801 "name": "raid_bdev1", 00:31:09.801 "raid_level": "raid5f", 00:31:09.801 "base_bdevs": [ 00:31:09.801 "malloc1", 00:31:09.801 "malloc2", 00:31:09.801 "malloc3" 00:31:09.801 ], 00:31:09.801 "superblock": false, 00:31:09.801 "strip_size_kb": 64, 00:31:09.801 "method": "bdev_raid_create", 00:31:09.801 "req_id": 1 00:31:09.801 } 00:31:09.801 Got JSON-RPC error response 00:31:09.801 response: 00:31:09.801 { 00:31:09.801 "code": -17, 00:31:09.801 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:31:09.801 } 00:31:09.801 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:31:09.801 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:09.801 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:09.801 12:13:08 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:09.801 12:13:08 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:09.801 12:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:31:10.060 12:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:31:10.060 12:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:31:10.060 12:13:08 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:10.332 [2024-07-21 12:13:09.028092] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:10.332 [2024-07-21 12:13:09.028301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.332 [2024-07-21 12:13:09.028380] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:10.332 [2024-07-21 12:13:09.028504] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.332 [2024-07-21 12:13:09.031092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.332 [2024-07-21 12:13:09.031261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:10.332 [2024-07-21 12:13:09.031459] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:10.332 [2024-07-21 12:13:09.031644] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:10.332 pt1 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:10.332 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.622 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:10.622 "name": "raid_bdev1", 00:31:10.622 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:10.622 "strip_size_kb": 64, 00:31:10.622 "state": "configuring", 00:31:10.622 "raid_level": "raid5f", 00:31:10.622 "superblock": true, 00:31:10.622 "num_base_bdevs": 3, 00:31:10.622 "num_base_bdevs_discovered": 1, 00:31:10.622 "num_base_bdevs_operational": 3, 
00:31:10.622 "base_bdevs_list": [ 00:31:10.622 { 00:31:10.622 "name": "pt1", 00:31:10.622 "uuid": "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7", 00:31:10.622 "is_configured": true, 00:31:10.622 "data_offset": 2048, 00:31:10.622 "data_size": 63488 00:31:10.622 }, 00:31:10.622 { 00:31:10.622 "name": null, 00:31:10.622 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:10.622 "is_configured": false, 00:31:10.622 "data_offset": 2048, 00:31:10.622 "data_size": 63488 00:31:10.622 }, 00:31:10.622 { 00:31:10.622 "name": null, 00:31:10.622 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:10.622 "is_configured": false, 00:31:10.622 "data_offset": 2048, 00:31:10.622 "data_size": 63488 00:31:10.622 } 00:31:10.622 ] 00:31:10.622 }' 00:31:10.622 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:10.622 12:13:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.196 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:31:11.196 12:13:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:11.455 [2024-07-21 12:13:10.120338] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:11.455 [2024-07-21 12:13:10.120623] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:11.455 [2024-07-21 12:13:10.120710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:31:11.455 [2024-07-21 12:13:10.120840] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:11.455 [2024-07-21 12:13:10.121437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:11.455 [2024-07-21 12:13:10.121601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:11.455 [2024-07-21 12:13:10.121809] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:11.455 [2024-07-21 12:13:10.121941] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:11.455 pt2 00:31:11.455 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:11.713 [2024-07-21 12:13:10.384370] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:31:11.713 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:11.714 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:11.714 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.972 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:11.972 "name": "raid_bdev1", 00:31:11.972 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:11.972 "strip_size_kb": 64, 00:31:11.972 "state": "configuring", 00:31:11.972 "raid_level": "raid5f", 00:31:11.972 "superblock": true, 00:31:11.972 "num_base_bdevs": 3, 00:31:11.972 "num_base_bdevs_discovered": 1, 00:31:11.972 "num_base_bdevs_operational": 3, 00:31:11.972 "base_bdevs_list": [ 00:31:11.972 { 00:31:11.972 "name": "pt1", 00:31:11.972 "uuid": "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7", 00:31:11.972 "is_configured": true, 00:31:11.972 "data_offset": 2048, 00:31:11.972 "data_size": 63488 00:31:11.972 }, 00:31:11.972 { 00:31:11.972 "name": null, 00:31:11.972 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:11.972 "is_configured": false, 00:31:11.972 "data_offset": 2048, 00:31:11.972 "data_size": 63488 00:31:11.972 }, 00:31:11.972 { 00:31:11.972 "name": null, 00:31:11.972 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:11.972 "is_configured": false, 00:31:11.972 "data_offset": 2048, 00:31:11.972 "data_size": 63488 00:31:11.972 } 00:31:11.972 ] 00:31:11.972 }' 00:31:11.972 12:13:10 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:11.972 12:13:10 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.537 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:31:12.537 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:12.537 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:12.795 [2024-07-21 12:13:11.544567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:12.795 [2024-07-21 12:13:11.544849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:12.795 [2024-07-21 12:13:11.544958] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:31:12.795 [2024-07-21 12:13:11.545115] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:12.795 [2024-07-21 12:13:11.545655] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:12.795 [2024-07-21 12:13:11.545819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:12.795 [2024-07-21 12:13:11.546037] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:12.796 [2024-07-21 12:13:11.546168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:12.796 pt2 00:31:12.796 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:12.796 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:12.796 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:13.054 [2024-07-21 12:13:11.824637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:13.054 [2024-07-21 12:13:11.824860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:13.054 [2024-07-21 12:13:11.824961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:13.054 [2024-07-21 12:13:11.825266] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:13.054 [2024-07-21 12:13:11.825794] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:13.054 [2024-07-21 12:13:11.825986] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:13.054 [2024-07-21 12:13:11.826197] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:13.054 [2024-07-21 12:13:11.826326] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:13.054 [2024-07-21 12:13:11.826590] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:31:13.054 [2024-07-21 12:13:11.826716] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:13.054 [2024-07-21 12:13:11.826842] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:31:13.054 [2024-07-21 12:13:11.827540] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:31:13.054 [2024-07-21 12:13:11.827679] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:31:13.054 [2024-07-21 12:13:11.827897] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:13.054 pt3 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:13.054 12:13:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:31:13.313 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:13.313 "name": "raid_bdev1", 00:31:13.313 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:13.313 "strip_size_kb": 64, 00:31:13.313 "state": "online", 00:31:13.313 "raid_level": "raid5f", 00:31:13.313 "superblock": true, 00:31:13.313 "num_base_bdevs": 3, 00:31:13.313 "num_base_bdevs_discovered": 3, 00:31:13.313 "num_base_bdevs_operational": 3, 00:31:13.313 "base_bdevs_list": [ 00:31:13.313 { 00:31:13.313 "name": "pt1", 00:31:13.313 "uuid": "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7", 00:31:13.313 "is_configured": true, 00:31:13.313 "data_offset": 2048, 00:31:13.313 "data_size": 63488 00:31:13.313 }, 00:31:13.313 { 00:31:13.313 "name": "pt2", 00:31:13.313 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:13.313 "is_configured": true, 00:31:13.313 "data_offset": 2048, 00:31:13.313 "data_size": 63488 00:31:13.313 }, 00:31:13.313 { 00:31:13.313 "name": "pt3", 00:31:13.313 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:13.313 "is_configured": true, 00:31:13.313 "data_offset": 2048, 00:31:13.313 "data_size": 63488 00:31:13.313 } 00:31:13.313 ] 00:31:13.313 }' 00:31:13.313 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:13.313 12:13:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.879 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:31:13.879 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:31:13.879 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:31:13.879 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:31:13.879 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:31:13.879 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:31:13.879 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:13.879 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:31:14.137 [2024-07-21 12:13:12.814270] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:14.137 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:31:14.137 "name": "raid_bdev1", 00:31:14.137 "aliases": [ 00:31:14.138 "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946" 00:31:14.138 ], 00:31:14.138 "product_name": "Raid Volume", 00:31:14.138 "block_size": 512, 00:31:14.138 "num_blocks": 126976, 00:31:14.138 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:14.138 "assigned_rate_limits": { 00:31:14.138 "rw_ios_per_sec": 0, 00:31:14.138 "rw_mbytes_per_sec": 0, 00:31:14.138 "r_mbytes_per_sec": 0, 00:31:14.138 "w_mbytes_per_sec": 0 00:31:14.138 }, 00:31:14.138 "claimed": false, 00:31:14.138 "zoned": false, 00:31:14.138 "supported_io_types": { 00:31:14.138 "read": true, 00:31:14.138 "write": true, 00:31:14.138 "unmap": false, 00:31:14.138 "write_zeroes": true, 00:31:14.138 "flush": false, 00:31:14.138 "reset": true, 00:31:14.138 "compare": false, 00:31:14.138 "compare_and_write": false, 00:31:14.138 "abort": false, 00:31:14.138 "nvme_admin": false, 00:31:14.138 "nvme_io": false 00:31:14.138 }, 00:31:14.138 "driver_specific": { 00:31:14.138 "raid": 
{ 00:31:14.138 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:14.138 "strip_size_kb": 64, 00:31:14.138 "state": "online", 00:31:14.138 "raid_level": "raid5f", 00:31:14.138 "superblock": true, 00:31:14.138 "num_base_bdevs": 3, 00:31:14.138 "num_base_bdevs_discovered": 3, 00:31:14.138 "num_base_bdevs_operational": 3, 00:31:14.138 "base_bdevs_list": [ 00:31:14.138 { 00:31:14.138 "name": "pt1", 00:31:14.138 "uuid": "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7", 00:31:14.138 "is_configured": true, 00:31:14.138 "data_offset": 2048, 00:31:14.138 "data_size": 63488 00:31:14.138 }, 00:31:14.138 { 00:31:14.138 "name": "pt2", 00:31:14.138 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:14.138 "is_configured": true, 00:31:14.138 "data_offset": 2048, 00:31:14.138 "data_size": 63488 00:31:14.138 }, 00:31:14.138 { 00:31:14.138 "name": "pt3", 00:31:14.138 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:14.138 "is_configured": true, 00:31:14.138 "data_offset": 2048, 00:31:14.138 "data_size": 63488 00:31:14.138 } 00:31:14.138 ] 00:31:14.138 } 00:31:14.138 } 00:31:14.138 }' 00:31:14.138 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:31:14.138 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:31:14.138 pt2 00:31:14.138 pt3' 00:31:14.138 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:14.138 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:31:14.138 12:13:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:14.395 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:14.395 "name": "pt1", 00:31:14.395 "aliases": [ 00:31:14.395 "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7" 00:31:14.395 ], 00:31:14.395 "product_name": "passthru", 00:31:14.395 "block_size": 512, 00:31:14.395 "num_blocks": 65536, 00:31:14.395 "uuid": "4b554dfc-cd73-5a00-b07b-91d8e2ed51b7", 00:31:14.395 "assigned_rate_limits": { 00:31:14.395 "rw_ios_per_sec": 0, 00:31:14.395 "rw_mbytes_per_sec": 0, 00:31:14.395 "r_mbytes_per_sec": 0, 00:31:14.395 "w_mbytes_per_sec": 0 00:31:14.395 }, 00:31:14.395 "claimed": true, 00:31:14.395 "claim_type": "exclusive_write", 00:31:14.395 "zoned": false, 00:31:14.395 "supported_io_types": { 00:31:14.395 "read": true, 00:31:14.395 "write": true, 00:31:14.395 "unmap": true, 00:31:14.395 "write_zeroes": true, 00:31:14.395 "flush": true, 00:31:14.396 "reset": true, 00:31:14.396 "compare": false, 00:31:14.396 "compare_and_write": false, 00:31:14.396 "abort": true, 00:31:14.396 "nvme_admin": false, 00:31:14.396 "nvme_io": false 00:31:14.396 }, 00:31:14.396 "memory_domains": [ 00:31:14.396 { 00:31:14.396 "dma_device_id": "system", 00:31:14.396 "dma_device_type": 1 00:31:14.396 }, 00:31:14.396 { 00:31:14.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:14.396 "dma_device_type": 2 00:31:14.396 } 00:31:14.396 ], 00:31:14.396 "driver_specific": { 00:31:14.396 "passthru": { 00:31:14.396 "name": "pt1", 00:31:14.396 "base_bdev_name": "malloc1" 00:31:14.396 } 00:31:14.396 } 00:31:14.396 }' 00:31:14.396 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:14.396 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:14.396 12:13:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:14.396 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:14.652 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:14.652 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:14.652 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:14.652 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:14.652 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:14.652 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:14.652 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:14.910 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:14.910 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:14.910 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:31:14.910 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:15.167 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:15.167 "name": "pt2", 00:31:15.167 "aliases": [ 00:31:15.167 "eba1ef94-af13-5eb7-b609-b656c013664c" 00:31:15.167 ], 00:31:15.167 "product_name": "passthru", 00:31:15.167 "block_size": 512, 00:31:15.167 "num_blocks": 65536, 00:31:15.167 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:15.167 "assigned_rate_limits": { 00:31:15.167 "rw_ios_per_sec": 0, 00:31:15.167 "rw_mbytes_per_sec": 0, 00:31:15.167 "r_mbytes_per_sec": 0, 00:31:15.167 "w_mbytes_per_sec": 0 00:31:15.168 }, 00:31:15.168 "claimed": true, 00:31:15.168 "claim_type": "exclusive_write", 00:31:15.168 "zoned": false, 00:31:15.168 "supported_io_types": { 00:31:15.168 "read": true, 00:31:15.168 "write": true, 00:31:15.168 "unmap": true, 00:31:15.168 "write_zeroes": true, 00:31:15.168 "flush": true, 00:31:15.168 "reset": true, 00:31:15.168 "compare": false, 00:31:15.168 "compare_and_write": false, 00:31:15.168 "abort": true, 00:31:15.168 "nvme_admin": false, 00:31:15.168 "nvme_io": false 00:31:15.168 }, 00:31:15.168 "memory_domains": [ 00:31:15.168 { 00:31:15.168 "dma_device_id": "system", 00:31:15.168 "dma_device_type": 1 00:31:15.168 }, 00:31:15.168 { 00:31:15.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.168 "dma_device_type": 2 00:31:15.168 } 00:31:15.168 ], 00:31:15.168 "driver_specific": { 00:31:15.168 "passthru": { 00:31:15.168 "name": "pt2", 00:31:15.168 "base_bdev_name": "malloc2" 00:31:15.168 } 00:31:15.168 } 00:31:15.168 }' 00:31:15.168 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.168 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.168 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:15.168 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.168 12:13:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.168 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:15.168 12:13:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.425 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.425 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:15.425 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.425 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:15.425 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:15.425 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:31:15.425 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:31:15.425 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:31:15.683 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:31:15.683 "name": "pt3", 00:31:15.683 "aliases": [ 00:31:15.683 "c544a070-d418-52f6-8021-b2388e0dcae8" 00:31:15.683 ], 00:31:15.683 "product_name": "passthru", 00:31:15.683 "block_size": 512, 00:31:15.683 "num_blocks": 65536, 00:31:15.683 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:15.683 "assigned_rate_limits": { 00:31:15.683 "rw_ios_per_sec": 0, 00:31:15.683 "rw_mbytes_per_sec": 0, 00:31:15.683 "r_mbytes_per_sec": 0, 00:31:15.683 "w_mbytes_per_sec": 0 00:31:15.683 }, 00:31:15.683 "claimed": true, 00:31:15.683 "claim_type": "exclusive_write", 00:31:15.683 "zoned": false, 00:31:15.683 "supported_io_types": { 00:31:15.683 "read": true, 00:31:15.683 "write": true, 00:31:15.683 "unmap": true, 00:31:15.683 "write_zeroes": true, 00:31:15.683 "flush": true, 00:31:15.683 "reset": true, 00:31:15.683 "compare": false, 00:31:15.683 "compare_and_write": false, 00:31:15.683 "abort": true, 00:31:15.683 "nvme_admin": false, 00:31:15.683 "nvme_io": false 00:31:15.683 }, 00:31:15.683 "memory_domains": [ 00:31:15.683 { 00:31:15.683 "dma_device_id": "system", 00:31:15.683 "dma_device_type": 1 00:31:15.683 }, 00:31:15.683 { 00:31:15.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:31:15.683 "dma_device_type": 2 00:31:15.683 } 00:31:15.683 ], 00:31:15.683 "driver_specific": { 00:31:15.683 "passthru": { 00:31:15.683 "name": "pt3", 00:31:15.683 "base_bdev_name": "malloc3" 00:31:15.683 } 00:31:15.683 } 00:31:15.683 }' 00:31:15.683 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.683 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:31:15.942 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:31:15.942 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.942 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:31:15.942 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:31:15.942 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.942 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:31:15.942 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:31:15.942 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:16.200 12:13:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:31:16.200 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:31:16.200 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:16.200 12:13:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:31:16.458 [2024-07-21 12:13:15.154687] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:16.458 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f3fbda6f-1a3c-4de8-8aa4-88326b3fa946 '!=' f3fbda6f-1a3c-4de8-8aa4-88326b3fa946 ']' 00:31:16.458 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:31:16.458 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:31:16.458 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:31:16.458 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:31:16.717 [2024-07-21 12:13:15.422688] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:16.717 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:16.974 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:16.974 "name": "raid_bdev1", 00:31:16.974 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:16.974 "strip_size_kb": 64, 00:31:16.974 "state": "online", 00:31:16.974 "raid_level": "raid5f", 00:31:16.974 "superblock": true, 00:31:16.974 "num_base_bdevs": 3, 00:31:16.974 "num_base_bdevs_discovered": 2, 00:31:16.974 "num_base_bdevs_operational": 2, 00:31:16.974 "base_bdevs_list": [ 00:31:16.974 { 00:31:16.974 "name": null, 00:31:16.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:16.974 "is_configured": false, 00:31:16.974 "data_offset": 2048, 00:31:16.974 "data_size": 63488 00:31:16.974 }, 00:31:16.974 { 00:31:16.974 "name": "pt2", 00:31:16.974 
"uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:16.974 "is_configured": true, 00:31:16.974 "data_offset": 2048, 00:31:16.974 "data_size": 63488 00:31:16.974 }, 00:31:16.974 { 00:31:16.974 "name": "pt3", 00:31:16.974 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:16.974 "is_configured": true, 00:31:16.974 "data_offset": 2048, 00:31:16.974 "data_size": 63488 00:31:16.974 } 00:31:16.974 ] 00:31:16.974 }' 00:31:16.974 12:13:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:16.974 12:13:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:17.539 12:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:17.797 [2024-07-21 12:13:16.494886] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:17.797 [2024-07-21 12:13:16.495085] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:17.797 [2024-07-21 12:13:16.495292] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:17.797 [2024-07-21 12:13:16.495474] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:17.797 [2024-07-21 12:13:16.495608] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:31:17.797 12:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:31:17.797 12:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.055 12:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:31:18.055 12:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:31:18.055 12:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:31:18.055 12:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:18.055 12:13:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:31:18.314 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:18.314 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:18.314 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:18.572 [2024-07-21 12:13:17.382957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:18.572 [2024-07-21 12:13:17.383210] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:18.572 [2024-07-21 12:13:17.383299] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:31:18.572 [2024-07-21 12:13:17.383555] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:18.572 [2024-07-21 12:13:17.385947] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:18.572 [2024-07-21 12:13:17.386117] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:18.572 [2024-07-21 12:13:17.386365] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:18.572 [2024-07-21 12:13:17.386534] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:18.572 pt2 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:18.572 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:18.830 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:18.830 "name": "raid_bdev1", 00:31:18.830 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:18.830 "strip_size_kb": 64, 00:31:18.830 "state": "configuring", 00:31:18.830 "raid_level": "raid5f", 00:31:18.830 "superblock": true, 00:31:18.830 "num_base_bdevs": 3, 00:31:18.830 "num_base_bdevs_discovered": 1, 00:31:18.830 "num_base_bdevs_operational": 2, 00:31:18.830 "base_bdevs_list": [ 00:31:18.830 { 00:31:18.830 "name": null, 00:31:18.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:18.830 "is_configured": false, 00:31:18.830 "data_offset": 2048, 00:31:18.830 "data_size": 63488 00:31:18.830 }, 00:31:18.830 { 00:31:18.830 "name": "pt2", 00:31:18.830 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:18.830 "is_configured": true, 00:31:18.830 "data_offset": 2048, 00:31:18.830 "data_size": 63488 00:31:18.830 }, 00:31:18.830 { 00:31:18.830 "name": null, 00:31:18.830 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:18.830 "is_configured": false, 00:31:18.830 "data_offset": 2048, 00:31:18.830 "data_size": 63488 00:31:18.830 } 00:31:18.830 ] 00:31:18.830 }' 00:31:18.830 12:13:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:31:18.830 12:13:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:19.775 [2024-07-21 12:13:18.591266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:19.775 [2024-07-21 12:13:18.591594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:19.775 [2024-07-21 12:13:18.591764] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:31:19.775 [2024-07-21 12:13:18.591957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:19.775 [2024-07-21 12:13:18.592674] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:19.775 [2024-07-21 12:13:18.592860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:19.775 [2024-07-21 12:13:18.593138] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:19.775 [2024-07-21 12:13:18.593291] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:19.775 [2024-07-21 12:13:18.593567] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:31:19.775 [2024-07-21 12:13:18.593675] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:19.775 [2024-07-21 12:13:18.593797] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:19.775 [2024-07-21 12:13:18.594714] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:31:19.775 [2024-07-21 12:13:18.594871] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:31:19.775 [2024-07-21 12:13:18.595292] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:19.775 pt3 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:19.775 12:13:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:19.775 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:20.033 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:20.033 "name": "raid_bdev1", 00:31:20.033 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:20.033 "strip_size_kb": 64, 00:31:20.033 "state": "online", 00:31:20.033 "raid_level": "raid5f", 00:31:20.033 "superblock": true, 00:31:20.033 "num_base_bdevs": 3, 00:31:20.033 "num_base_bdevs_discovered": 2, 00:31:20.033 "num_base_bdevs_operational": 2, 00:31:20.033 "base_bdevs_list": [ 00:31:20.033 { 00:31:20.033 "name": null, 00:31:20.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:20.033 "is_configured": false, 00:31:20.033 "data_offset": 2048, 00:31:20.033 "data_size": 63488 00:31:20.033 }, 00:31:20.033 { 00:31:20.033 "name": "pt2", 00:31:20.033 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:20.033 "is_configured": true, 00:31:20.033 "data_offset": 2048, 00:31:20.033 "data_size": 63488 00:31:20.033 }, 00:31:20.033 { 00:31:20.033 "name": "pt3", 00:31:20.033 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:20.033 "is_configured": true, 00:31:20.033 "data_offset": 2048, 00:31:20.033 "data_size": 63488 00:31:20.033 } 00:31:20.033 ] 00:31:20.033 }' 00:31:20.033 12:13:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:20.033 12:13:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:20.968 12:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:20.968 [2024-07-21 12:13:19.735585] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:20.968 [2024-07-21 12:13:19.735813] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:20.968 [2024-07-21 12:13:19.736004] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:20.968 [2024-07-21 12:13:19.736204] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:20.968 [2024-07-21 12:13:19.736308] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:31:20.968 12:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:31:20.968 12:13:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:21.226 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:31:21.226 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:31:21.226 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:31:21.226 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:31:21.226 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:31:21.484 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create 
-b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:21.742 [2024-07-21 12:13:20.607802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:21.742 [2024-07-21 12:13:20.608099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:21.742 [2024-07-21 12:13:20.608344] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:31:21.742 [2024-07-21 12:13:20.608506] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:21.999 [2024-07-21 12:13:20.611852] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:21.999 [2024-07-21 12:13:20.612042] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:21.999 [2024-07-21 12:13:20.612334] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:21.999 [2024-07-21 12:13:20.612493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:21.999 [2024-07-21 12:13:20.612829] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:21.999 [2024-07-21 12:13:20.613011] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:21.999 [2024-07-21 12:13:20.613088] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state configuring 00:31:21.999 [2024-07-21 12:13:20.613426] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:21.999 pt1 00:31:21.999 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:31:21.999 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:31:21.999 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:21.999 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:31:21.999 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:21.999 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:22.000 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:22.000 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:22.000 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:22.000 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:22.000 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:22.000 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:22.000 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.258 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:22.258 "name": "raid_bdev1", 00:31:22.258 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:22.258 "strip_size_kb": 64, 00:31:22.258 "state": "configuring", 00:31:22.258 "raid_level": "raid5f", 00:31:22.258 "superblock": true, 00:31:22.258 "num_base_bdevs": 3, 00:31:22.258 
"num_base_bdevs_discovered": 1, 00:31:22.258 "num_base_bdevs_operational": 2, 00:31:22.258 "base_bdevs_list": [ 00:31:22.258 { 00:31:22.258 "name": null, 00:31:22.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.258 "is_configured": false, 00:31:22.258 "data_offset": 2048, 00:31:22.258 "data_size": 63488 00:31:22.258 }, 00:31:22.258 { 00:31:22.258 "name": "pt2", 00:31:22.258 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:22.258 "is_configured": true, 00:31:22.258 "data_offset": 2048, 00:31:22.258 "data_size": 63488 00:31:22.258 }, 00:31:22.258 { 00:31:22.258 "name": null, 00:31:22.258 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:22.258 "is_configured": false, 00:31:22.258 "data_offset": 2048, 00:31:22.258 "data_size": 63488 00:31:22.258 } 00:31:22.258 ] 00:31:22.258 }' 00:31:22.258 12:13:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:22.258 12:13:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.823 12:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:22.823 12:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:31:23.081 12:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:31:23.081 12:13:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:23.340 [2024-07-21 12:13:22.055262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:23.340 [2024-07-21 12:13:22.055639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:23.340 [2024-07-21 12:13:22.055725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:31:23.340 [2024-07-21 12:13:22.056033] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:23.340 [2024-07-21 12:13:22.056642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:23.340 [2024-07-21 12:13:22.056709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:23.340 [2024-07-21 12:13:22.056846] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:23.340 [2024-07-21 12:13:22.056900] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:23.340 [2024-07-21 12:13:22.057119] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:31:23.340 [2024-07-21 12:13:22.057164] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:23.340 [2024-07-21 12:13:22.057260] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:31:23.340 [2024-07-21 12:13:22.058145] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:31:23.340 [2024-07-21 12:13:22.058294] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:31:23.340 [2024-07-21 12:13:22.058618] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:23.340 pt3 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 
online raid5f 64 2 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:23.340 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.598 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:23.598 "name": "raid_bdev1", 00:31:23.598 "uuid": "f3fbda6f-1a3c-4de8-8aa4-88326b3fa946", 00:31:23.598 "strip_size_kb": 64, 00:31:23.598 "state": "online", 00:31:23.598 "raid_level": "raid5f", 00:31:23.598 "superblock": true, 00:31:23.598 "num_base_bdevs": 3, 00:31:23.598 "num_base_bdevs_discovered": 2, 00:31:23.598 "num_base_bdevs_operational": 2, 00:31:23.598 "base_bdevs_list": [ 00:31:23.598 { 00:31:23.598 "name": null, 00:31:23.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.598 "is_configured": false, 00:31:23.598 "data_offset": 2048, 00:31:23.598 "data_size": 63488 00:31:23.598 }, 00:31:23.598 { 00:31:23.598 "name": "pt2", 00:31:23.598 "uuid": "eba1ef94-af13-5eb7-b609-b656c013664c", 00:31:23.598 "is_configured": true, 00:31:23.598 "data_offset": 2048, 00:31:23.598 "data_size": 63488 00:31:23.598 }, 00:31:23.598 { 00:31:23.598 "name": "pt3", 00:31:23.598 "uuid": "c544a070-d418-52f6-8021-b2388e0dcae8", 00:31:23.598 "is_configured": true, 00:31:23.598 "data_offset": 2048, 00:31:23.598 "data_size": 63488 00:31:23.598 } 00:31:23.598 ] 00:31:23.598 }' 00:31:23.598 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:23.598 12:13:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.164 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:31:24.164 12:13:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:24.421 12:13:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:31:24.421 12:13:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:24.421 12:13:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:31:24.678 [2024-07-21 12:13:23.469157] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:24.678 
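As an aside on the plumbing traced above: every verify_raid_bdev_state call in this run reduces to a single RPC query plus a jq filter, and the superblock test's closing assertion is a second query for the bdev UUID (compared before and after the passthru base bdevs are torn down and re-created). A minimal manual reproduction, assuming the repo-relative scripts/rpc.py and the /var/tmp/spdk-raid.sock socket used by this run, is sketched below; both invocations appear verbatim in the trace.

    # Dump the raid bdev state (state, raid_level, strip_size_kb, discovered vs.
    # operational base bdevs) exactly as the test inspects it
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'

    # Fetch the UUID that the test compares across the delete/re-add cycle
    scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 \
        | jq -r '.[] | .uuid'
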
12:13:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' f3fbda6f-1a3c-4de8-8aa4-88326b3fa946 '!=' f3fbda6f-1a3c-4de8-8aa4-88326b3fa946 ']' 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 162146 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 162146 ']' 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # kill -0 162146 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # uname 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 162146 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 162146' 00:31:24.678 killing process with pid 162146 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@965 -- # kill 162146 00:31:24.678 [2024-07-21 12:13:23.510036] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:24.678 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # wait 162146 00:31:24.678 [2024-07-21 12:13:23.510254] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:24.678 [2024-07-21 12:13:23.510492] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:24.678 [2024-07-21 12:13:23.510694] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:31:24.936 [2024-07-21 12:13:23.549115] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:25.194 ************************************ 00:31:25.194 END TEST raid5f_superblock_test 00:31:25.194 ************************************ 00:31:25.194 12:13:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:31:25.194 00:31:25.194 real 0m22.855s 00:31:25.194 user 0m43.158s 00:31:25.194 sys 0m2.805s 00:31:25.194 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:25.194 12:13:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.194 12:13:23 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:31:25.194 12:13:23 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:31:25.194 12:13:23 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:31:25.194 12:13:23 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:25.194 12:13:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:25.194 ************************************ 00:31:25.194 START TEST raid5f_rebuild_test 00:31:25.194 ************************************ 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 3 false false true 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local 
num_base_bdevs=3 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=162894 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 162894 /var/tmp/spdk-raid.sock 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 162894 ']' 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:25.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:25.194 12:13:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.194 [2024-07-21 12:13:23.999366] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:25.194 [2024-07-21 12:13:23.999765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162894 ] 00:31:25.194 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:25.194 Zero copy mechanism will not be used. 00:31:25.452 [2024-07-21 12:13:24.171279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.452 [2024-07-21 12:13:24.280452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:25.710 [2024-07-21 12:13:24.358473] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:26.293 12:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:26.293 12:13:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:31:26.293 12:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:26.293 12:13:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:26.293 BaseBdev1_malloc 00:31:26.566 12:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:26.566 [2024-07-21 12:13:25.378581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:26.566 [2024-07-21 12:13:25.378893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:26.566 [2024-07-21 12:13:25.378988] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:31:26.566 [2024-07-21 12:13:25.379297] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:26.566 [2024-07-21 12:13:25.382153] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:26.566 [2024-07-21 12:13:25.382346] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:26.566 BaseBdev1 00:31:26.566 12:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:26.566 12:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:26.825 BaseBdev2_malloc 00:31:26.825 12:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:27.084 [2024-07-21 12:13:25.820698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:27.084 [2024-07-21 12:13:25.820992] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:27.084 [2024-07-21 12:13:25.821184] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:27.084 [2024-07-21 12:13:25.821385] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:27.084 [2024-07-21 12:13:25.823854] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:27.084 [2024-07-21 12:13:25.824073] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:27.084 BaseBdev2 00:31:27.084 12:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:27.084 12:13:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:27.342 BaseBdev3_malloc 00:31:27.342 12:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:27.601 [2024-07-21 12:13:26.223843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:27.602 [2024-07-21 12:13:26.224083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:27.602 [2024-07-21 12:13:26.224162] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:31:27.602 [2024-07-21 12:13:26.224479] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:27.602 [2024-07-21 12:13:26.226936] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:27.602 [2024-07-21 12:13:26.227125] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:27.602 BaseBdev3 00:31:27.602 12:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:27.602 spare_malloc 00:31:27.602 12:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:27.861 spare_delay 00:31:27.861 12:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:28.120 [2024-07-21 12:13:26.846096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:28.120 [2024-07-21 12:13:26.846280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:28.120 [2024-07-21 12:13:26.846348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:28.120 [2024-07-21 12:13:26.846519] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:28.120 [2024-07-21 12:13:26.849016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:28.120 [2024-07-21 12:13:26.849164] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:28.120 spare 00:31:28.120 12:13:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 
00:31:28.378 [2024-07-21 12:13:27.102244] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:28.378 [2024-07-21 12:13:27.104375] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:28.378 [2024-07-21 12:13:27.104569] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:28.378 [2024-07-21 12:13:27.104727] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:31:28.378 [2024-07-21 12:13:27.104776] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:31:28.378 [2024-07-21 12:13:27.105053] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:31:28.378 [2024-07-21 12:13:27.105946] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:31:28.378 [2024-07-21 12:13:27.106083] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:31:28.378 [2024-07-21 12:13:27.106390] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:28.378 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:28.636 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:28.636 "name": "raid_bdev1", 00:31:28.636 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:28.636 "strip_size_kb": 64, 00:31:28.636 "state": "online", 00:31:28.636 "raid_level": "raid5f", 00:31:28.636 "superblock": false, 00:31:28.636 "num_base_bdevs": 3, 00:31:28.636 "num_base_bdevs_discovered": 3, 00:31:28.636 "num_base_bdevs_operational": 3, 00:31:28.636 "base_bdevs_list": [ 00:31:28.636 { 00:31:28.636 "name": "BaseBdev1", 00:31:28.636 "uuid": "0d68797d-d956-58c2-b254-13cc2ccda6fd", 00:31:28.636 "is_configured": true, 00:31:28.636 "data_offset": 0, 00:31:28.636 "data_size": 65536 00:31:28.636 }, 00:31:28.636 { 00:31:28.636 "name": "BaseBdev2", 00:31:28.636 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:28.636 "is_configured": true, 00:31:28.636 "data_offset": 0, 00:31:28.636 "data_size": 65536 00:31:28.636 }, 00:31:28.636 { 00:31:28.636 "name": "BaseBdev3", 00:31:28.636 
"uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:28.636 "is_configured": true, 00:31:28.636 "data_offset": 0, 00:31:28.636 "data_size": 65536 00:31:28.636 } 00:31:28.636 ] 00:31:28.636 }' 00:31:28.636 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:28.636 12:13:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:29.203 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:29.203 12:13:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:29.461 [2024-07-21 12:13:28.206794] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:29.461 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=131072 00:31:29.461 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:29.461 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:29.719 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:29.978 [2024-07-21 12:13:28.750785] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:29.978 /dev/nbd0 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:29.978 12:13:28 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:29.978 1+0 records in 00:31:29.978 1+0 records out 00:31:29.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459455 s, 8.9 MB/s 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 128 00:31:29.978 12:13:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:31:30.545 512+0 records in 00:31:30.545 512+0 records out 00:31:30.545 67108864 bytes (67 MB, 64 MiB) copied, 0.385664 s, 174 MB/s 00:31:30.545 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:30.545 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:30.545 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:30.545 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:30.545 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:30.545 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:30.545 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:30.803 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:30.803 [2024-07-21 12:13:29.490857] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:30.803 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:30.803 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:30.803 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:30.803 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:30.803 12:13:29 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:30.803 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:30.803 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:30.803 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:31.061 [2024-07-21 12:13:29.746426] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:31.061 12:13:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:31.319 12:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:31.319 "name": "raid_bdev1", 00:31:31.319 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:31.319 "strip_size_kb": 64, 00:31:31.319 "state": "online", 00:31:31.319 "raid_level": "raid5f", 00:31:31.319 "superblock": false, 00:31:31.319 "num_base_bdevs": 3, 00:31:31.319 "num_base_bdevs_discovered": 2, 00:31:31.319 "num_base_bdevs_operational": 2, 00:31:31.319 "base_bdevs_list": [ 00:31:31.319 { 00:31:31.319 "name": null, 00:31:31.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:31.319 "is_configured": false, 00:31:31.319 "data_offset": 0, 00:31:31.319 "data_size": 65536 00:31:31.319 }, 00:31:31.319 { 00:31:31.319 "name": "BaseBdev2", 00:31:31.319 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:31.319 "is_configured": true, 00:31:31.319 "data_offset": 0, 00:31:31.319 "data_size": 65536 00:31:31.319 }, 00:31:31.319 { 00:31:31.319 "name": "BaseBdev3", 00:31:31.319 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:31.319 "is_configured": true, 00:31:31.319 "data_offset": 0, 00:31:31.319 "data_size": 65536 00:31:31.319 } 00:31:31.319 ] 00:31:31.319 }' 00:31:31.319 12:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:31.319 12:13:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.887 12:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 
00:31:32.145 [2024-07-21 12:13:30.954666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:32.145 [2024-07-21 12:13:30.960956] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:31:32.145 [2024-07-21 12:13:30.963489] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:32.145 12:13:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:33.519 12:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:33.519 12:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:33.519 12:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:33.519 12:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:33.519 12:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:33.519 12:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:33.519 12:13:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:33.519 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:33.519 "name": "raid_bdev1", 00:31:33.519 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:33.519 "strip_size_kb": 64, 00:31:33.519 "state": "online", 00:31:33.519 "raid_level": "raid5f", 00:31:33.519 "superblock": false, 00:31:33.519 "num_base_bdevs": 3, 00:31:33.519 "num_base_bdevs_discovered": 3, 00:31:33.519 "num_base_bdevs_operational": 3, 00:31:33.519 "process": { 00:31:33.519 "type": "rebuild", 00:31:33.519 "target": "spare", 00:31:33.519 "progress": { 00:31:33.519 "blocks": 24576, 00:31:33.519 "percent": 18 00:31:33.519 } 00:31:33.519 }, 00:31:33.519 "base_bdevs_list": [ 00:31:33.519 { 00:31:33.519 "name": "spare", 00:31:33.519 "uuid": "a81108c3-9def-51ba-b956-a1b0fc01aa54", 00:31:33.519 "is_configured": true, 00:31:33.519 "data_offset": 0, 00:31:33.519 "data_size": 65536 00:31:33.519 }, 00:31:33.519 { 00:31:33.519 "name": "BaseBdev2", 00:31:33.519 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:33.519 "is_configured": true, 00:31:33.519 "data_offset": 0, 00:31:33.519 "data_size": 65536 00:31:33.519 }, 00:31:33.519 { 00:31:33.519 "name": "BaseBdev3", 00:31:33.519 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:33.519 "is_configured": true, 00:31:33.519 "data_offset": 0, 00:31:33.519 "data_size": 65536 00:31:33.519 } 00:31:33.519 ] 00:31:33.519 }' 00:31:33.519 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:33.519 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:33.519 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:33.519 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:33.519 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:33.778 [2024-07-21 12:13:32.613698] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:34.036 [2024-07-21 12:13:32.679868] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:34.036 [2024-07-21 12:13:32.680124] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:34.036 [2024-07-21 12:13:32.680200] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:34.036 [2024-07-21 12:13:32.680323] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.036 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:34.293 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:34.293 "name": "raid_bdev1", 00:31:34.293 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:34.293 "strip_size_kb": 64, 00:31:34.293 "state": "online", 00:31:34.293 "raid_level": "raid5f", 00:31:34.293 "superblock": false, 00:31:34.293 "num_base_bdevs": 3, 00:31:34.293 "num_base_bdevs_discovered": 2, 00:31:34.293 "num_base_bdevs_operational": 2, 00:31:34.293 "base_bdevs_list": [ 00:31:34.293 { 00:31:34.293 "name": null, 00:31:34.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:34.293 "is_configured": false, 00:31:34.293 "data_offset": 0, 00:31:34.293 "data_size": 65536 00:31:34.293 }, 00:31:34.293 { 00:31:34.293 "name": "BaseBdev2", 00:31:34.293 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:34.293 "is_configured": true, 00:31:34.293 "data_offset": 0, 00:31:34.293 "data_size": 65536 00:31:34.293 }, 00:31:34.293 { 00:31:34.293 "name": "BaseBdev3", 00:31:34.293 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:34.293 "is_configured": true, 00:31:34.293 "data_offset": 0, 00:31:34.293 "data_size": 65536 00:31:34.293 } 00:31:34.293 ] 00:31:34.293 }' 00:31:34.293 12:13:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:34.293 12:13:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:34.858 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:34.858 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:34.858 12:13:33 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:34.858 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:34.858 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:34.858 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:34.858 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:35.115 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:35.115 "name": "raid_bdev1", 00:31:35.115 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:35.115 "strip_size_kb": 64, 00:31:35.115 "state": "online", 00:31:35.115 "raid_level": "raid5f", 00:31:35.115 "superblock": false, 00:31:35.115 "num_base_bdevs": 3, 00:31:35.115 "num_base_bdevs_discovered": 2, 00:31:35.115 "num_base_bdevs_operational": 2, 00:31:35.115 "base_bdevs_list": [ 00:31:35.115 { 00:31:35.115 "name": null, 00:31:35.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:35.115 "is_configured": false, 00:31:35.115 "data_offset": 0, 00:31:35.115 "data_size": 65536 00:31:35.115 }, 00:31:35.115 { 00:31:35.115 "name": "BaseBdev2", 00:31:35.115 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:35.115 "is_configured": true, 00:31:35.115 "data_offset": 0, 00:31:35.115 "data_size": 65536 00:31:35.115 }, 00:31:35.115 { 00:31:35.115 "name": "BaseBdev3", 00:31:35.115 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:35.115 "is_configured": true, 00:31:35.115 "data_offset": 0, 00:31:35.115 "data_size": 65536 00:31:35.115 } 00:31:35.115 ] 00:31:35.115 }' 00:31:35.115 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:35.115 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:35.115 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:35.115 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:35.115 12:13:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:35.373 [2024-07-21 12:13:34.121568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:35.373 [2024-07-21 12:13:34.128204] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b410 00:31:35.373 [2024-07-21 12:13:34.130733] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:35.373 12:13:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:36.306 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:36.306 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:36.306 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:36.306 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:36.306 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:36.306 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.306 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.564 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:36.564 "name": "raid_bdev1", 00:31:36.564 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:36.564 "strip_size_kb": 64, 00:31:36.564 "state": "online", 00:31:36.564 "raid_level": "raid5f", 00:31:36.564 "superblock": false, 00:31:36.564 "num_base_bdevs": 3, 00:31:36.564 "num_base_bdevs_discovered": 3, 00:31:36.564 "num_base_bdevs_operational": 3, 00:31:36.564 "process": { 00:31:36.564 "type": "rebuild", 00:31:36.564 "target": "spare", 00:31:36.564 "progress": { 00:31:36.564 "blocks": 24576, 00:31:36.564 "percent": 18 00:31:36.564 } 00:31:36.564 }, 00:31:36.564 "base_bdevs_list": [ 00:31:36.564 { 00:31:36.564 "name": "spare", 00:31:36.564 "uuid": "a81108c3-9def-51ba-b956-a1b0fc01aa54", 00:31:36.564 "is_configured": true, 00:31:36.564 "data_offset": 0, 00:31:36.564 "data_size": 65536 00:31:36.564 }, 00:31:36.564 { 00:31:36.564 "name": "BaseBdev2", 00:31:36.564 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:36.564 "is_configured": true, 00:31:36.564 "data_offset": 0, 00:31:36.564 "data_size": 65536 00:31:36.564 }, 00:31:36.564 { 00:31:36.564 "name": "BaseBdev3", 00:31:36.564 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:36.564 "is_configured": true, 00:31:36.564 "data_offset": 0, 00:31:36.564 "data_size": 65536 00:31:36.564 } 00:31:36.564 ] 00:31:36.564 }' 00:31:36.564 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1100 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:36.821 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:31:36.821 "name": "raid_bdev1", 00:31:36.821 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:36.821 "strip_size_kb": 64, 00:31:36.821 "state": "online", 00:31:36.821 "raid_level": "raid5f", 00:31:36.821 "superblock": false, 00:31:36.821 "num_base_bdevs": 3, 00:31:36.821 "num_base_bdevs_discovered": 3, 00:31:36.821 "num_base_bdevs_operational": 3, 00:31:36.821 "process": { 00:31:36.821 "type": "rebuild", 00:31:36.821 "target": "spare", 00:31:36.821 "progress": { 00:31:36.821 "blocks": 30720, 00:31:36.821 "percent": 23 00:31:36.821 } 00:31:36.821 }, 00:31:36.821 "base_bdevs_list": [ 00:31:36.821 { 00:31:36.821 "name": "spare", 00:31:36.821 "uuid": "a81108c3-9def-51ba-b956-a1b0fc01aa54", 00:31:36.822 "is_configured": true, 00:31:36.822 "data_offset": 0, 00:31:36.822 "data_size": 65536 00:31:36.822 }, 00:31:36.822 { 00:31:36.822 "name": "BaseBdev2", 00:31:36.822 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:36.822 "is_configured": true, 00:31:36.822 "data_offset": 0, 00:31:36.822 "data_size": 65536 00:31:36.822 }, 00:31:36.822 { 00:31:36.822 "name": "BaseBdev3", 00:31:36.822 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:36.822 "is_configured": true, 00:31:36.822 "data_offset": 0, 00:31:36.822 "data_size": 65536 00:31:36.822 } 00:31:36.822 ] 00:31:36.822 }' 00:31:36.822 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:37.079 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:37.079 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:37.079 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:37.079 12:13:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:38.012 12:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:38.012 12:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:38.012 12:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:38.012 12:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:38.012 12:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:38.013 12:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:38.013 12:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:38.013 12:13:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:38.270 12:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:38.270 "name": "raid_bdev1", 00:31:38.270 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:38.270 "strip_size_kb": 64, 00:31:38.270 "state": "online", 00:31:38.270 "raid_level": "raid5f", 00:31:38.270 "superblock": false, 00:31:38.270 "num_base_bdevs": 3, 00:31:38.270 "num_base_bdevs_discovered": 3, 00:31:38.270 "num_base_bdevs_operational": 3, 00:31:38.270 "process": { 00:31:38.270 "type": "rebuild", 00:31:38.270 "target": "spare", 00:31:38.270 "progress": { 00:31:38.270 "blocks": 57344, 00:31:38.270 "percent": 43 00:31:38.270 } 00:31:38.270 }, 00:31:38.270 "base_bdevs_list": [ 00:31:38.270 { 00:31:38.270 
"name": "spare", 00:31:38.270 "uuid": "a81108c3-9def-51ba-b956-a1b0fc01aa54", 00:31:38.270 "is_configured": true, 00:31:38.270 "data_offset": 0, 00:31:38.270 "data_size": 65536 00:31:38.270 }, 00:31:38.270 { 00:31:38.270 "name": "BaseBdev2", 00:31:38.270 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:38.270 "is_configured": true, 00:31:38.270 "data_offset": 0, 00:31:38.270 "data_size": 65536 00:31:38.270 }, 00:31:38.270 { 00:31:38.270 "name": "BaseBdev3", 00:31:38.270 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:38.270 "is_configured": true, 00:31:38.270 "data_offset": 0, 00:31:38.270 "data_size": 65536 00:31:38.270 } 00:31:38.270 ] 00:31:38.270 }' 00:31:38.270 12:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:38.270 12:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:38.270 12:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:38.270 12:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:38.270 12:13:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:39.649 "name": "raid_bdev1", 00:31:39.649 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:39.649 "strip_size_kb": 64, 00:31:39.649 "state": "online", 00:31:39.649 "raid_level": "raid5f", 00:31:39.649 "superblock": false, 00:31:39.649 "num_base_bdevs": 3, 00:31:39.649 "num_base_bdevs_discovered": 3, 00:31:39.649 "num_base_bdevs_operational": 3, 00:31:39.649 "process": { 00:31:39.649 "type": "rebuild", 00:31:39.649 "target": "spare", 00:31:39.649 "progress": { 00:31:39.649 "blocks": 83968, 00:31:39.649 "percent": 64 00:31:39.649 } 00:31:39.649 }, 00:31:39.649 "base_bdevs_list": [ 00:31:39.649 { 00:31:39.649 "name": "spare", 00:31:39.649 "uuid": "a81108c3-9def-51ba-b956-a1b0fc01aa54", 00:31:39.649 "is_configured": true, 00:31:39.649 "data_offset": 0, 00:31:39.649 "data_size": 65536 00:31:39.649 }, 00:31:39.649 { 00:31:39.649 "name": "BaseBdev2", 00:31:39.649 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:39.649 "is_configured": true, 00:31:39.649 "data_offset": 0, 00:31:39.649 "data_size": 65536 00:31:39.649 }, 00:31:39.649 { 00:31:39.649 "name": "BaseBdev3", 00:31:39.649 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:39.649 "is_configured": true, 00:31:39.649 "data_offset": 0, 00:31:39.649 "data_size": 65536 00:31:39.649 } 
00:31:39.649 ] 00:31:39.649 }' 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:39.649 12:13:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:40.580 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:40.580 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:40.580 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:40.580 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:40.580 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:40.580 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:40.580 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:40.837 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.837 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:40.837 "name": "raid_bdev1", 00:31:40.837 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:40.837 "strip_size_kb": 64, 00:31:40.837 "state": "online", 00:31:40.837 "raid_level": "raid5f", 00:31:40.837 "superblock": false, 00:31:40.837 "num_base_bdevs": 3, 00:31:40.837 "num_base_bdevs_discovered": 3, 00:31:40.837 "num_base_bdevs_operational": 3, 00:31:40.837 "process": { 00:31:40.837 "type": "rebuild", 00:31:40.837 "target": "spare", 00:31:40.837 "progress": { 00:31:40.837 "blocks": 110592, 00:31:40.837 "percent": 84 00:31:40.837 } 00:31:40.837 }, 00:31:40.837 "base_bdevs_list": [ 00:31:40.837 { 00:31:40.837 "name": "spare", 00:31:40.837 "uuid": "a81108c3-9def-51ba-b956-a1b0fc01aa54", 00:31:40.837 "is_configured": true, 00:31:40.837 "data_offset": 0, 00:31:40.837 "data_size": 65536 00:31:40.837 }, 00:31:40.837 { 00:31:40.837 "name": "BaseBdev2", 00:31:40.837 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:40.837 "is_configured": true, 00:31:40.837 "data_offset": 0, 00:31:40.837 "data_size": 65536 00:31:40.837 }, 00:31:40.837 { 00:31:40.837 "name": "BaseBdev3", 00:31:40.837 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:40.837 "is_configured": true, 00:31:40.837 "data_offset": 0, 00:31:40.837 "data_size": 65536 00:31:40.837 } 00:31:40.837 ] 00:31:40.837 }' 00:31:40.837 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:40.837 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:40.837 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:41.094 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:41.094 12:13:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:42.028 [2024-07-21 12:13:40.589885] 
bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:42.028 [2024-07-21 12:13:40.590124] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:42.028 [2024-07-21 12:13:40.590344] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:42.028 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:42.028 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:42.028 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:42.028 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:42.028 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:42.028 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:42.028 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.028 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.287 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:42.287 "name": "raid_bdev1", 00:31:42.287 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:42.287 "strip_size_kb": 64, 00:31:42.287 "state": "online", 00:31:42.287 "raid_level": "raid5f", 00:31:42.287 "superblock": false, 00:31:42.287 "num_base_bdevs": 3, 00:31:42.287 "num_base_bdevs_discovered": 3, 00:31:42.287 "num_base_bdevs_operational": 3, 00:31:42.287 "base_bdevs_list": [ 00:31:42.287 { 00:31:42.287 "name": "spare", 00:31:42.287 "uuid": "a81108c3-9def-51ba-b956-a1b0fc01aa54", 00:31:42.287 "is_configured": true, 00:31:42.287 "data_offset": 0, 00:31:42.287 "data_size": 65536 00:31:42.287 }, 00:31:42.287 { 00:31:42.287 "name": "BaseBdev2", 00:31:42.287 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:42.287 "is_configured": true, 00:31:42.287 "data_offset": 0, 00:31:42.287 "data_size": 65536 00:31:42.287 }, 00:31:42.287 { 00:31:42.287 "name": "BaseBdev3", 00:31:42.287 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:42.287 "is_configured": true, 00:31:42.287 "data_offset": 0, 00:31:42.287 "data_size": 65536 00:31:42.287 } 00:31:42.287 ] 00:31:42.287 }' 00:31:42.287 12:13:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.287 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:42.545 "name": "raid_bdev1", 00:31:42.545 "uuid": "4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:42.545 "strip_size_kb": 64, 00:31:42.545 "state": "online", 00:31:42.545 "raid_level": "raid5f", 00:31:42.545 "superblock": false, 00:31:42.545 "num_base_bdevs": 3, 00:31:42.545 "num_base_bdevs_discovered": 3, 00:31:42.545 "num_base_bdevs_operational": 3, 00:31:42.545 "base_bdevs_list": [ 00:31:42.545 { 00:31:42.545 "name": "spare", 00:31:42.545 "uuid": "a81108c3-9def-51ba-b956-a1b0fc01aa54", 00:31:42.545 "is_configured": true, 00:31:42.545 "data_offset": 0, 00:31:42.545 "data_size": 65536 00:31:42.545 }, 00:31:42.545 { 00:31:42.545 "name": "BaseBdev2", 00:31:42.545 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:42.545 "is_configured": true, 00:31:42.545 "data_offset": 0, 00:31:42.545 "data_size": 65536 00:31:42.545 }, 00:31:42.545 { 00:31:42.545 "name": "BaseBdev3", 00:31:42.545 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:42.545 "is_configured": true, 00:31:42.545 "data_offset": 0, 00:31:42.545 "data_size": 65536 00:31:42.545 } 00:31:42.545 ] 00:31:42.545 }' 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:42.545 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.804 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:42.804 "name": "raid_bdev1", 00:31:42.804 "uuid": 
"4c8a6209-d84f-4457-aac7-04905dd68a8d", 00:31:42.804 "strip_size_kb": 64, 00:31:42.804 "state": "online", 00:31:42.804 "raid_level": "raid5f", 00:31:42.804 "superblock": false, 00:31:42.804 "num_base_bdevs": 3, 00:31:42.804 "num_base_bdevs_discovered": 3, 00:31:42.804 "num_base_bdevs_operational": 3, 00:31:42.804 "base_bdevs_list": [ 00:31:42.804 { 00:31:42.804 "name": "spare", 00:31:42.804 "uuid": "a81108c3-9def-51ba-b956-a1b0fc01aa54", 00:31:42.804 "is_configured": true, 00:31:42.804 "data_offset": 0, 00:31:42.804 "data_size": 65536 00:31:42.804 }, 00:31:42.804 { 00:31:42.804 "name": "BaseBdev2", 00:31:42.804 "uuid": "2f5a4ce8-84f8-5959-8159-1ad24c5f4a32", 00:31:42.804 "is_configured": true, 00:31:42.804 "data_offset": 0, 00:31:42.804 "data_size": 65536 00:31:42.804 }, 00:31:42.804 { 00:31:42.804 "name": "BaseBdev3", 00:31:42.804 "uuid": "813072a1-d9d8-51ae-bfa2-df1d954eed3b", 00:31:42.804 "is_configured": true, 00:31:42.804 "data_offset": 0, 00:31:42.804 "data_size": 65536 00:31:42.804 } 00:31:42.804 ] 00:31:42.804 }' 00:31:42.804 12:13:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:42.804 12:13:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:43.371 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:31:43.630 [2024-07-21 12:13:42.417854] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:43.630 [2024-07-21 12:13:42.418024] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:43.630 [2024-07-21 12:13:42.418245] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:43.630 [2024-07-21 12:13:42.418465] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:43.630 [2024-07-21 12:13:42.418585] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:31:43.630 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:43.630 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:43.888 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:44.146 /dev/nbd0 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:44.146 1+0 records in 00:31:44.146 1+0 records out 00:31:44.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000815887 s, 5.0 MB/s 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:44.146 12:13:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:31:44.404 /dev/nbd1 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:44.404 1+0 records in 00:31:44.404 1+0 records out 00:31:44.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087942 s, 4.7 MB/s 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:44.404 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:44.661 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 162894 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 162894 ']' 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 162894 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 162894 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 162894' 00:31:44.919 killing process with pid 162894 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@965 -- # kill 162894 00:31:44.919 Received shutdown signal, test time was about 60.000000 seconds 00:31:44.919 00:31:44.919 Latency(us) 00:31:44.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.919 =================================================================================================================== 00:31:44.919 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:44.919 12:13:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # wait 162894 00:31:44.919 [2024-07-21 12:13:43.733935] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:44.919 [2024-07-21 12:13:43.775960] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:45.486 ************************************ 00:31:45.486 END TEST raid5f_rebuild_test 00:31:45.486 ************************************ 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:31:45.486 00:31:45.486 real 0m20.146s 00:31:45.486 user 0m30.903s 00:31:45.486 sys 0m2.525s 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:45.486 12:13:44 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:31:45.486 12:13:44 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:31:45.486 12:13:44 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:45.486 12:13:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:45.486 ************************************ 00:31:45.486 START TEST raid5f_rebuild_test_sb 00:31:45.486 
************************************ 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 3 true false true 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=3 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:31:45.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
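Note: the locals traced around this point assemble the raid creation arguments for a three-disk raid5f array with an on-disk superblock. A minimal sketch of that setup, assuming the variable names shown in the trace (the real raid_rebuild_test function in bdev/bdev_raid.sh does more than this):
raid_level=raid5f
num_base_bdevs=3
superblock=true
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")           # BaseBdev1 BaseBdev2 BaseBdev3
done
create_arg=''
if [[ $raid_level != raid1 ]]; then
    strip_size=64                        # KiB, matches strip_size_kb in the RPC output
    create_arg+=" -z $strip_size"
fi
if [[ $superblock == true ]]; then
    create_arg+=' -s'                    # store a superblock on the base bdevs
fi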
00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=163426 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 163426 /var/tmp/spdk-raid.sock 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 163426 ']' 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:31:45.486 12:13:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:45.486 [2024-07-21 12:13:44.193000] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:31:45.486 [2024-07-21 12:13:44.194033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163426 ] 00:31:45.486 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:45.486 Zero copy mechanism will not be used. 
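Note: the bdevperf invocation and waitforlisten call traced just above follow the pattern below. This is a simplified stand-in for the autotest waitforlisten helper, assuming rpc_get_methods as the liveness probe; the real helper also tracks the pid and bounds its retries.
rpc_sock=/var/tmp/spdk-raid.sock
# start bdevperf in "wait for configuration" mode (-z) with RPC on a private socket
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r "$rpc_sock" -T raid_bdev1 \
    -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!
# poll until the RPC socket answers; only then does the test configure bdevs against it
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" rpc_get_methods &>/dev/null; do
    kill -0 "$raid_pid" || exit 1        # bail out if the app died during startup
    sleep 0.5
done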
00:31:45.744 [2024-07-21 12:13:44.357934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:45.744 [2024-07-21 12:13:44.433346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.744 [2024-07-21 12:13:44.503398] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:46.310 12:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:31:46.310 12:13:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:31:46.310 12:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:46.310 12:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:46.568 BaseBdev1_malloc 00:31:46.568 12:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:46.826 [2024-07-21 12:13:45.662289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:46.826 [2024-07-21 12:13:45.662391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:46.826 [2024-07-21 12:13:45.662439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:31:46.826 [2024-07-21 12:13:45.662489] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:46.826 [2024-07-21 12:13:45.664801] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:46.826 [2024-07-21 12:13:45.664853] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:46.826 BaseBdev1 00:31:46.826 12:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:46.826 12:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:47.090 BaseBdev2_malloc 00:31:47.090 12:13:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:47.359 [2024-07-21 12:13:46.131992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:47.359 [2024-07-21 12:13:46.132071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:47.359 [2024-07-21 12:13:46.132132] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:47.359 [2024-07-21 12:13:46.132170] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:47.359 [2024-07-21 12:13:46.134484] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:47.359 [2024-07-21 12:13:46.134531] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:47.359 BaseBdev2 00:31:47.359 12:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:31:47.359 12:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:47.616 BaseBdev3_malloc 00:31:47.616 12:13:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:31:47.874 [2024-07-21 12:13:46.602847] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:31:47.875 [2024-07-21 12:13:46.602909] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:47.875 [2024-07-21 12:13:46.602951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:31:47.875 [2024-07-21 12:13:46.602996] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:47.875 [2024-07-21 12:13:46.605250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:47.875 [2024-07-21 12:13:46.605301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:47.875 BaseBdev3 00:31:47.875 12:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:31:48.131 spare_malloc 00:31:48.131 12:13:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:48.388 spare_delay 00:31:48.388 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:31:48.388 [2024-07-21 12:13:47.204399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:48.388 [2024-07-21 12:13:47.204479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:48.388 [2024-07-21 12:13:47.204511] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:48.388 [2024-07-21 12:13:47.204557] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:48.388 [2024-07-21 12:13:47.206978] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:48.388 [2024-07-21 12:13:47.207034] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:48.388 spare 00:31:48.388 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:31:48.645 [2024-07-21 12:13:47.460521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:48.645 [2024-07-21 12:13:47.462174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:48.645 [2024-07-21 12:13:47.462245] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:48.645 [2024-07-21 12:13:47.462434] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:31:48.645 [2024-07-21 12:13:47.462449] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:31:48.645 [2024-07-21 12:13:47.462563] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:31:48.645 [2024-07-21 12:13:47.463221] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:31:48.645 [2024-07-21 12:13:47.463243] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:31:48.645 [2024-07-21 12:13:47.463364] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:48.645 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.902 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:48.902 "name": "raid_bdev1", 00:31:48.902 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:31:48.902 "strip_size_kb": 64, 00:31:48.902 "state": "online", 00:31:48.902 "raid_level": "raid5f", 00:31:48.902 "superblock": true, 00:31:48.902 "num_base_bdevs": 3, 00:31:48.902 "num_base_bdevs_discovered": 3, 00:31:48.902 "num_base_bdevs_operational": 3, 00:31:48.902 "base_bdevs_list": [ 00:31:48.902 { 00:31:48.902 "name": "BaseBdev1", 00:31:48.902 "uuid": "ac44a2f8-257d-53fa-bfcc-43f52d88afc6", 00:31:48.902 "is_configured": true, 00:31:48.902 "data_offset": 2048, 00:31:48.902 "data_size": 63488 00:31:48.902 }, 00:31:48.902 { 00:31:48.902 "name": "BaseBdev2", 00:31:48.902 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:31:48.902 "is_configured": true, 00:31:48.902 "data_offset": 2048, 00:31:48.902 "data_size": 63488 00:31:48.902 }, 00:31:48.902 { 00:31:48.902 "name": "BaseBdev3", 00:31:48.902 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:31:48.902 "is_configured": true, 00:31:48.902 "data_offset": 2048, 00:31:48.902 "data_size": 63488 00:31:48.902 } 00:31:48.902 ] 00:31:48.902 }' 00:31:48.902 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:48.902 12:13:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.465 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:31:49.465 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:31:49.722 [2024-07-21 12:13:48.557713] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:49.722 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@615 -- # raid_bdev_size=126976 00:31:49.722 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:49.722 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:49.979 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:49.980 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:49.980 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:49.980 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:49.980 12:13:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:50.237 [2024-07-21 12:13:49.041693] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:31:50.237 /dev/nbd0 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:31:50.237 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:50.237 1+0 records in 00:31:50.237 1+0 records out 00:31:50.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633585 s, 6.5 MB/s 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=256 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 128 00:31:50.496 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:31:50.755 496+0 records in 00:31:50.755 496+0 records out 00:31:50.755 65011712 bytes (65 MB, 62 MiB) copied, 0.370138 s, 176 MB/s 00:31:50.755 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:31:50.755 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:31:50.755 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:50.755 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:50.755 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:50.755 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:50.755 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:31:51.014 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:51.014 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:51.014 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:51.014 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:51.014 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:51.014 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:51.014 [2024-07-21 12:13:49.758778] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:51.014 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:51.014 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:51.014 12:13:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:31:51.273 [2024-07-21 12:13:49.998246] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 
2 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:51.273 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.532 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:51.532 "name": "raid_bdev1", 00:31:51.532 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:31:51.532 "strip_size_kb": 64, 00:31:51.532 "state": "online", 00:31:51.532 "raid_level": "raid5f", 00:31:51.532 "superblock": true, 00:31:51.532 "num_base_bdevs": 3, 00:31:51.532 "num_base_bdevs_discovered": 2, 00:31:51.532 "num_base_bdevs_operational": 2, 00:31:51.532 "base_bdevs_list": [ 00:31:51.532 { 00:31:51.532 "name": null, 00:31:51.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.532 "is_configured": false, 00:31:51.532 "data_offset": 2048, 00:31:51.532 "data_size": 63488 00:31:51.532 }, 00:31:51.532 { 00:31:51.532 "name": "BaseBdev2", 00:31:51.532 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:31:51.532 "is_configured": true, 00:31:51.532 "data_offset": 2048, 00:31:51.532 "data_size": 63488 00:31:51.532 }, 00:31:51.532 { 00:31:51.532 "name": "BaseBdev3", 00:31:51.532 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:31:51.532 "is_configured": true, 00:31:51.532 "data_offset": 2048, 00:31:51.532 "data_size": 63488 00:31:51.532 } 00:31:51.532 ] 00:31:51.532 }' 00:31:51.532 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:51.532 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.100 12:13:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:52.360 [2024-07-21 12:13:51.130463] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:52.360 [2024-07-21 12:13:51.136571] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028b70 00:31:52.360 [2024-07-21 12:13:51.138979] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:52.360 12:13:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:31:53.296 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:53.296 12:13:52 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:53.296 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:53.296 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:53.296 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:53.296 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:53.296 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.555 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:53.555 "name": "raid_bdev1", 00:31:53.555 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:31:53.555 "strip_size_kb": 64, 00:31:53.555 "state": "online", 00:31:53.555 "raid_level": "raid5f", 00:31:53.555 "superblock": true, 00:31:53.555 "num_base_bdevs": 3, 00:31:53.555 "num_base_bdevs_discovered": 3, 00:31:53.555 "num_base_bdevs_operational": 3, 00:31:53.555 "process": { 00:31:53.555 "type": "rebuild", 00:31:53.555 "target": "spare", 00:31:53.555 "progress": { 00:31:53.555 "blocks": 24576, 00:31:53.555 "percent": 19 00:31:53.555 } 00:31:53.555 }, 00:31:53.555 "base_bdevs_list": [ 00:31:53.555 { 00:31:53.555 "name": "spare", 00:31:53.555 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:31:53.555 "is_configured": true, 00:31:53.555 "data_offset": 2048, 00:31:53.555 "data_size": 63488 00:31:53.555 }, 00:31:53.555 { 00:31:53.555 "name": "BaseBdev2", 00:31:53.555 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:31:53.555 "is_configured": true, 00:31:53.555 "data_offset": 2048, 00:31:53.555 "data_size": 63488 00:31:53.555 }, 00:31:53.555 { 00:31:53.555 "name": "BaseBdev3", 00:31:53.555 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:31:53.555 "is_configured": true, 00:31:53.555 "data_offset": 2048, 00:31:53.555 "data_size": 63488 00:31:53.555 } 00:31:53.555 ] 00:31:53.555 }' 00:31:53.555 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:53.814 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:53.814 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:53.814 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:53.814 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:31:54.073 [2024-07-21 12:13:52.773613] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:54.073 [2024-07-21 12:13:52.854786] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:54.073 [2024-07-21 12:13:52.854869] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:54.073 [2024-07-21 12:13:52.854890] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:54.073 [2024-07-21 12:13:52.854898] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 
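Note: the verify_raid_bdev_state call whose trace starts here reduces to the check sketched below: fetch the raid bdev's JSON via bdev_raid_get_bdevs and compare a handful of fields against the expected values. This is condensed from the locals visible in the trace; the actual helper in bdev/bdev_raid.sh also validates the base bdev counts and list contents.
verify_raid_bdev_state_sketch() {        # args: name expected_state raid_level strip_size operational
    local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5 info
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
               bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r .state <<<"$info") == "$expected_state" ]] &&
        [[ $(jq -r .raid_level <<<"$info") == "$raid_level" ]] &&
        [[ $(jq -r .strip_size_kb <<<"$info") == "$strip_size" ]] &&
        [[ $(jq -r .num_base_bdevs_operational <<<"$info") == "$operational" ]]
}
# e.g. verify_raid_bdev_state_sketch raid_bdev1 online raid5f 64 2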
00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.073 12:13:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.330 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:31:54.330 "name": "raid_bdev1", 00:31:54.330 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:31:54.330 "strip_size_kb": 64, 00:31:54.330 "state": "online", 00:31:54.330 "raid_level": "raid5f", 00:31:54.330 "superblock": true, 00:31:54.330 "num_base_bdevs": 3, 00:31:54.330 "num_base_bdevs_discovered": 2, 00:31:54.330 "num_base_bdevs_operational": 2, 00:31:54.330 "base_bdevs_list": [ 00:31:54.330 { 00:31:54.330 "name": null, 00:31:54.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.331 "is_configured": false, 00:31:54.331 "data_offset": 2048, 00:31:54.331 "data_size": 63488 00:31:54.331 }, 00:31:54.331 { 00:31:54.331 "name": "BaseBdev2", 00:31:54.331 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:31:54.331 "is_configured": true, 00:31:54.331 "data_offset": 2048, 00:31:54.331 "data_size": 63488 00:31:54.331 }, 00:31:54.331 { 00:31:54.331 "name": "BaseBdev3", 00:31:54.331 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:31:54.331 "is_configured": true, 00:31:54.331 "data_offset": 2048, 00:31:54.331 "data_size": 63488 00:31:54.331 } 00:31:54.331 ] 00:31:54.331 }' 00:31:54.331 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:31:54.331 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.897 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:54.897 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:54.897 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:31:54.897 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:31:54.897 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:54.897 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:54.897 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name 
== "raid_bdev1")' 00:31:55.155 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:55.155 "name": "raid_bdev1", 00:31:55.155 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:31:55.155 "strip_size_kb": 64, 00:31:55.155 "state": "online", 00:31:55.155 "raid_level": "raid5f", 00:31:55.155 "superblock": true, 00:31:55.155 "num_base_bdevs": 3, 00:31:55.155 "num_base_bdevs_discovered": 2, 00:31:55.155 "num_base_bdevs_operational": 2, 00:31:55.155 "base_bdevs_list": [ 00:31:55.155 { 00:31:55.155 "name": null, 00:31:55.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:55.155 "is_configured": false, 00:31:55.155 "data_offset": 2048, 00:31:55.155 "data_size": 63488 00:31:55.155 }, 00:31:55.155 { 00:31:55.155 "name": "BaseBdev2", 00:31:55.155 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:31:55.155 "is_configured": true, 00:31:55.155 "data_offset": 2048, 00:31:55.155 "data_size": 63488 00:31:55.155 }, 00:31:55.155 { 00:31:55.155 "name": "BaseBdev3", 00:31:55.155 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:31:55.155 "is_configured": true, 00:31:55.155 "data_offset": 2048, 00:31:55.155 "data_size": 63488 00:31:55.155 } 00:31:55.155 ] 00:31:55.155 }' 00:31:55.155 12:13:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:55.413 12:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:31:55.413 12:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:55.413 12:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:31:55.413 12:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:31:55.413 [2024-07-21 12:13:54.261040] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:55.413 [2024-07-21 12:13:54.262994] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028d10 00:31:55.413 [2024-07-21 12:13:54.265217] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:55.413 12:13:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:56.787 "name": "raid_bdev1", 00:31:56.787 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:31:56.787 "strip_size_kb": 64, 00:31:56.787 "state": "online", 00:31:56.787 "raid_level": "raid5f", 00:31:56.787 
"superblock": true, 00:31:56.787 "num_base_bdevs": 3, 00:31:56.787 "num_base_bdevs_discovered": 3, 00:31:56.787 "num_base_bdevs_operational": 3, 00:31:56.787 "process": { 00:31:56.787 "type": "rebuild", 00:31:56.787 "target": "spare", 00:31:56.787 "progress": { 00:31:56.787 "blocks": 24576, 00:31:56.787 "percent": 19 00:31:56.787 } 00:31:56.787 }, 00:31:56.787 "base_bdevs_list": [ 00:31:56.787 { 00:31:56.787 "name": "spare", 00:31:56.787 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:31:56.787 "is_configured": true, 00:31:56.787 "data_offset": 2048, 00:31:56.787 "data_size": 63488 00:31:56.787 }, 00:31:56.787 { 00:31:56.787 "name": "BaseBdev2", 00:31:56.787 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:31:56.787 "is_configured": true, 00:31:56.787 "data_offset": 2048, 00:31:56.787 "data_size": 63488 00:31:56.787 }, 00:31:56.787 { 00:31:56.787 "name": "BaseBdev3", 00:31:56.787 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:31:56.787 "is_configured": true, 00:31:56.787 "data_offset": 2048, 00:31:56.787 "data_size": 63488 00:31:56.787 } 00:31:56.787 ] 00:31:56.787 }' 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:31:56.787 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=3 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1120 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:56.787 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.045 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:57.045 "name": "raid_bdev1", 00:31:57.045 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:31:57.045 "strip_size_kb": 64, 00:31:57.045 "state": "online", 00:31:57.045 "raid_level": "raid5f", 00:31:57.045 "superblock": true, 00:31:57.045 
"num_base_bdevs": 3, 00:31:57.045 "num_base_bdevs_discovered": 3, 00:31:57.045 "num_base_bdevs_operational": 3, 00:31:57.045 "process": { 00:31:57.045 "type": "rebuild", 00:31:57.045 "target": "spare", 00:31:57.045 "progress": { 00:31:57.045 "blocks": 32768, 00:31:57.045 "percent": 25 00:31:57.045 } 00:31:57.045 }, 00:31:57.045 "base_bdevs_list": [ 00:31:57.045 { 00:31:57.045 "name": "spare", 00:31:57.045 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:31:57.045 "is_configured": true, 00:31:57.045 "data_offset": 2048, 00:31:57.045 "data_size": 63488 00:31:57.045 }, 00:31:57.045 { 00:31:57.045 "name": "BaseBdev2", 00:31:57.045 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:31:57.045 "is_configured": true, 00:31:57.045 "data_offset": 2048, 00:31:57.045 "data_size": 63488 00:31:57.045 }, 00:31:57.045 { 00:31:57.045 "name": "BaseBdev3", 00:31:57.045 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:31:57.045 "is_configured": true, 00:31:57.045 "data_offset": 2048, 00:31:57.045 "data_size": 63488 00:31:57.045 } 00:31:57.045 ] 00:31:57.045 }' 00:31:57.045 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:57.303 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:57.303 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:57.303 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:57.303 12:13:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:58.235 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:58.235 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:58.235 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:58.235 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:58.235 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:58.235 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:58.235 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:58.235 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:58.491 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:58.491 "name": "raid_bdev1", 00:31:58.491 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:31:58.491 "strip_size_kb": 64, 00:31:58.491 "state": "online", 00:31:58.491 "raid_level": "raid5f", 00:31:58.491 "superblock": true, 00:31:58.491 "num_base_bdevs": 3, 00:31:58.491 "num_base_bdevs_discovered": 3, 00:31:58.491 "num_base_bdevs_operational": 3, 00:31:58.491 "process": { 00:31:58.491 "type": "rebuild", 00:31:58.492 "target": "spare", 00:31:58.492 "progress": { 00:31:58.492 "blocks": 59392, 00:31:58.492 "percent": 46 00:31:58.492 } 00:31:58.492 }, 00:31:58.492 "base_bdevs_list": [ 00:31:58.492 { 00:31:58.492 "name": "spare", 00:31:58.492 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:31:58.492 "is_configured": true, 00:31:58.492 "data_offset": 2048, 00:31:58.492 "data_size": 63488 00:31:58.492 }, 00:31:58.492 { 
00:31:58.492 "name": "BaseBdev2", 00:31:58.492 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:31:58.492 "is_configured": true, 00:31:58.492 "data_offset": 2048, 00:31:58.492 "data_size": 63488 00:31:58.492 }, 00:31:58.492 { 00:31:58.492 "name": "BaseBdev3", 00:31:58.492 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:31:58.492 "is_configured": true, 00:31:58.492 "data_offset": 2048, 00:31:58.492 "data_size": 63488 00:31:58.492 } 00:31:58.492 ] 00:31:58.492 }' 00:31:58.492 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:31:58.492 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:58.492 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:58.748 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:58.748 12:13:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:31:59.678 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:31:59.678 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:59.678 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:31:59.678 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:31:59.678 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:31:59.678 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:31:59.678 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:31:59.678 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.934 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:31:59.934 "name": "raid_bdev1", 00:31:59.934 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:31:59.934 "strip_size_kb": 64, 00:31:59.934 "state": "online", 00:31:59.934 "raid_level": "raid5f", 00:31:59.934 "superblock": true, 00:31:59.934 "num_base_bdevs": 3, 00:31:59.934 "num_base_bdevs_discovered": 3, 00:31:59.934 "num_base_bdevs_operational": 3, 00:31:59.934 "process": { 00:31:59.934 "type": "rebuild", 00:31:59.934 "target": "spare", 00:31:59.934 "progress": { 00:31:59.934 "blocks": 86016, 00:31:59.934 "percent": 67 00:31:59.934 } 00:31:59.934 }, 00:31:59.935 "base_bdevs_list": [ 00:31:59.935 { 00:31:59.935 "name": "spare", 00:31:59.935 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:31:59.935 "is_configured": true, 00:31:59.935 "data_offset": 2048, 00:31:59.935 "data_size": 63488 00:31:59.935 }, 00:31:59.935 { 00:31:59.935 "name": "BaseBdev2", 00:31:59.935 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:31:59.935 "is_configured": true, 00:31:59.935 "data_offset": 2048, 00:31:59.935 "data_size": 63488 00:31:59.935 }, 00:31:59.935 { 00:31:59.935 "name": "BaseBdev3", 00:31:59.935 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:31:59.935 "is_configured": true, 00:31:59.935 "data_offset": 2048, 00:31:59.935 "data_size": 63488 00:31:59.935 } 00:31:59.935 ] 00:31:59.935 }' 00:31:59.935 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
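The loop being traced here (bdev_raid.sh@706-710) polls the rebuild once per second until it either finishes or the 1120-second timeout expires. The sketch below reproduces the shape of that polling pattern under the same assumptions (socket path, jq filters seen in the trace); it is an illustration, not the exact test code.

    # Poll until the rebuild process disappears or the timeout is hit.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    timeout=1120   # matches the 'local timeout=1120' set earlier in the trace

    while (( SECONDS < timeout )); do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        target=$(jq -r '.process.target // "none"' <<< "$info")
        # While rebuilding, the log shows type=rebuild, target=spare and
        # .process.progress.blocks climbing: 24576, 32768, 59392, 86016, 114688, ...
        [[ $ptype == rebuild && $target == spare ]] || break
        sleep 1
    done

Once the process object is gone, the checks switch from rebuild/spare to none/none, which is what the later none == none comparisons in this trace correspond to.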
00:31:59.935 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:59.935 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:31:59.935 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:31:59.935 12:13:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:00.869 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:00.869 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:00.869 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:00.869 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:00.869 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:00.869 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:00.869 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:00.869 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.127 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:01.127 "name": "raid_bdev1", 00:32:01.127 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:01.127 "strip_size_kb": 64, 00:32:01.127 "state": "online", 00:32:01.127 "raid_level": "raid5f", 00:32:01.127 "superblock": true, 00:32:01.127 "num_base_bdevs": 3, 00:32:01.127 "num_base_bdevs_discovered": 3, 00:32:01.127 "num_base_bdevs_operational": 3, 00:32:01.127 "process": { 00:32:01.127 "type": "rebuild", 00:32:01.127 "target": "spare", 00:32:01.127 "progress": { 00:32:01.127 "blocks": 114688, 00:32:01.127 "percent": 90 00:32:01.127 } 00:32:01.127 }, 00:32:01.127 "base_bdevs_list": [ 00:32:01.127 { 00:32:01.127 "name": "spare", 00:32:01.127 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:32:01.127 "is_configured": true, 00:32:01.127 "data_offset": 2048, 00:32:01.127 "data_size": 63488 00:32:01.127 }, 00:32:01.127 { 00:32:01.127 "name": "BaseBdev2", 00:32:01.127 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:01.127 "is_configured": true, 00:32:01.127 "data_offset": 2048, 00:32:01.127 "data_size": 63488 00:32:01.127 }, 00:32:01.127 { 00:32:01.127 "name": "BaseBdev3", 00:32:01.127 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:01.127 "is_configured": true, 00:32:01.127 "data_offset": 2048, 00:32:01.127 "data_size": 63488 00:32:01.127 } 00:32:01.127 ] 00:32:01.127 }' 00:32:01.127 12:13:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:01.384 12:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:01.384 12:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:01.384 12:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:01.384 12:14:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:32:01.949 [2024-07-21 12:14:00.520581] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:01.949 [2024-07-21 
12:14:00.520679] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:01.949 [2024-07-21 12:14:00.520837] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:02.513 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:32:02.513 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:02.513 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:02.513 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:02.513 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:02.513 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:02.513 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:02.513 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.513 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:02.513 "name": "raid_bdev1", 00:32:02.513 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:02.513 "strip_size_kb": 64, 00:32:02.513 "state": "online", 00:32:02.513 "raid_level": "raid5f", 00:32:02.513 "superblock": true, 00:32:02.513 "num_base_bdevs": 3, 00:32:02.513 "num_base_bdevs_discovered": 3, 00:32:02.513 "num_base_bdevs_operational": 3, 00:32:02.513 "base_bdevs_list": [ 00:32:02.513 { 00:32:02.513 "name": "spare", 00:32:02.513 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:32:02.513 "is_configured": true, 00:32:02.513 "data_offset": 2048, 00:32:02.513 "data_size": 63488 00:32:02.513 }, 00:32:02.513 { 00:32:02.513 "name": "BaseBdev2", 00:32:02.513 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:02.514 "is_configured": true, 00:32:02.514 "data_offset": 2048, 00:32:02.514 "data_size": 63488 00:32:02.514 }, 00:32:02.514 { 00:32:02.514 "name": "BaseBdev3", 00:32:02.514 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:02.514 "is_configured": true, 00:32:02.514 "data_offset": 2048, 00:32:02.514 "data_size": 63488 00:32:02.514 } 00:32:02.514 ] 00:32:02.514 }' 00:32:02.514 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:02.514 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:02.514 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:02.770 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:32:02.770 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:32:02.770 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:02.770 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:02.770 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:02.770 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:02.770 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:02.770 
12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.770 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:03.027 "name": "raid_bdev1", 00:32:03.027 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:03.027 "strip_size_kb": 64, 00:32:03.027 "state": "online", 00:32:03.027 "raid_level": "raid5f", 00:32:03.027 "superblock": true, 00:32:03.027 "num_base_bdevs": 3, 00:32:03.027 "num_base_bdevs_discovered": 3, 00:32:03.027 "num_base_bdevs_operational": 3, 00:32:03.027 "base_bdevs_list": [ 00:32:03.027 { 00:32:03.027 "name": "spare", 00:32:03.027 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:32:03.027 "is_configured": true, 00:32:03.027 "data_offset": 2048, 00:32:03.027 "data_size": 63488 00:32:03.027 }, 00:32:03.027 { 00:32:03.027 "name": "BaseBdev2", 00:32:03.027 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:03.027 "is_configured": true, 00:32:03.027 "data_offset": 2048, 00:32:03.027 "data_size": 63488 00:32:03.027 }, 00:32:03.027 { 00:32:03.027 "name": "BaseBdev3", 00:32:03.027 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:03.027 "is_configured": true, 00:32:03.027 "data_offset": 2048, 00:32:03.027 "data_size": 63488 00:32:03.027 } 00:32:03.027 ] 00:32:03.027 }' 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:03.027 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.285 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:03.285 "name": "raid_bdev1", 00:32:03.285 "uuid": 
"0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:03.285 "strip_size_kb": 64, 00:32:03.285 "state": "online", 00:32:03.285 "raid_level": "raid5f", 00:32:03.285 "superblock": true, 00:32:03.285 "num_base_bdevs": 3, 00:32:03.285 "num_base_bdevs_discovered": 3, 00:32:03.285 "num_base_bdevs_operational": 3, 00:32:03.285 "base_bdevs_list": [ 00:32:03.285 { 00:32:03.285 "name": "spare", 00:32:03.285 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:32:03.285 "is_configured": true, 00:32:03.285 "data_offset": 2048, 00:32:03.285 "data_size": 63488 00:32:03.285 }, 00:32:03.285 { 00:32:03.285 "name": "BaseBdev2", 00:32:03.285 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:03.285 "is_configured": true, 00:32:03.285 "data_offset": 2048, 00:32:03.285 "data_size": 63488 00:32:03.285 }, 00:32:03.285 { 00:32:03.285 "name": "BaseBdev3", 00:32:03.285 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:03.285 "is_configured": true, 00:32:03.285 "data_offset": 2048, 00:32:03.285 "data_size": 63488 00:32:03.285 } 00:32:03.285 ] 00:32:03.285 }' 00:32:03.285 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:03.285 12:14:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:03.849 12:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:32:04.107 [2024-07-21 12:14:02.808312] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:04.107 [2024-07-21 12:14:02.808339] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:04.107 [2024-07-21 12:14:02.808417] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:04.107 [2024-07-21 12:14:02.808504] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:04.107 [2024-07-21 12:14:02.808518] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:32:04.107 12:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:04.107 12:14:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:04.366 12:14:03 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:04.366 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:04.625 /dev/nbd0 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:04.625 1+0 records in 00:32:04.625 1+0 records out 00:32:04.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382162 s, 10.7 MB/s 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:04.625 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:32:04.882 /dev/nbd1 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 
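For the data check being set up in this part of the trace, the original base bdev and the rebuilt spare are exported to the host as NBD block devices and compared. A rough sketch of that flow, assuming the nbd kernel module is available and using the same RPC socket and 1 MiB cmp offset seen at bdev_raid.sh@736-738; the readiness loop is simplified here (the traced waitfornbd retries up to 20 times):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    $rpc nbd_start_disk BaseBdev1 /dev/nbd0
    $rpc nbd_start_disk spare     /dev/nbd1

    for nbd in nbd0 nbd1; do
        # Readiness check in the spirit of waitfornbd: the device must appear in
        # /proc/partitions and a direct 4 KiB read must succeed.
        until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
        dd if="/dev/$nbd" of=/dev/null bs=4096 count=1 iflag=direct
    done

    # Skip the first 1048576 bytes on both devices before comparing; that matches the
    # 2048-block data_offset x 512-byte block size reported in the dumps above, i.e.
    # presumably the superblock/metadata region ahead of the user data.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1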
00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:04.882 1+0 records in 00:32:04.882 1+0 records out 00:32:04.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000608668 s, 6.7 MB/s 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:04.882 12:14:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:32:05.447 
12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:32:05.447 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:05.705 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:05.963 [2024-07-21 12:14:04.595812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:05.963 [2024-07-21 12:14:04.595900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:05.963 [2024-07-21 12:14:04.595942] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:05.963 [2024-07-21 12:14:04.595964] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:05.963 [2024-07-21 12:14:04.598332] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:05.963 [2024-07-21 12:14:04.598389] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:05.963 [2024-07-21 12:14:04.598476] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:05.963 [2024-07-21 12:14:04.598536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:05.963 [2024-07-21 12:14:04.598718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:05.963 [2024-07-21 12:14:04.598851] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:05.963 spare 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.963 [2024-07-21 12:14:04.698940] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:32:05.963 [2024-07-21 12:14:04.698960] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:32:05.963 [2024-07-21 12:14:04.699081] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047630 00:32:05.963 [2024-07-21 12:14:04.699819] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:32:05.963 [2024-07-21 12:14:04.699840] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:32:05.963 [2024-07-21 12:14:04.699976] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.963 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:05.963 "name": "raid_bdev1", 00:32:05.963 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:05.963 "strip_size_kb": 64, 00:32:05.963 "state": "online", 00:32:05.963 "raid_level": "raid5f", 00:32:05.963 "superblock": true, 00:32:05.963 "num_base_bdevs": 3, 00:32:05.963 "num_base_bdevs_discovered": 3, 00:32:05.963 "num_base_bdevs_operational": 3, 00:32:05.963 "base_bdevs_list": [ 00:32:05.963 { 00:32:05.963 "name": "spare", 00:32:05.963 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:32:05.963 "is_configured": true, 00:32:05.963 "data_offset": 2048, 00:32:05.963 "data_size": 63488 00:32:05.963 }, 00:32:05.963 { 00:32:05.963 "name": "BaseBdev2", 00:32:05.963 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:05.963 "is_configured": true, 00:32:05.963 "data_offset": 2048, 00:32:05.963 "data_size": 63488 00:32:05.964 }, 00:32:05.964 { 00:32:05.964 "name": "BaseBdev3", 00:32:05.964 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:05.964 "is_configured": true, 00:32:05.964 "data_offset": 2048, 00:32:05.964 "data_size": 63488 00:32:05.964 } 00:32:05.964 ] 00:32:05.964 }' 00:32:05.964 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:05.964 12:14:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:06.896 "name": "raid_bdev1", 00:32:06.896 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:06.896 "strip_size_kb": 64, 00:32:06.896 "state": "online", 00:32:06.896 "raid_level": "raid5f", 00:32:06.896 "superblock": true, 00:32:06.896 "num_base_bdevs": 3, 00:32:06.896 "num_base_bdevs_discovered": 3, 00:32:06.896 "num_base_bdevs_operational": 3, 00:32:06.896 "base_bdevs_list": [ 00:32:06.896 { 00:32:06.896 "name": "spare", 00:32:06.896 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:32:06.896 "is_configured": true, 00:32:06.896 "data_offset": 2048, 00:32:06.896 "data_size": 63488 00:32:06.896 }, 00:32:06.896 { 00:32:06.896 "name": "BaseBdev2", 00:32:06.896 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:06.896 "is_configured": true, 00:32:06.896 "data_offset": 2048, 00:32:06.896 "data_size": 63488 00:32:06.896 }, 00:32:06.896 { 00:32:06.896 "name": "BaseBdev3", 00:32:06.896 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:06.896 "is_configured": true, 00:32:06.896 "data_offset": 2048, 00:32:06.896 "data_size": 63488 00:32:06.896 } 00:32:06.896 ] 00:32:06.896 }' 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:06.896 12:14:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:07.153 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:32:07.153 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:32:07.422 [2024-07-21 12:14:06.244293] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:07.422 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:07.680 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:07.680 "name": "raid_bdev1", 00:32:07.680 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:07.680 "strip_size_kb": 64, 00:32:07.680 "state": "online", 00:32:07.680 "raid_level": "raid5f", 00:32:07.680 "superblock": true, 00:32:07.680 "num_base_bdevs": 3, 00:32:07.680 "num_base_bdevs_discovered": 2, 00:32:07.680 "num_base_bdevs_operational": 2, 00:32:07.680 "base_bdevs_list": [ 00:32:07.680 { 00:32:07.680 "name": null, 00:32:07.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:07.680 "is_configured": false, 00:32:07.680 "data_offset": 2048, 00:32:07.680 "data_size": 63488 00:32:07.680 }, 00:32:07.680 { 00:32:07.680 "name": "BaseBdev2", 00:32:07.680 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:07.680 "is_configured": true, 00:32:07.680 "data_offset": 2048, 00:32:07.680 "data_size": 63488 00:32:07.680 }, 00:32:07.680 { 00:32:07.680 "name": "BaseBdev3", 00:32:07.680 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:07.680 "is_configured": true, 00:32:07.680 "data_offset": 2048, 00:32:07.680 "data_size": 63488 00:32:07.680 } 00:32:07.680 ] 00:32:07.680 }' 00:32:07.680 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:07.680 12:14:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:08.612 12:14:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:32:08.612 [2024-07-21 12:14:07.360498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:08.612 [2024-07-21 12:14:07.360625] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:08.612 [2024-07-21 12:14:07.360642] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
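What just happened at bdev_raid.sh@752-754: the spare was hot-removed (dropping the array to two operational base bdevs, as the counts of 2 in the dump above show) and then re-added; its on-disk superblock, one sequence number behind the array's (4 vs 5), is what triggers the re-add and the rebuild noted here. A minimal sketch of that cycle, assuming the same RPC socket as the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Hot-remove: raid_bdev1 stays online, num_base_bdevs_operational drops to 2.
    $rpc bdev_raid_remove_base_bdev spare

    # Re-attach the same bdev; its stale superblock marks it as needing a rebuild,
    # which the log confirms with "Started rebuild on raid bdev raid_bdev1".
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare

    # Give the rebuild a moment to start before polling its progress (bdev_raid.sh@755).
    sleep 1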
00:32:08.612 [2024-07-21 12:14:07.360694] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:08.612 [2024-07-21 12:14:07.366502] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:32:08.612 [2024-07-21 12:14:07.368759] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:08.612 12:14:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:32:09.543 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:09.543 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:09.543 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:09.543 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:09.543 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:09.543 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:09.543 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:09.801 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:09.801 "name": "raid_bdev1", 00:32:09.801 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:09.801 "strip_size_kb": 64, 00:32:09.801 "state": "online", 00:32:09.801 "raid_level": "raid5f", 00:32:09.801 "superblock": true, 00:32:09.801 "num_base_bdevs": 3, 00:32:09.801 "num_base_bdevs_discovered": 3, 00:32:09.801 "num_base_bdevs_operational": 3, 00:32:09.801 "process": { 00:32:09.801 "type": "rebuild", 00:32:09.801 "target": "spare", 00:32:09.801 "progress": { 00:32:09.801 "blocks": 22528, 00:32:09.801 "percent": 17 00:32:09.801 } 00:32:09.801 }, 00:32:09.801 "base_bdevs_list": [ 00:32:09.801 { 00:32:09.801 "name": "spare", 00:32:09.801 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:32:09.801 "is_configured": true, 00:32:09.801 "data_offset": 2048, 00:32:09.801 "data_size": 63488 00:32:09.801 }, 00:32:09.801 { 00:32:09.801 "name": "BaseBdev2", 00:32:09.801 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:09.801 "is_configured": true, 00:32:09.801 "data_offset": 2048, 00:32:09.801 "data_size": 63488 00:32:09.801 }, 00:32:09.801 { 00:32:09.801 "name": "BaseBdev3", 00:32:09.801 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:09.801 "is_configured": true, 00:32:09.801 "data_offset": 2048, 00:32:09.801 "data_size": 63488 00:32:09.801 } 00:32:09.801 ] 00:32:09.801 }' 00:32:09.801 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:09.801 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:09.801 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:10.059 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:10.059 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:10.059 [2024-07-21 12:14:08.914879] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:10.317 [2024-07-21 
12:14:08.982861] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:10.317 [2024-07-21 12:14:08.982933] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:10.317 [2024-07-21 12:14:08.982953] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:10.317 [2024-07-21 12:14:08.982961] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:10.317 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:10.317 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:10.317 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:10.317 12:14:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:10.317 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:10.317 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:10.317 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:10.317 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:10.317 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:10.317 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:10.317 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:10.317 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.576 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:10.576 "name": "raid_bdev1", 00:32:10.576 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:10.576 "strip_size_kb": 64, 00:32:10.576 "state": "online", 00:32:10.576 "raid_level": "raid5f", 00:32:10.576 "superblock": true, 00:32:10.576 "num_base_bdevs": 3, 00:32:10.576 "num_base_bdevs_discovered": 2, 00:32:10.576 "num_base_bdevs_operational": 2, 00:32:10.576 "base_bdevs_list": [ 00:32:10.576 { 00:32:10.576 "name": null, 00:32:10.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:10.576 "is_configured": false, 00:32:10.576 "data_offset": 2048, 00:32:10.576 "data_size": 63488 00:32:10.576 }, 00:32:10.576 { 00:32:10.576 "name": "BaseBdev2", 00:32:10.576 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:10.576 "is_configured": true, 00:32:10.576 "data_offset": 2048, 00:32:10.576 "data_size": 63488 00:32:10.576 }, 00:32:10.576 { 00:32:10.576 "name": "BaseBdev3", 00:32:10.576 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:10.576 "is_configured": true, 00:32:10.576 "data_offset": 2048, 00:32:10.576 "data_size": 63488 00:32:10.576 } 00:32:10.576 ] 00:32:10.576 }' 00:32:10.576 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:10.576 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:11.142 12:14:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:32:11.142 
[2024-07-21 12:14:09.997691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:11.142 [2024-07-21 12:14:09.997774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:11.142 [2024-07-21 12:14:09.997813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:32:11.142 [2024-07-21 12:14:09.997843] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:11.142 [2024-07-21 12:14:09.998348] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:11.142 [2024-07-21 12:14:09.998393] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:11.142 [2024-07-21 12:14:09.998503] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:11.142 [2024-07-21 12:14:09.998519] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:11.142 [2024-07-21 12:14:09.998528] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:11.142 [2024-07-21 12:14:09.998591] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:11.142 [2024-07-21 12:14:10.001975] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047b10 00:32:11.142 spare 00:32:11.142 [2024-07-21 12:14:10.004043] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:11.399 12:14:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:32:12.332 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:12.332 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:12.332 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:32:12.332 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:32:12.332 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:12.332 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.332 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.590 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:12.590 "name": "raid_bdev1", 00:32:12.590 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:12.590 "strip_size_kb": 64, 00:32:12.590 "state": "online", 00:32:12.590 "raid_level": "raid5f", 00:32:12.590 "superblock": true, 00:32:12.590 "num_base_bdevs": 3, 00:32:12.590 "num_base_bdevs_discovered": 3, 00:32:12.590 "num_base_bdevs_operational": 3, 00:32:12.590 "process": { 00:32:12.590 "type": "rebuild", 00:32:12.590 "target": "spare", 00:32:12.590 "progress": { 00:32:12.590 "blocks": 22528, 00:32:12.590 "percent": 17 00:32:12.590 } 00:32:12.590 }, 00:32:12.590 "base_bdevs_list": [ 00:32:12.590 { 00:32:12.590 "name": "spare", 00:32:12.590 "uuid": "bfb8805a-a328-5396-bf18-e219991fe4ae", 00:32:12.590 "is_configured": true, 00:32:12.590 "data_offset": 2048, 00:32:12.590 "data_size": 63488 00:32:12.590 }, 00:32:12.590 { 00:32:12.590 "name": "BaseBdev2", 00:32:12.590 "uuid": 
"3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:12.590 "is_configured": true, 00:32:12.590 "data_offset": 2048, 00:32:12.590 "data_size": 63488 00:32:12.590 }, 00:32:12.590 { 00:32:12.590 "name": "BaseBdev3", 00:32:12.590 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:12.590 "is_configured": true, 00:32:12.590 "data_offset": 2048, 00:32:12.590 "data_size": 63488 00:32:12.590 } 00:32:12.590 ] 00:32:12.590 }' 00:32:12.590 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:12.590 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:12.590 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:12.590 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:32:12.590 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:32:12.848 [2024-07-21 12:14:11.551197] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:12.848 [2024-07-21 12:14:11.618821] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:12.848 [2024-07-21 12:14:11.618987] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.848 [2024-07-21 12:14:11.619016] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:12.848 [2024-07-21 12:14:11.619028] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:12.848 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.105 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:13.105 "name": "raid_bdev1", 00:32:13.105 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:13.105 "strip_size_kb": 64, 00:32:13.105 "state": "online", 00:32:13.105 "raid_level": "raid5f", 00:32:13.105 "superblock": true, 00:32:13.105 "num_base_bdevs": 3, 00:32:13.105 "num_base_bdevs_discovered": 2, 00:32:13.105 
"num_base_bdevs_operational": 2, 00:32:13.105 "base_bdevs_list": [ 00:32:13.105 { 00:32:13.105 "name": null, 00:32:13.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:13.105 "is_configured": false, 00:32:13.105 "data_offset": 2048, 00:32:13.105 "data_size": 63488 00:32:13.105 }, 00:32:13.105 { 00:32:13.105 "name": "BaseBdev2", 00:32:13.105 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:13.105 "is_configured": true, 00:32:13.105 "data_offset": 2048, 00:32:13.105 "data_size": 63488 00:32:13.105 }, 00:32:13.105 { 00:32:13.106 "name": "BaseBdev3", 00:32:13.106 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:13.106 "is_configured": true, 00:32:13.106 "data_offset": 2048, 00:32:13.106 "data_size": 63488 00:32:13.106 } 00:32:13.106 ] 00:32:13.106 }' 00:32:13.106 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:13.106 12:14:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:14.040 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:14.041 "name": "raid_bdev1", 00:32:14.041 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:14.041 "strip_size_kb": 64, 00:32:14.041 "state": "online", 00:32:14.041 "raid_level": "raid5f", 00:32:14.041 "superblock": true, 00:32:14.041 "num_base_bdevs": 3, 00:32:14.041 "num_base_bdevs_discovered": 2, 00:32:14.041 "num_base_bdevs_operational": 2, 00:32:14.041 "base_bdevs_list": [ 00:32:14.041 { 00:32:14.041 "name": null, 00:32:14.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:14.041 "is_configured": false, 00:32:14.041 "data_offset": 2048, 00:32:14.041 "data_size": 63488 00:32:14.041 }, 00:32:14.041 { 00:32:14.041 "name": "BaseBdev2", 00:32:14.041 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:14.041 "is_configured": true, 00:32:14.041 "data_offset": 2048, 00:32:14.041 "data_size": 63488 00:32:14.041 }, 00:32:14.041 { 00:32:14.041 "name": "BaseBdev3", 00:32:14.041 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:14.041 "is_configured": true, 00:32:14.041 "data_offset": 2048, 00:32:14.041 "data_size": 63488 00:32:14.041 } 00:32:14.041 ] 00:32:14.041 }' 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:14.041 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:14.300 12:14:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:14.300 12:14:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:32:14.300 12:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:14.558 [2024-07-21 12:14:13.353921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:14.558 [2024-07-21 12:14:13.353995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:14.558 [2024-07-21 12:14:13.354050] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:32:14.558 [2024-07-21 12:14:13.354072] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:14.558 [2024-07-21 12:14:13.354599] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:14.558 [2024-07-21 12:14:13.354702] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:14.558 [2024-07-21 12:14:13.354812] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:14.558 [2024-07-21 12:14:13.354860] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:14.558 [2024-07-21 12:14:13.354869] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:14.558 BaseBdev1 00:32:14.558 12:14:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:15.932 "name": "raid_bdev1", 00:32:15.932 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:15.932 "strip_size_kb": 64, 00:32:15.932 "state": "online", 00:32:15.932 "raid_level": "raid5f", 00:32:15.932 "superblock": true, 00:32:15.932 "num_base_bdevs": 3, 00:32:15.932 "num_base_bdevs_discovered": 2, 00:32:15.932 
"num_base_bdevs_operational": 2, 00:32:15.932 "base_bdevs_list": [ 00:32:15.932 { 00:32:15.932 "name": null, 00:32:15.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:15.932 "is_configured": false, 00:32:15.932 "data_offset": 2048, 00:32:15.932 "data_size": 63488 00:32:15.932 }, 00:32:15.932 { 00:32:15.932 "name": "BaseBdev2", 00:32:15.932 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:15.932 "is_configured": true, 00:32:15.932 "data_offset": 2048, 00:32:15.932 "data_size": 63488 00:32:15.932 }, 00:32:15.932 { 00:32:15.932 "name": "BaseBdev3", 00:32:15.932 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:15.932 "is_configured": true, 00:32:15.932 "data_offset": 2048, 00:32:15.932 "data_size": 63488 00:32:15.932 } 00:32:15.932 ] 00:32:15.932 }' 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:15.932 12:14:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:16.496 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:16.496 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:16.496 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:16.496 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:16.496 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:16.496 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.496 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:16.754 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:16.754 "name": "raid_bdev1", 00:32:16.754 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:16.754 "strip_size_kb": 64, 00:32:16.754 "state": "online", 00:32:16.754 "raid_level": "raid5f", 00:32:16.754 "superblock": true, 00:32:16.754 "num_base_bdevs": 3, 00:32:16.754 "num_base_bdevs_discovered": 2, 00:32:16.754 "num_base_bdevs_operational": 2, 00:32:16.754 "base_bdevs_list": [ 00:32:16.754 { 00:32:16.754 "name": null, 00:32:16.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:16.754 "is_configured": false, 00:32:16.754 "data_offset": 2048, 00:32:16.754 "data_size": 63488 00:32:16.754 }, 00:32:16.754 { 00:32:16.754 "name": "BaseBdev2", 00:32:16.754 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:16.754 "is_configured": true, 00:32:16.754 "data_offset": 2048, 00:32:16.754 "data_size": 63488 00:32:16.754 }, 00:32:16.754 { 00:32:16.754 "name": "BaseBdev3", 00:32:16.754 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:16.754 "is_configured": true, 00:32:16.754 "data_offset": 2048, 00:32:16.754 "data_size": 63488 00:32:16.754 } 00:32:16.754 ] 00:32:16.754 }' 00:32:16.754 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:16.754 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:16.754 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:16.754 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:16.754 12:14:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:16.754 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:32:17.012 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:17.012 [2024-07-21 12:14:15.878394] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:17.012 [2024-07-21 12:14:15.878605] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:17.012 [2024-07-21 12:14:15.878634] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:17.269 request: 00:32:17.269 { 00:32:17.269 "raid_bdev": "raid_bdev1", 00:32:17.269 "base_bdev": "BaseBdev1", 00:32:17.269 "method": "bdev_raid_add_base_bdev", 00:32:17.269 "req_id": 1 00:32:17.269 } 00:32:17.269 Got JSON-RPC error response 00:32:17.269 response: 00:32:17.269 { 00:32:17.269 "code": -22, 00:32:17.269 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:17.269 } 00:32:17.269 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:32:17.269 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:17.269 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:17.269 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:17.269 12:14:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 
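Note: the NOT-wrapped bdev_raid_add_base_bdev call traced above is the negative half of this test. BaseBdev1 carries a stale superblock (seq_number 1 versus the running raid_bdev1's 5), so the explicit add is expected to fail with JSON-RPC error -22. A minimal manual sketch of the same check, assuming an SPDK target is still listening on /var/tmp/spdk-raid.sock:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # Expected to fail with "Failed to add base bdev to RAID bdev: Invalid argument" (-22),
    # exactly as captured in the JSON-RPC response above.
    if rpc bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
        echo "unexpected success: stale BaseBdev1 was re-added to raid_bdev1" >&2
        exit 1
    fi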
00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:18.226 12:14:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.484 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:18.484 "name": "raid_bdev1", 00:32:18.484 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:18.484 "strip_size_kb": 64, 00:32:18.484 "state": "online", 00:32:18.484 "raid_level": "raid5f", 00:32:18.484 "superblock": true, 00:32:18.484 "num_base_bdevs": 3, 00:32:18.484 "num_base_bdevs_discovered": 2, 00:32:18.484 "num_base_bdevs_operational": 2, 00:32:18.484 "base_bdevs_list": [ 00:32:18.484 { 00:32:18.484 "name": null, 00:32:18.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:18.484 "is_configured": false, 00:32:18.484 "data_offset": 2048, 00:32:18.484 "data_size": 63488 00:32:18.484 }, 00:32:18.484 { 00:32:18.484 "name": "BaseBdev2", 00:32:18.484 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:18.484 "is_configured": true, 00:32:18.484 "data_offset": 2048, 00:32:18.484 "data_size": 63488 00:32:18.484 }, 00:32:18.484 { 00:32:18.484 "name": "BaseBdev3", 00:32:18.484 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:18.484 "is_configured": true, 00:32:18.484 "data_offset": 2048, 00:32:18.484 "data_size": 63488 00:32:18.484 } 00:32:18.484 ] 00:32:18.484 }' 00:32:18.484 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:18.484 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.049 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:19.049 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:32:19.049 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:32:19.049 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:32:19.049 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:32:19.049 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:19.049 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:19.321 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:32:19.321 "name": "raid_bdev1", 00:32:19.321 "uuid": "0688f257-1dc8-4de5-acec-b79e6815db8d", 00:32:19.321 
"strip_size_kb": 64, 00:32:19.321 "state": "online", 00:32:19.321 "raid_level": "raid5f", 00:32:19.321 "superblock": true, 00:32:19.321 "num_base_bdevs": 3, 00:32:19.321 "num_base_bdevs_discovered": 2, 00:32:19.321 "num_base_bdevs_operational": 2, 00:32:19.321 "base_bdevs_list": [ 00:32:19.321 { 00:32:19.321 "name": null, 00:32:19.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:19.321 "is_configured": false, 00:32:19.321 "data_offset": 2048, 00:32:19.321 "data_size": 63488 00:32:19.321 }, 00:32:19.321 { 00:32:19.321 "name": "BaseBdev2", 00:32:19.321 "uuid": "3b787984-51e9-5868-9a72-f100fb8b6ac8", 00:32:19.321 "is_configured": true, 00:32:19.321 "data_offset": 2048, 00:32:19.321 "data_size": 63488 00:32:19.321 }, 00:32:19.321 { 00:32:19.321 "name": "BaseBdev3", 00:32:19.321 "uuid": "168e5e6c-63bb-531a-8dd2-03211e9f9724", 00:32:19.321 "is_configured": true, 00:32:19.321 "data_offset": 2048, 00:32:19.321 "data_size": 63488 00:32:19.321 } 00:32:19.321 ] 00:32:19.321 }' 00:32:19.321 12:14:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@782 -- # killprocess 163426 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 163426 ']' 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 163426 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 163426 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 163426' 00:32:19.321 killing process with pid 163426 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 163426 00:32:19.321 Received shutdown signal, test time was about 60.000000 seconds 00:32:19.321 00:32:19.321 Latency(us) 00:32:19.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.321 =================================================================================================================== 00:32:19.321 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:19.321 [2024-07-21 12:14:18.123441] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:19.321 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 163426 00:32:19.321 [2024-07-21 12:14:18.123588] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:19.321 [2024-07-21 12:14:18.123677] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:19.321 [2024-07-21 12:14:18.123691] 
bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:32:19.321 [2024-07-21 12:14:18.168778] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:19.610 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:32:19.610 00:32:19.610 real 0m34.331s 00:32:19.610 user 0m54.876s 00:32:19.610 ************************************ 00:32:19.610 END TEST raid5f_rebuild_test_sb 00:32:19.610 ************************************ 00:32:19.610 sys 0m3.572s 00:32:19.610 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:19.610 12:14:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:19.888 12:14:18 bdev_raid -- bdev/bdev_raid.sh@885 -- # for n in {3..4} 00:32:19.889 12:14:18 bdev_raid -- bdev/bdev_raid.sh@886 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:32:19.889 12:14:18 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:32:19.889 12:14:18 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:19.889 12:14:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:19.889 ************************************ 00:32:19.889 START TEST raid5f_state_function_test 00:32:19.889 ************************************ 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 4 false 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 
'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=164346 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 164346' 00:32:19.889 Process raid pid: 164346 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 164346 /var/tmp/spdk-raid.sock 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@827 -- # '[' -z 164346 ']' 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:19.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:19.889 12:14:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:19.889 [2024-07-21 12:14:18.597957] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
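Note: the raid5f_state_function_test prologue above reduces to launching bdev_svc with RAID debug logging and then driving it over the dedicated RPC socket. A condensed, hand-run sketch under the same paths as the trace (the harness additionally runs waitforlisten before issuing RPCs; the create call is expected to leave Existed_Raid in the "configuring" state because none of the four base bdevs exist yet):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # wait for the RPC socket to come up (the harness uses waitforlisten here)
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    # -z 64: 64 KiB strip size, -r raid5f: RAID level, no superblock argument (superblock=false)
    rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid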
00:32:19.889 [2024-07-21 12:14:18.598208] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.155 [2024-07-21 12:14:18.768772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.155 [2024-07-21 12:14:18.849974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.155 [2024-07-21 12:14:18.921636] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # return 0 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:21.091 [2024-07-21 12:14:19.855345] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:21.091 [2024-07-21 12:14:19.855426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:21.091 [2024-07-21 12:14:19.855440] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:21.091 [2024-07-21 12:14:19.855459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:21.091 [2024-07-21 12:14:19.855466] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:21.091 [2024-07-21 12:14:19.855505] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:21.091 [2024-07-21 12:14:19.855515] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:21.091 [2024-07-21 12:14:19.855537] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:21.091 12:14:19 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:21.350 12:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:21.350 "name": "Existed_Raid", 00:32:21.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.350 "strip_size_kb": 64, 00:32:21.350 "state": "configuring", 00:32:21.350 "raid_level": "raid5f", 00:32:21.350 "superblock": false, 00:32:21.350 "num_base_bdevs": 4, 00:32:21.350 "num_base_bdevs_discovered": 0, 00:32:21.350 "num_base_bdevs_operational": 4, 00:32:21.350 "base_bdevs_list": [ 00:32:21.350 { 00:32:21.350 "name": "BaseBdev1", 00:32:21.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.350 "is_configured": false, 00:32:21.350 "data_offset": 0, 00:32:21.350 "data_size": 0 00:32:21.350 }, 00:32:21.350 { 00:32:21.350 "name": "BaseBdev2", 00:32:21.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.350 "is_configured": false, 00:32:21.350 "data_offset": 0, 00:32:21.350 "data_size": 0 00:32:21.350 }, 00:32:21.350 { 00:32:21.350 "name": "BaseBdev3", 00:32:21.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.350 "is_configured": false, 00:32:21.350 "data_offset": 0, 00:32:21.350 "data_size": 0 00:32:21.350 }, 00:32:21.350 { 00:32:21.350 "name": "BaseBdev4", 00:32:21.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:21.350 "is_configured": false, 00:32:21.350 "data_offset": 0, 00:32:21.350 "data_size": 0 00:32:21.350 } 00:32:21.350 ] 00:32:21.350 }' 00:32:21.350 12:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:21.350 12:14:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:21.917 12:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:22.175 [2024-07-21 12:14:20.895392] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:22.175 [2024-07-21 12:14:20.895439] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:32:22.175 12:14:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:22.434 [2024-07-21 12:14:21.083428] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:22.434 [2024-07-21 12:14:21.083489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:22.434 [2024-07-21 12:14:21.083501] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:22.434 [2024-07-21 12:14:21.083548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:22.434 [2024-07-21 12:14:21.083557] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:22.434 [2024-07-21 12:14:21.083572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:22.434 [2024-07-21 12:14:21.083579] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:22.434 [2024-07-21 12:14:21.083598] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:22.434 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@257 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:22.434 [2024-07-21 12:14:21.285325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:22.434 BaseBdev1 00:32:22.434 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:22.434 12:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:32:22.434 12:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:22.434 12:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:22.693 12:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:22.693 12:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:22.693 12:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:22.693 12:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:22.951 [ 00:32:22.951 { 00:32:22.951 "name": "BaseBdev1", 00:32:22.951 "aliases": [ 00:32:22.951 "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf" 00:32:22.951 ], 00:32:22.951 "product_name": "Malloc disk", 00:32:22.951 "block_size": 512, 00:32:22.951 "num_blocks": 65536, 00:32:22.951 "uuid": "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf", 00:32:22.951 "assigned_rate_limits": { 00:32:22.951 "rw_ios_per_sec": 0, 00:32:22.951 "rw_mbytes_per_sec": 0, 00:32:22.951 "r_mbytes_per_sec": 0, 00:32:22.951 "w_mbytes_per_sec": 0 00:32:22.951 }, 00:32:22.951 "claimed": true, 00:32:22.951 "claim_type": "exclusive_write", 00:32:22.951 "zoned": false, 00:32:22.951 "supported_io_types": { 00:32:22.951 "read": true, 00:32:22.951 "write": true, 00:32:22.951 "unmap": true, 00:32:22.951 "write_zeroes": true, 00:32:22.951 "flush": true, 00:32:22.951 "reset": true, 00:32:22.951 "compare": false, 00:32:22.951 "compare_and_write": false, 00:32:22.951 "abort": true, 00:32:22.951 "nvme_admin": false, 00:32:22.951 "nvme_io": false 00:32:22.951 }, 00:32:22.951 "memory_domains": [ 00:32:22.951 { 00:32:22.951 "dma_device_id": "system", 00:32:22.951 "dma_device_type": 1 00:32:22.951 }, 00:32:22.951 { 00:32:22.951 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:22.951 "dma_device_type": 2 00:32:22.951 } 00:32:22.951 ], 00:32:22.951 "driver_specific": {} 00:32:22.951 } 00:32:22.951 ] 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 
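Note: the verify_raid_bdev_state helper being entered here asks the target for all RAID bdevs, filters out the one under test with jq, and asserts on a handful of fields. A rough stand-alone equivalent of the assertions made at this point (state "configuring", one base bdev discovered), assuming the same RPC socket as the trace:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # After only the malloc-backed BaseBdev1 exists, the raid should still be configuring
    # with 1 of 4 base bdevs discovered.
    [[ $(jq -r '.state' <<< "$info") == "configuring" ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 1 ]]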
00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:22.951 12:14:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:23.210 12:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:23.210 "name": "Existed_Raid", 00:32:23.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.210 "strip_size_kb": 64, 00:32:23.210 "state": "configuring", 00:32:23.210 "raid_level": "raid5f", 00:32:23.210 "superblock": false, 00:32:23.210 "num_base_bdevs": 4, 00:32:23.210 "num_base_bdevs_discovered": 1, 00:32:23.210 "num_base_bdevs_operational": 4, 00:32:23.210 "base_bdevs_list": [ 00:32:23.210 { 00:32:23.210 "name": "BaseBdev1", 00:32:23.210 "uuid": "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf", 00:32:23.210 "is_configured": true, 00:32:23.210 "data_offset": 0, 00:32:23.210 "data_size": 65536 00:32:23.210 }, 00:32:23.210 { 00:32:23.210 "name": "BaseBdev2", 00:32:23.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.210 "is_configured": false, 00:32:23.210 "data_offset": 0, 00:32:23.210 "data_size": 0 00:32:23.210 }, 00:32:23.210 { 00:32:23.210 "name": "BaseBdev3", 00:32:23.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.210 "is_configured": false, 00:32:23.210 "data_offset": 0, 00:32:23.210 "data_size": 0 00:32:23.210 }, 00:32:23.210 { 00:32:23.210 "name": "BaseBdev4", 00:32:23.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:23.210 "is_configured": false, 00:32:23.210 "data_offset": 0, 00:32:23.210 "data_size": 0 00:32:23.210 } 00:32:23.210 ] 00:32:23.210 }' 00:32:23.210 12:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:23.210 12:14:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:24.142 12:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:24.142 [2024-07-21 12:14:22.901707] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:24.142 [2024-07-21 12:14:22.901751] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:32:24.142 12:14:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:24.399 [2024-07-21 12:14:23.105786] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:24.399 [2024-07-21 12:14:23.107668] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:24.399 [2024-07-21 12:14:23.107742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:24.399 [2024-07-21 12:14:23.107755] 
bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:24.399 [2024-07-21 12:14:23.107780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:24.399 [2024-07-21 12:14:23.107790] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:24.399 [2024-07-21 12:14:23.107807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:24.399 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:24.399 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:24.400 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:24.658 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:24.658 "name": "Existed_Raid", 00:32:24.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.658 "strip_size_kb": 64, 00:32:24.658 "state": "configuring", 00:32:24.658 "raid_level": "raid5f", 00:32:24.658 "superblock": false, 00:32:24.658 "num_base_bdevs": 4, 00:32:24.658 "num_base_bdevs_discovered": 1, 00:32:24.658 "num_base_bdevs_operational": 4, 00:32:24.658 "base_bdevs_list": [ 00:32:24.658 { 00:32:24.658 "name": "BaseBdev1", 00:32:24.658 "uuid": "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf", 00:32:24.658 "is_configured": true, 00:32:24.658 "data_offset": 0, 00:32:24.658 "data_size": 65536 00:32:24.658 }, 00:32:24.658 { 00:32:24.658 "name": "BaseBdev2", 00:32:24.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.658 "is_configured": false, 00:32:24.658 "data_offset": 0, 00:32:24.658 "data_size": 0 00:32:24.658 }, 00:32:24.658 { 00:32:24.658 "name": "BaseBdev3", 00:32:24.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.658 "is_configured": false, 00:32:24.658 "data_offset": 0, 00:32:24.658 "data_size": 0 00:32:24.658 }, 00:32:24.658 { 00:32:24.658 "name": "BaseBdev4", 00:32:24.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.658 "is_configured": false, 00:32:24.658 "data_offset": 0, 
00:32:24.658 "data_size": 0 00:32:24.658 } 00:32:24.658 ] 00:32:24.658 }' 00:32:24.658 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:24.658 12:14:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:25.224 12:14:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:25.483 [2024-07-21 12:14:24.156210] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:25.483 BaseBdev2 00:32:25.483 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:25.483 12:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:25.483 12:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:25.483 12:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:25.483 12:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:25.483 12:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:25.483 12:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:25.741 12:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:25.741 [ 00:32:25.741 { 00:32:25.741 "name": "BaseBdev2", 00:32:25.741 "aliases": [ 00:32:25.741 "ea08e64e-305f-4e9a-91d8-af48eff5be47" 00:32:25.741 ], 00:32:25.741 "product_name": "Malloc disk", 00:32:25.741 "block_size": 512, 00:32:25.741 "num_blocks": 65536, 00:32:25.741 "uuid": "ea08e64e-305f-4e9a-91d8-af48eff5be47", 00:32:25.741 "assigned_rate_limits": { 00:32:25.741 "rw_ios_per_sec": 0, 00:32:25.741 "rw_mbytes_per_sec": 0, 00:32:25.741 "r_mbytes_per_sec": 0, 00:32:25.741 "w_mbytes_per_sec": 0 00:32:25.741 }, 00:32:25.741 "claimed": true, 00:32:25.741 "claim_type": "exclusive_write", 00:32:25.741 "zoned": false, 00:32:25.741 "supported_io_types": { 00:32:25.741 "read": true, 00:32:25.741 "write": true, 00:32:25.741 "unmap": true, 00:32:25.741 "write_zeroes": true, 00:32:25.741 "flush": true, 00:32:25.741 "reset": true, 00:32:25.741 "compare": false, 00:32:25.741 "compare_and_write": false, 00:32:25.741 "abort": true, 00:32:25.741 "nvme_admin": false, 00:32:25.741 "nvme_io": false 00:32:25.741 }, 00:32:25.741 "memory_domains": [ 00:32:25.741 { 00:32:25.741 "dma_device_id": "system", 00:32:25.741 "dma_device_type": 1 00:32:25.741 }, 00:32:25.741 { 00:32:25.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:25.741 "dma_device_type": 2 00:32:25.741 } 00:32:25.741 ], 00:32:25.741 "driver_specific": {} 00:32:25.741 } 00:32:25.741 ] 00:32:25.741 12:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:25.742 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:25.742 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:25.742 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:26.000 12:14:24 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:26.000 "name": "Existed_Raid", 00:32:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.000 "strip_size_kb": 64, 00:32:26.000 "state": "configuring", 00:32:26.000 "raid_level": "raid5f", 00:32:26.000 "superblock": false, 00:32:26.000 "num_base_bdevs": 4, 00:32:26.000 "num_base_bdevs_discovered": 2, 00:32:26.000 "num_base_bdevs_operational": 4, 00:32:26.000 "base_bdevs_list": [ 00:32:26.000 { 00:32:26.000 "name": "BaseBdev1", 00:32:26.000 "uuid": "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf", 00:32:26.000 "is_configured": true, 00:32:26.000 "data_offset": 0, 00:32:26.000 "data_size": 65536 00:32:26.000 }, 00:32:26.000 { 00:32:26.000 "name": "BaseBdev2", 00:32:26.000 "uuid": "ea08e64e-305f-4e9a-91d8-af48eff5be47", 00:32:26.000 "is_configured": true, 00:32:26.000 "data_offset": 0, 00:32:26.000 "data_size": 65536 00:32:26.000 }, 00:32:26.000 { 00:32:26.000 "name": "BaseBdev3", 00:32:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.000 "is_configured": false, 00:32:26.000 "data_offset": 0, 00:32:26.000 "data_size": 0 00:32:26.000 }, 00:32:26.000 { 00:32:26.000 "name": "BaseBdev4", 00:32:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.000 "is_configured": false, 00:32:26.000 "data_offset": 0, 00:32:26.000 "data_size": 0 00:32:26.000 } 00:32:26.000 ] 00:32:26.000 }' 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:26.000 12:14:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:26.566 12:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:27.129 [2024-07-21 12:14:25.708063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:27.129 BaseBdev3 00:32:27.129 12:14:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:32:27.129 12:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 
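Note: the waitforbdev calls interleaved above (here for BaseBdev3) boil down to letting the examine sequence finish and then polling for the named bdev with a timeout, roughly:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc bdev_wait_for_examine
    # -t 2000: the timeout value the harness passes for the bdev to appear, per the trace above
    rpc bdev_get_bdevs -b BaseBdev3 -t 2000 > /dev/null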
00:32:27.129 12:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:27.129 12:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:27.129 12:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:27.129 12:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:27.129 12:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:27.129 12:14:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:27.387 [ 00:32:27.387 { 00:32:27.387 "name": "BaseBdev3", 00:32:27.387 "aliases": [ 00:32:27.387 "0a5ac1a6-676d-4d08-bf60-145b45688d56" 00:32:27.387 ], 00:32:27.387 "product_name": "Malloc disk", 00:32:27.387 "block_size": 512, 00:32:27.387 "num_blocks": 65536, 00:32:27.387 "uuid": "0a5ac1a6-676d-4d08-bf60-145b45688d56", 00:32:27.387 "assigned_rate_limits": { 00:32:27.387 "rw_ios_per_sec": 0, 00:32:27.387 "rw_mbytes_per_sec": 0, 00:32:27.387 "r_mbytes_per_sec": 0, 00:32:27.387 "w_mbytes_per_sec": 0 00:32:27.387 }, 00:32:27.387 "claimed": true, 00:32:27.387 "claim_type": "exclusive_write", 00:32:27.387 "zoned": false, 00:32:27.387 "supported_io_types": { 00:32:27.387 "read": true, 00:32:27.387 "write": true, 00:32:27.387 "unmap": true, 00:32:27.387 "write_zeroes": true, 00:32:27.387 "flush": true, 00:32:27.387 "reset": true, 00:32:27.387 "compare": false, 00:32:27.387 "compare_and_write": false, 00:32:27.387 "abort": true, 00:32:27.387 "nvme_admin": false, 00:32:27.387 "nvme_io": false 00:32:27.387 }, 00:32:27.387 "memory_domains": [ 00:32:27.387 { 00:32:27.387 "dma_device_id": "system", 00:32:27.387 "dma_device_type": 1 00:32:27.387 }, 00:32:27.387 { 00:32:27.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:27.387 "dma_device_type": 2 00:32:27.387 } 00:32:27.387 ], 00:32:27.387 "driver_specific": {} 00:32:27.387 } 00:32:27.387 ] 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:27.387 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:27.645 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:27.645 "name": "Existed_Raid", 00:32:27.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.645 "strip_size_kb": 64, 00:32:27.645 "state": "configuring", 00:32:27.645 "raid_level": "raid5f", 00:32:27.645 "superblock": false, 00:32:27.645 "num_base_bdevs": 4, 00:32:27.645 "num_base_bdevs_discovered": 3, 00:32:27.645 "num_base_bdevs_operational": 4, 00:32:27.645 "base_bdevs_list": [ 00:32:27.645 { 00:32:27.645 "name": "BaseBdev1", 00:32:27.645 "uuid": "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf", 00:32:27.645 "is_configured": true, 00:32:27.645 "data_offset": 0, 00:32:27.645 "data_size": 65536 00:32:27.645 }, 00:32:27.645 { 00:32:27.645 "name": "BaseBdev2", 00:32:27.645 "uuid": "ea08e64e-305f-4e9a-91d8-af48eff5be47", 00:32:27.645 "is_configured": true, 00:32:27.645 "data_offset": 0, 00:32:27.645 "data_size": 65536 00:32:27.645 }, 00:32:27.645 { 00:32:27.645 "name": "BaseBdev3", 00:32:27.645 "uuid": "0a5ac1a6-676d-4d08-bf60-145b45688d56", 00:32:27.645 "is_configured": true, 00:32:27.645 "data_offset": 0, 00:32:27.645 "data_size": 65536 00:32:27.645 }, 00:32:27.645 { 00:32:27.645 "name": "BaseBdev4", 00:32:27.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.645 "is_configured": false, 00:32:27.645 "data_offset": 0, 00:32:27.645 "data_size": 0 00:32:27.645 } 00:32:27.645 ] 00:32:27.645 }' 00:32:27.645 12:14:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:27.645 12:14:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:28.211 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:28.470 [2024-07-21 12:14:27.256036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:28.470 [2024-07-21 12:14:27.256332] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:32:28.470 [2024-07-21 12:14:27.256376] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:28.470 [2024-07-21 12:14:27.256600] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:32:28.470 [2024-07-21 12:14:27.257560] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:32:28.470 [2024-07-21 12:14:27.257697] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:32:28.470 [2024-07-21 12:14:27.258040] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:28.470 BaseBdev4 00:32:28.470 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:32:28.470 12:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:32:28.470 12:14:27 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:28.470 12:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:28.470 12:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:28.470 12:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:28.470 12:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:28.727 12:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:28.985 [ 00:32:28.985 { 00:32:28.985 "name": "BaseBdev4", 00:32:28.985 "aliases": [ 00:32:28.985 "c338dbb4-267b-4757-8280-37e750db11ca" 00:32:28.985 ], 00:32:28.985 "product_name": "Malloc disk", 00:32:28.985 "block_size": 512, 00:32:28.985 "num_blocks": 65536, 00:32:28.985 "uuid": "c338dbb4-267b-4757-8280-37e750db11ca", 00:32:28.985 "assigned_rate_limits": { 00:32:28.985 "rw_ios_per_sec": 0, 00:32:28.985 "rw_mbytes_per_sec": 0, 00:32:28.985 "r_mbytes_per_sec": 0, 00:32:28.985 "w_mbytes_per_sec": 0 00:32:28.985 }, 00:32:28.985 "claimed": true, 00:32:28.985 "claim_type": "exclusive_write", 00:32:28.985 "zoned": false, 00:32:28.985 "supported_io_types": { 00:32:28.985 "read": true, 00:32:28.985 "write": true, 00:32:28.985 "unmap": true, 00:32:28.985 "write_zeroes": true, 00:32:28.985 "flush": true, 00:32:28.985 "reset": true, 00:32:28.985 "compare": false, 00:32:28.985 "compare_and_write": false, 00:32:28.985 "abort": true, 00:32:28.985 "nvme_admin": false, 00:32:28.985 "nvme_io": false 00:32:28.986 }, 00:32:28.986 "memory_domains": [ 00:32:28.986 { 00:32:28.986 "dma_device_id": "system", 00:32:28.986 "dma_device_type": 1 00:32:28.986 }, 00:32:28.986 { 00:32:28.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:28.986 "dma_device_type": 2 00:32:28.986 } 00:32:28.986 ], 00:32:28.986 "driver_specific": {} 00:32:28.986 } 00:32:28.986 ] 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:28.986 12:14:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:28.986 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:29.243 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:29.243 "name": "Existed_Raid", 00:32:29.243 "uuid": "854471bf-b4f1-4fa7-8f1c-dc3124c5c9ee", 00:32:29.244 "strip_size_kb": 64, 00:32:29.244 "state": "online", 00:32:29.244 "raid_level": "raid5f", 00:32:29.244 "superblock": false, 00:32:29.244 "num_base_bdevs": 4, 00:32:29.244 "num_base_bdevs_discovered": 4, 00:32:29.244 "num_base_bdevs_operational": 4, 00:32:29.244 "base_bdevs_list": [ 00:32:29.244 { 00:32:29.244 "name": "BaseBdev1", 00:32:29.244 "uuid": "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf", 00:32:29.244 "is_configured": true, 00:32:29.244 "data_offset": 0, 00:32:29.244 "data_size": 65536 00:32:29.244 }, 00:32:29.244 { 00:32:29.244 "name": "BaseBdev2", 00:32:29.244 "uuid": "ea08e64e-305f-4e9a-91d8-af48eff5be47", 00:32:29.244 "is_configured": true, 00:32:29.244 "data_offset": 0, 00:32:29.244 "data_size": 65536 00:32:29.244 }, 00:32:29.244 { 00:32:29.244 "name": "BaseBdev3", 00:32:29.244 "uuid": "0a5ac1a6-676d-4d08-bf60-145b45688d56", 00:32:29.244 "is_configured": true, 00:32:29.244 "data_offset": 0, 00:32:29.244 "data_size": 65536 00:32:29.244 }, 00:32:29.244 { 00:32:29.244 "name": "BaseBdev4", 00:32:29.244 "uuid": "c338dbb4-267b-4757-8280-37e750db11ca", 00:32:29.244 "is_configured": true, 00:32:29.244 "data_offset": 0, 00:32:29.244 "data_size": 65536 00:32:29.244 } 00:32:29.244 ] 00:32:29.244 }' 00:32:29.244 12:14:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:29.244 12:14:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.810 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:32:29.810 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:29.810 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:29.810 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:29.810 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:29.810 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:29.810 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:29.810 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:30.068 [2024-07-21 12:14:28.725263] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:30.068 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:30.068 "name": "Existed_Raid", 00:32:30.068 "aliases": [ 00:32:30.068 "854471bf-b4f1-4fa7-8f1c-dc3124c5c9ee" 00:32:30.068 ], 00:32:30.068 "product_name": "Raid Volume", 00:32:30.068 "block_size": 512, 00:32:30.068 "num_blocks": 196608, 00:32:30.068 "uuid": "854471bf-b4f1-4fa7-8f1c-dc3124c5c9ee", 00:32:30.068 
"assigned_rate_limits": { 00:32:30.068 "rw_ios_per_sec": 0, 00:32:30.068 "rw_mbytes_per_sec": 0, 00:32:30.068 "r_mbytes_per_sec": 0, 00:32:30.068 "w_mbytes_per_sec": 0 00:32:30.068 }, 00:32:30.068 "claimed": false, 00:32:30.068 "zoned": false, 00:32:30.068 "supported_io_types": { 00:32:30.068 "read": true, 00:32:30.068 "write": true, 00:32:30.068 "unmap": false, 00:32:30.068 "write_zeroes": true, 00:32:30.068 "flush": false, 00:32:30.068 "reset": true, 00:32:30.068 "compare": false, 00:32:30.068 "compare_and_write": false, 00:32:30.068 "abort": false, 00:32:30.068 "nvme_admin": false, 00:32:30.068 "nvme_io": false 00:32:30.068 }, 00:32:30.068 "driver_specific": { 00:32:30.068 "raid": { 00:32:30.068 "uuid": "854471bf-b4f1-4fa7-8f1c-dc3124c5c9ee", 00:32:30.068 "strip_size_kb": 64, 00:32:30.068 "state": "online", 00:32:30.068 "raid_level": "raid5f", 00:32:30.068 "superblock": false, 00:32:30.068 "num_base_bdevs": 4, 00:32:30.068 "num_base_bdevs_discovered": 4, 00:32:30.068 "num_base_bdevs_operational": 4, 00:32:30.068 "base_bdevs_list": [ 00:32:30.069 { 00:32:30.069 "name": "BaseBdev1", 00:32:30.069 "uuid": "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf", 00:32:30.069 "is_configured": true, 00:32:30.069 "data_offset": 0, 00:32:30.069 "data_size": 65536 00:32:30.069 }, 00:32:30.069 { 00:32:30.069 "name": "BaseBdev2", 00:32:30.069 "uuid": "ea08e64e-305f-4e9a-91d8-af48eff5be47", 00:32:30.069 "is_configured": true, 00:32:30.069 "data_offset": 0, 00:32:30.069 "data_size": 65536 00:32:30.069 }, 00:32:30.069 { 00:32:30.069 "name": "BaseBdev3", 00:32:30.069 "uuid": "0a5ac1a6-676d-4d08-bf60-145b45688d56", 00:32:30.069 "is_configured": true, 00:32:30.069 "data_offset": 0, 00:32:30.069 "data_size": 65536 00:32:30.069 }, 00:32:30.069 { 00:32:30.069 "name": "BaseBdev4", 00:32:30.069 "uuid": "c338dbb4-267b-4757-8280-37e750db11ca", 00:32:30.069 "is_configured": true, 00:32:30.069 "data_offset": 0, 00:32:30.069 "data_size": 65536 00:32:30.069 } 00:32:30.069 ] 00:32:30.069 } 00:32:30.069 } 00:32:30.069 }' 00:32:30.069 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:30.069 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:32:30.069 BaseBdev2 00:32:30.069 BaseBdev3 00:32:30.069 BaseBdev4' 00:32:30.069 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:30.069 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:30.069 12:14:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:32:30.327 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:30.327 "name": "BaseBdev1", 00:32:30.327 "aliases": [ 00:32:30.327 "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf" 00:32:30.327 ], 00:32:30.327 "product_name": "Malloc disk", 00:32:30.327 "block_size": 512, 00:32:30.327 "num_blocks": 65536, 00:32:30.327 "uuid": "e1d72a72-d84d-4a7d-97b7-8ceb2c9d1fdf", 00:32:30.327 "assigned_rate_limits": { 00:32:30.327 "rw_ios_per_sec": 0, 00:32:30.327 "rw_mbytes_per_sec": 0, 00:32:30.327 "r_mbytes_per_sec": 0, 00:32:30.327 "w_mbytes_per_sec": 0 00:32:30.327 }, 00:32:30.327 "claimed": true, 00:32:30.327 "claim_type": "exclusive_write", 00:32:30.327 "zoned": false, 00:32:30.327 "supported_io_types": { 00:32:30.327 "read": true, 
00:32:30.327 "write": true, 00:32:30.327 "unmap": true, 00:32:30.327 "write_zeroes": true, 00:32:30.327 "flush": true, 00:32:30.327 "reset": true, 00:32:30.327 "compare": false, 00:32:30.327 "compare_and_write": false, 00:32:30.327 "abort": true, 00:32:30.327 "nvme_admin": false, 00:32:30.327 "nvme_io": false 00:32:30.327 }, 00:32:30.327 "memory_domains": [ 00:32:30.327 { 00:32:30.327 "dma_device_id": "system", 00:32:30.327 "dma_device_type": 1 00:32:30.327 }, 00:32:30.327 { 00:32:30.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.327 "dma_device_type": 2 00:32:30.327 } 00:32:30.327 ], 00:32:30.327 "driver_specific": {} 00:32:30.327 }' 00:32:30.327 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:30.327 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:30.327 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:30.327 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:30.327 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:30.585 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:30.843 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:30.843 "name": "BaseBdev2", 00:32:30.843 "aliases": [ 00:32:30.843 "ea08e64e-305f-4e9a-91d8-af48eff5be47" 00:32:30.843 ], 00:32:30.843 "product_name": "Malloc disk", 00:32:30.843 "block_size": 512, 00:32:30.843 "num_blocks": 65536, 00:32:30.843 "uuid": "ea08e64e-305f-4e9a-91d8-af48eff5be47", 00:32:30.843 "assigned_rate_limits": { 00:32:30.843 "rw_ios_per_sec": 0, 00:32:30.843 "rw_mbytes_per_sec": 0, 00:32:30.843 "r_mbytes_per_sec": 0, 00:32:30.843 "w_mbytes_per_sec": 0 00:32:30.843 }, 00:32:30.843 "claimed": true, 00:32:30.843 "claim_type": "exclusive_write", 00:32:30.843 "zoned": false, 00:32:30.843 "supported_io_types": { 00:32:30.843 "read": true, 00:32:30.843 "write": true, 00:32:30.843 "unmap": true, 00:32:30.843 "write_zeroes": true, 00:32:30.843 "flush": true, 00:32:30.843 "reset": true, 00:32:30.843 "compare": false, 00:32:30.843 "compare_and_write": false, 00:32:30.843 "abort": true, 00:32:30.843 "nvme_admin": false, 00:32:30.843 "nvme_io": false 00:32:30.843 }, 00:32:30.843 "memory_domains": [ 00:32:30.843 { 00:32:30.843 "dma_device_id": "system", 00:32:30.843 "dma_device_type": 1 00:32:30.843 }, 
00:32:30.843 { 00:32:30.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:30.843 "dma_device_type": 2 00:32:30.843 } 00:32:30.843 ], 00:32:30.843 "driver_specific": {} 00:32:30.843 }' 00:32:30.843 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.101 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.101 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:31.101 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.101 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.101 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:31.101 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.101 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.101 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:31.101 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.359 12:14:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.359 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:31.359 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:31.359 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:31.359 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:31.618 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:31.618 "name": "BaseBdev3", 00:32:31.618 "aliases": [ 00:32:31.618 "0a5ac1a6-676d-4d08-bf60-145b45688d56" 00:32:31.618 ], 00:32:31.618 "product_name": "Malloc disk", 00:32:31.618 "block_size": 512, 00:32:31.618 "num_blocks": 65536, 00:32:31.618 "uuid": "0a5ac1a6-676d-4d08-bf60-145b45688d56", 00:32:31.618 "assigned_rate_limits": { 00:32:31.618 "rw_ios_per_sec": 0, 00:32:31.618 "rw_mbytes_per_sec": 0, 00:32:31.618 "r_mbytes_per_sec": 0, 00:32:31.618 "w_mbytes_per_sec": 0 00:32:31.618 }, 00:32:31.618 "claimed": true, 00:32:31.618 "claim_type": "exclusive_write", 00:32:31.618 "zoned": false, 00:32:31.618 "supported_io_types": { 00:32:31.618 "read": true, 00:32:31.618 "write": true, 00:32:31.618 "unmap": true, 00:32:31.618 "write_zeroes": true, 00:32:31.618 "flush": true, 00:32:31.618 "reset": true, 00:32:31.618 "compare": false, 00:32:31.618 "compare_and_write": false, 00:32:31.618 "abort": true, 00:32:31.618 "nvme_admin": false, 00:32:31.618 "nvme_io": false 00:32:31.618 }, 00:32:31.618 "memory_domains": [ 00:32:31.618 { 00:32:31.618 "dma_device_id": "system", 00:32:31.618 "dma_device_type": 1 00:32:31.618 }, 00:32:31.618 { 00:32:31.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:31.618 "dma_device_type": 2 00:32:31.618 } 00:32:31.618 ], 00:32:31.618 "driver_specific": {} 00:32:31.618 }' 00:32:31.618 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.618 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:31.618 12:14:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:31.618 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.618 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:31.618 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:31.876 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.876 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:31.876 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:31.876 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.876 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:31.876 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:31.876 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:31.876 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:32:31.876 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:32.135 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:32.135 "name": "BaseBdev4", 00:32:32.135 "aliases": [ 00:32:32.135 "c338dbb4-267b-4757-8280-37e750db11ca" 00:32:32.135 ], 00:32:32.135 "product_name": "Malloc disk", 00:32:32.135 "block_size": 512, 00:32:32.135 "num_blocks": 65536, 00:32:32.135 "uuid": "c338dbb4-267b-4757-8280-37e750db11ca", 00:32:32.135 "assigned_rate_limits": { 00:32:32.135 "rw_ios_per_sec": 0, 00:32:32.135 "rw_mbytes_per_sec": 0, 00:32:32.135 "r_mbytes_per_sec": 0, 00:32:32.135 "w_mbytes_per_sec": 0 00:32:32.135 }, 00:32:32.135 "claimed": true, 00:32:32.135 "claim_type": "exclusive_write", 00:32:32.135 "zoned": false, 00:32:32.135 "supported_io_types": { 00:32:32.135 "read": true, 00:32:32.135 "write": true, 00:32:32.135 "unmap": true, 00:32:32.135 "write_zeroes": true, 00:32:32.135 "flush": true, 00:32:32.135 "reset": true, 00:32:32.135 "compare": false, 00:32:32.135 "compare_and_write": false, 00:32:32.135 "abort": true, 00:32:32.135 "nvme_admin": false, 00:32:32.135 "nvme_io": false 00:32:32.135 }, 00:32:32.135 "memory_domains": [ 00:32:32.135 { 00:32:32.135 "dma_device_id": "system", 00:32:32.135 "dma_device_type": 1 00:32:32.135 }, 00:32:32.135 { 00:32:32.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:32.135 "dma_device_type": 2 00:32:32.135 } 00:32:32.135 ], 00:32:32.135 "driver_specific": {} 00:32:32.135 }' 00:32:32.135 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:32.135 12:14:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:32.393 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:32.393 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:32.393 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:32.393 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:32.393 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:32:32.393 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:32.393 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:32.393 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:32.652 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:32.652 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:32.652 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:32.910 [2024-07-21 12:14:31.553879] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:32.910 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:32.911 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:32.911 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:32.911 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:32.911 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:33.169 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:33.169 "name": "Existed_Raid", 00:32:33.169 "uuid": "854471bf-b4f1-4fa7-8f1c-dc3124c5c9ee", 00:32:33.169 "strip_size_kb": 64, 00:32:33.169 "state": "online", 00:32:33.169 "raid_level": "raid5f", 00:32:33.169 "superblock": false, 00:32:33.169 "num_base_bdevs": 4, 00:32:33.169 "num_base_bdevs_discovered": 3, 00:32:33.169 "num_base_bdevs_operational": 3, 00:32:33.169 "base_bdevs_list": [ 00:32:33.169 { 00:32:33.169 "name": null, 00:32:33.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:33.169 "is_configured": false, 00:32:33.169 "data_offset": 0, 00:32:33.169 "data_size": 65536 
00:32:33.169 }, 00:32:33.169 { 00:32:33.169 "name": "BaseBdev2", 00:32:33.169 "uuid": "ea08e64e-305f-4e9a-91d8-af48eff5be47", 00:32:33.169 "is_configured": true, 00:32:33.169 "data_offset": 0, 00:32:33.169 "data_size": 65536 00:32:33.169 }, 00:32:33.169 { 00:32:33.169 "name": "BaseBdev3", 00:32:33.169 "uuid": "0a5ac1a6-676d-4d08-bf60-145b45688d56", 00:32:33.169 "is_configured": true, 00:32:33.169 "data_offset": 0, 00:32:33.169 "data_size": 65536 00:32:33.169 }, 00:32:33.169 { 00:32:33.169 "name": "BaseBdev4", 00:32:33.169 "uuid": "c338dbb4-267b-4757-8280-37e750db11ca", 00:32:33.169 "is_configured": true, 00:32:33.169 "data_offset": 0, 00:32:33.169 "data_size": 65536 00:32:33.169 } 00:32:33.169 ] 00:32:33.169 }' 00:32:33.169 12:14:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:33.169 12:14:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:33.734 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:32:33.734 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:33.734 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:33.734 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:33.992 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:33.992 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:33.992 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:32:34.251 [2024-07-21 12:14:32.882944] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:34.251 [2024-07-21 12:14:32.883216] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:34.251 [2024-07-21 12:14:32.892604] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:34.251 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:34.251 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:34.251 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.251 12:14:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:34.508 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:34.508 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:34.508 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:32:34.508 [2024-07-21 12:14:33.340722] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:34.508 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:34.508 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:34.508 12:14:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:34.508 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:32:34.766 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:32:34.766 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:32:34.766 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:32:35.023 [2024-07-21 12:14:33.742458] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:32:35.023 [2024-07-21 12:14:33.742673] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:32:35.023 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:32:35.023 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:32:35.023 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:35.023 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:32:35.281 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:32:35.281 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:32:35.281 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:32:35.281 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:32:35.281 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:35.281 12:14:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:35.539 BaseBdev2 00:32:35.539 12:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:32:35.539 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:35.539 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:35.539 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:35.539 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:35.539 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:35.539 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:35.797 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:35.797 [ 00:32:35.797 { 00:32:35.797 "name": "BaseBdev2", 00:32:35.797 "aliases": [ 00:32:35.797 "0173b7cb-1b3a-4be4-a365-792c891fdb3f" 00:32:35.797 ], 00:32:35.797 "product_name": "Malloc disk", 00:32:35.797 "block_size": 512, 00:32:35.797 "num_blocks": 65536, 00:32:35.797 "uuid": 
"0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:35.797 "assigned_rate_limits": { 00:32:35.797 "rw_ios_per_sec": 0, 00:32:35.797 "rw_mbytes_per_sec": 0, 00:32:35.797 "r_mbytes_per_sec": 0, 00:32:35.797 "w_mbytes_per_sec": 0 00:32:35.797 }, 00:32:35.797 "claimed": false, 00:32:35.797 "zoned": false, 00:32:35.797 "supported_io_types": { 00:32:35.797 "read": true, 00:32:35.797 "write": true, 00:32:35.797 "unmap": true, 00:32:35.797 "write_zeroes": true, 00:32:35.797 "flush": true, 00:32:35.797 "reset": true, 00:32:35.797 "compare": false, 00:32:35.797 "compare_and_write": false, 00:32:35.797 "abort": true, 00:32:35.797 "nvme_admin": false, 00:32:35.797 "nvme_io": false 00:32:35.797 }, 00:32:35.797 "memory_domains": [ 00:32:35.797 { 00:32:35.797 "dma_device_id": "system", 00:32:35.797 "dma_device_type": 1 00:32:35.797 }, 00:32:35.797 { 00:32:35.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:35.797 "dma_device_type": 2 00:32:35.797 } 00:32:35.797 ], 00:32:35.797 "driver_specific": {} 00:32:35.797 } 00:32:35.797 ] 00:32:35.797 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:35.797 12:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:35.797 12:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:35.797 12:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:36.055 BaseBdev3 00:32:36.055 12:14:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:32:36.055 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:32:36.055 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:36.055 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:36.055 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:36.055 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:36.055 12:14:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:36.313 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:36.571 [ 00:32:36.571 { 00:32:36.571 "name": "BaseBdev3", 00:32:36.571 "aliases": [ 00:32:36.571 "bc36ad28-93b5-4e3e-b96e-33e18de1c71d" 00:32:36.571 ], 00:32:36.571 "product_name": "Malloc disk", 00:32:36.571 "block_size": 512, 00:32:36.571 "num_blocks": 65536, 00:32:36.571 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:36.571 "assigned_rate_limits": { 00:32:36.571 "rw_ios_per_sec": 0, 00:32:36.571 "rw_mbytes_per_sec": 0, 00:32:36.571 "r_mbytes_per_sec": 0, 00:32:36.571 "w_mbytes_per_sec": 0 00:32:36.571 }, 00:32:36.571 "claimed": false, 00:32:36.571 "zoned": false, 00:32:36.571 "supported_io_types": { 00:32:36.571 "read": true, 00:32:36.571 "write": true, 00:32:36.571 "unmap": true, 00:32:36.571 "write_zeroes": true, 00:32:36.571 "flush": true, 00:32:36.571 "reset": true, 00:32:36.571 "compare": false, 00:32:36.571 "compare_and_write": false, 00:32:36.571 "abort": true, 00:32:36.571 
"nvme_admin": false, 00:32:36.571 "nvme_io": false 00:32:36.571 }, 00:32:36.571 "memory_domains": [ 00:32:36.571 { 00:32:36.571 "dma_device_id": "system", 00:32:36.571 "dma_device_type": 1 00:32:36.571 }, 00:32:36.571 { 00:32:36.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:36.571 "dma_device_type": 2 00:32:36.571 } 00:32:36.571 ], 00:32:36.571 "driver_specific": {} 00:32:36.571 } 00:32:36.571 ] 00:32:36.571 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:36.571 12:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:36.571 12:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:36.571 12:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:36.830 BaseBdev4 00:32:36.830 12:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:32:36.830 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:32:36.830 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:36.830 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:36.830 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:36.830 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:36.830 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:37.088 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:32:37.088 [ 00:32:37.088 { 00:32:37.088 "name": "BaseBdev4", 00:32:37.088 "aliases": [ 00:32:37.088 "767d1033-8dc3-4841-9dc5-f950e636aef5" 00:32:37.088 ], 00:32:37.088 "product_name": "Malloc disk", 00:32:37.088 "block_size": 512, 00:32:37.088 "num_blocks": 65536, 00:32:37.088 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:37.088 "assigned_rate_limits": { 00:32:37.088 "rw_ios_per_sec": 0, 00:32:37.088 "rw_mbytes_per_sec": 0, 00:32:37.088 "r_mbytes_per_sec": 0, 00:32:37.088 "w_mbytes_per_sec": 0 00:32:37.089 }, 00:32:37.089 "claimed": false, 00:32:37.089 "zoned": false, 00:32:37.089 "supported_io_types": { 00:32:37.089 "read": true, 00:32:37.089 "write": true, 00:32:37.089 "unmap": true, 00:32:37.089 "write_zeroes": true, 00:32:37.089 "flush": true, 00:32:37.089 "reset": true, 00:32:37.089 "compare": false, 00:32:37.089 "compare_and_write": false, 00:32:37.089 "abort": true, 00:32:37.089 "nvme_admin": false, 00:32:37.089 "nvme_io": false 00:32:37.089 }, 00:32:37.089 "memory_domains": [ 00:32:37.089 { 00:32:37.089 "dma_device_id": "system", 00:32:37.089 "dma_device_type": 1 00:32:37.089 }, 00:32:37.089 { 00:32:37.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:37.089 "dma_device_type": 2 00:32:37.089 } 00:32:37.089 ], 00:32:37.089 "driver_specific": {} 00:32:37.089 } 00:32:37.089 ] 00:32:37.347 12:14:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:37.347 12:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:32:37.347 
12:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:32:37.347 12:14:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:37.347 [2024-07-21 12:14:36.134159] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:37.347 [2024-07-21 12:14:36.134350] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:37.347 [2024-07-21 12:14:36.134461] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:37.347 [2024-07-21 12:14:36.136406] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:37.347 [2024-07-21 12:14:36.136583] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:37.347 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:37.606 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:37.606 "name": "Existed_Raid", 00:32:37.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.606 "strip_size_kb": 64, 00:32:37.606 "state": "configuring", 00:32:37.606 "raid_level": "raid5f", 00:32:37.606 "superblock": false, 00:32:37.606 "num_base_bdevs": 4, 00:32:37.606 "num_base_bdevs_discovered": 3, 00:32:37.606 "num_base_bdevs_operational": 4, 00:32:37.606 "base_bdevs_list": [ 00:32:37.606 { 00:32:37.606 "name": "BaseBdev1", 00:32:37.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:37.606 "is_configured": false, 00:32:37.606 "data_offset": 0, 00:32:37.606 "data_size": 0 00:32:37.606 }, 00:32:37.606 { 00:32:37.606 "name": "BaseBdev2", 00:32:37.606 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:37.606 "is_configured": true, 00:32:37.606 "data_offset": 0, 00:32:37.606 "data_size": 65536 00:32:37.606 }, 00:32:37.606 { 00:32:37.606 "name": "BaseBdev3", 00:32:37.606 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 
00:32:37.606 "is_configured": true, 00:32:37.606 "data_offset": 0, 00:32:37.606 "data_size": 65536 00:32:37.606 }, 00:32:37.606 { 00:32:37.606 "name": "BaseBdev4", 00:32:37.606 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:37.606 "is_configured": true, 00:32:37.606 "data_offset": 0, 00:32:37.606 "data_size": 65536 00:32:37.606 } 00:32:37.606 ] 00:32:37.606 }' 00:32:37.606 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:37.606 12:14:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:38.173 12:14:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:32:38.431 [2024-07-21 12:14:37.178375] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:38.431 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:38.431 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:38.431 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:38.431 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:38.432 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:38.432 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:38.432 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:38.432 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:38.432 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:38.432 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:38.432 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:38.432 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:38.708 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:38.708 "name": "Existed_Raid", 00:32:38.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.708 "strip_size_kb": 64, 00:32:38.708 "state": "configuring", 00:32:38.708 "raid_level": "raid5f", 00:32:38.708 "superblock": false, 00:32:38.708 "num_base_bdevs": 4, 00:32:38.708 "num_base_bdevs_discovered": 2, 00:32:38.708 "num_base_bdevs_operational": 4, 00:32:38.708 "base_bdevs_list": [ 00:32:38.708 { 00:32:38.708 "name": "BaseBdev1", 00:32:38.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.708 "is_configured": false, 00:32:38.708 "data_offset": 0, 00:32:38.708 "data_size": 0 00:32:38.708 }, 00:32:38.708 { 00:32:38.708 "name": null, 00:32:38.708 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:38.708 "is_configured": false, 00:32:38.708 "data_offset": 0, 00:32:38.708 "data_size": 65536 00:32:38.708 }, 00:32:38.708 { 00:32:38.708 "name": "BaseBdev3", 00:32:38.708 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:38.708 "is_configured": true, 00:32:38.708 "data_offset": 0, 00:32:38.708 "data_size": 65536 00:32:38.708 }, 
00:32:38.708 { 00:32:38.708 "name": "BaseBdev4", 00:32:38.708 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:38.708 "is_configured": true, 00:32:38.708 "data_offset": 0, 00:32:38.708 "data_size": 65536 00:32:38.708 } 00:32:38.708 ] 00:32:38.708 }' 00:32:38.708 12:14:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:38.708 12:14:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.275 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:39.275 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:39.534 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:32:39.534 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:39.814 [2024-07-21 12:14:38.518935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:39.814 BaseBdev1 00:32:39.814 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:32:39.814 12:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:32:39.814 12:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:39.814 12:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:39.814 12:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:39.814 12:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:39.814 12:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:40.079 12:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:40.338 [ 00:32:40.338 { 00:32:40.338 "name": "BaseBdev1", 00:32:40.338 "aliases": [ 00:32:40.338 "45dbc928-2857-4800-9a46-4a16d139213a" 00:32:40.338 ], 00:32:40.338 "product_name": "Malloc disk", 00:32:40.338 "block_size": 512, 00:32:40.338 "num_blocks": 65536, 00:32:40.338 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:40.338 "assigned_rate_limits": { 00:32:40.338 "rw_ios_per_sec": 0, 00:32:40.338 "rw_mbytes_per_sec": 0, 00:32:40.338 "r_mbytes_per_sec": 0, 00:32:40.338 "w_mbytes_per_sec": 0 00:32:40.338 }, 00:32:40.338 "claimed": true, 00:32:40.338 "claim_type": "exclusive_write", 00:32:40.338 "zoned": false, 00:32:40.338 "supported_io_types": { 00:32:40.338 "read": true, 00:32:40.338 "write": true, 00:32:40.338 "unmap": true, 00:32:40.338 "write_zeroes": true, 00:32:40.338 "flush": true, 00:32:40.338 "reset": true, 00:32:40.338 "compare": false, 00:32:40.338 "compare_and_write": false, 00:32:40.338 "abort": true, 00:32:40.338 "nvme_admin": false, 00:32:40.338 "nvme_io": false 00:32:40.338 }, 00:32:40.338 "memory_domains": [ 00:32:40.338 { 00:32:40.338 "dma_device_id": "system", 00:32:40.338 "dma_device_type": 1 00:32:40.338 }, 00:32:40.338 { 00:32:40.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:32:40.338 "dma_device_type": 2 00:32:40.338 } 00:32:40.338 ], 00:32:40.338 "driver_specific": {} 00:32:40.338 } 00:32:40.338 ] 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:40.338 12:14:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:40.338 12:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:40.338 "name": "Existed_Raid", 00:32:40.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.338 "strip_size_kb": 64, 00:32:40.338 "state": "configuring", 00:32:40.338 "raid_level": "raid5f", 00:32:40.339 "superblock": false, 00:32:40.339 "num_base_bdevs": 4, 00:32:40.339 "num_base_bdevs_discovered": 3, 00:32:40.339 "num_base_bdevs_operational": 4, 00:32:40.339 "base_bdevs_list": [ 00:32:40.339 { 00:32:40.339 "name": "BaseBdev1", 00:32:40.339 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:40.339 "is_configured": true, 00:32:40.339 "data_offset": 0, 00:32:40.339 "data_size": 65536 00:32:40.339 }, 00:32:40.339 { 00:32:40.339 "name": null, 00:32:40.339 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:40.339 "is_configured": false, 00:32:40.339 "data_offset": 0, 00:32:40.339 "data_size": 65536 00:32:40.339 }, 00:32:40.339 { 00:32:40.339 "name": "BaseBdev3", 00:32:40.339 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:40.339 "is_configured": true, 00:32:40.339 "data_offset": 0, 00:32:40.339 "data_size": 65536 00:32:40.339 }, 00:32:40.339 { 00:32:40.339 "name": "BaseBdev4", 00:32:40.339 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:40.339 "is_configured": true, 00:32:40.339 "data_offset": 0, 00:32:40.339 "data_size": 65536 00:32:40.339 } 00:32:40.339 ] 00:32:40.339 }' 00:32:40.339 12:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:40.339 12:14:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.906 12:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:32:40.906 12:14:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:41.165 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:32:41.165 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:32:41.423 [2024-07-21 12:14:40.227835] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:41.423 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:41.682 12:14:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:41.682 "name": "Existed_Raid", 00:32:41.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:41.682 "strip_size_kb": 64, 00:32:41.682 "state": "configuring", 00:32:41.682 "raid_level": "raid5f", 00:32:41.682 "superblock": false, 00:32:41.682 "num_base_bdevs": 4, 00:32:41.682 "num_base_bdevs_discovered": 2, 00:32:41.682 "num_base_bdevs_operational": 4, 00:32:41.682 "base_bdevs_list": [ 00:32:41.682 { 00:32:41.682 "name": "BaseBdev1", 00:32:41.682 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:41.682 "is_configured": true, 00:32:41.682 "data_offset": 0, 00:32:41.682 "data_size": 65536 00:32:41.682 }, 00:32:41.682 { 00:32:41.682 "name": null, 00:32:41.682 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:41.682 "is_configured": false, 00:32:41.682 "data_offset": 0, 00:32:41.682 "data_size": 65536 00:32:41.682 }, 00:32:41.682 { 00:32:41.682 "name": null, 00:32:41.682 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:41.682 "is_configured": false, 00:32:41.682 "data_offset": 0, 00:32:41.682 "data_size": 65536 00:32:41.682 }, 00:32:41.682 { 00:32:41.682 "name": "BaseBdev4", 00:32:41.682 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:41.682 "is_configured": true, 00:32:41.682 "data_offset": 0, 00:32:41.682 "data_size": 65536 00:32:41.682 } 00:32:41.682 ] 00:32:41.682 }' 00:32:41.682 12:14:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:41.682 12:14:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.249 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.249 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:32:42.815 [2024-07-21 12:14:41.552129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:42.815 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:43.073 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:43.073 "name": "Existed_Raid", 00:32:43.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.073 "strip_size_kb": 64, 00:32:43.073 "state": "configuring", 00:32:43.073 "raid_level": "raid5f", 00:32:43.073 "superblock": false, 00:32:43.073 "num_base_bdevs": 4, 00:32:43.073 "num_base_bdevs_discovered": 3, 00:32:43.073 "num_base_bdevs_operational": 4, 00:32:43.073 "base_bdevs_list": [ 00:32:43.073 { 00:32:43.073 "name": "BaseBdev1", 00:32:43.073 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:43.073 "is_configured": true, 00:32:43.073 "data_offset": 0, 00:32:43.073 "data_size": 65536 00:32:43.073 }, 00:32:43.073 { 00:32:43.073 "name": null, 00:32:43.073 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:43.073 "is_configured": false, 00:32:43.073 "data_offset": 0, 00:32:43.073 "data_size": 65536 00:32:43.073 }, 00:32:43.073 { 00:32:43.073 "name": "BaseBdev3", 00:32:43.073 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:43.073 "is_configured": true, 00:32:43.073 "data_offset": 0, 00:32:43.073 
"data_size": 65536 00:32:43.073 }, 00:32:43.073 { 00:32:43.073 "name": "BaseBdev4", 00:32:43.073 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:43.073 "is_configured": true, 00:32:43.073 "data_offset": 0, 00:32:43.073 "data_size": 65536 00:32:43.073 } 00:32:43.073 ] 00:32:43.073 }' 00:32:43.073 12:14:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:43.073 12:14:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.639 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:43.639 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:32:43.898 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:32:43.898 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:32:44.156 [2024-07-21 12:14:42.839191] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.156 12:14:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:44.414 12:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:44.414 "name": "Existed_Raid", 00:32:44.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.414 "strip_size_kb": 64, 00:32:44.414 "state": "configuring", 00:32:44.414 "raid_level": "raid5f", 00:32:44.414 "superblock": false, 00:32:44.414 "num_base_bdevs": 4, 00:32:44.414 "num_base_bdevs_discovered": 2, 00:32:44.414 "num_base_bdevs_operational": 4, 00:32:44.414 "base_bdevs_list": [ 00:32:44.414 { 00:32:44.414 "name": null, 00:32:44.414 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:44.414 "is_configured": false, 00:32:44.414 "data_offset": 0, 00:32:44.414 "data_size": 65536 00:32:44.414 }, 00:32:44.414 { 00:32:44.414 "name": null, 00:32:44.414 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 
00:32:44.414 "is_configured": false, 00:32:44.414 "data_offset": 0, 00:32:44.414 "data_size": 65536 00:32:44.414 }, 00:32:44.414 { 00:32:44.414 "name": "BaseBdev3", 00:32:44.414 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:44.414 "is_configured": true, 00:32:44.414 "data_offset": 0, 00:32:44.414 "data_size": 65536 00:32:44.414 }, 00:32:44.414 { 00:32:44.414 "name": "BaseBdev4", 00:32:44.414 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:44.414 "is_configured": true, 00:32:44.414 "data_offset": 0, 00:32:44.414 "data_size": 65536 00:32:44.414 } 00:32:44.414 ] 00:32:44.414 }' 00:32:44.414 12:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:44.414 12:14:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.981 12:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:44.981 12:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:32:44.981 12:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:32:44.981 12:14:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:32:45.238 [2024-07-21 12:14:43.998986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:45.238 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:45.238 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:45.238 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:45.238 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:45.238 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:45.238 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:45.238 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:45.239 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:45.239 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:45.239 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:45.239 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:45.239 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:45.496 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:45.497 "name": "Existed_Raid", 00:32:45.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:45.497 "strip_size_kb": 64, 00:32:45.497 "state": "configuring", 00:32:45.497 "raid_level": "raid5f", 00:32:45.497 "superblock": false, 00:32:45.497 "num_base_bdevs": 4, 00:32:45.497 "num_base_bdevs_discovered": 3, 00:32:45.497 "num_base_bdevs_operational": 4, 00:32:45.497 
"base_bdevs_list": [ 00:32:45.497 { 00:32:45.497 "name": null, 00:32:45.497 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:45.497 "is_configured": false, 00:32:45.497 "data_offset": 0, 00:32:45.497 "data_size": 65536 00:32:45.497 }, 00:32:45.497 { 00:32:45.497 "name": "BaseBdev2", 00:32:45.497 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:45.497 "is_configured": true, 00:32:45.497 "data_offset": 0, 00:32:45.497 "data_size": 65536 00:32:45.497 }, 00:32:45.497 { 00:32:45.497 "name": "BaseBdev3", 00:32:45.497 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:45.497 "is_configured": true, 00:32:45.497 "data_offset": 0, 00:32:45.497 "data_size": 65536 00:32:45.497 }, 00:32:45.497 { 00:32:45.497 "name": "BaseBdev4", 00:32:45.497 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:45.497 "is_configured": true, 00:32:45.497 "data_offset": 0, 00:32:45.497 "data_size": 65536 00:32:45.497 } 00:32:45.497 ] 00:32:45.497 }' 00:32:45.497 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:45.497 12:14:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:46.063 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.063 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:32:46.321 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:32:46.322 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:32:46.322 12:14:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:46.580 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 45dbc928-2857-4800-9a46-4a16d139213a 00:32:46.838 [2024-07-21 12:14:45.494226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:32:46.838 [2024-07-21 12:14:45.494501] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:32:46.838 [2024-07-21 12:14:45.494546] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:32:46.838 [2024-07-21 12:14:45.494753] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:32:46.838 [2024-07-21 12:14:45.495590] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:32:46.838 [2024-07-21 12:14:45.495741] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009080 00:32:46.838 [2024-07-21 12:14:45.496051] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:46.838 NewBaseBdev 00:32:46.838 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:32:46.838 12:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:32:46.838 12:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:46.838 12:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@897 -- # local i 00:32:46.838 12:14:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:46.838 12:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:46.838 12:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:47.096 12:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:32:47.096 [ 00:32:47.096 { 00:32:47.096 "name": "NewBaseBdev", 00:32:47.096 "aliases": [ 00:32:47.096 "45dbc928-2857-4800-9a46-4a16d139213a" 00:32:47.096 ], 00:32:47.096 "product_name": "Malloc disk", 00:32:47.096 "block_size": 512, 00:32:47.096 "num_blocks": 65536, 00:32:47.096 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:47.096 "assigned_rate_limits": { 00:32:47.096 "rw_ios_per_sec": 0, 00:32:47.096 "rw_mbytes_per_sec": 0, 00:32:47.096 "r_mbytes_per_sec": 0, 00:32:47.096 "w_mbytes_per_sec": 0 00:32:47.096 }, 00:32:47.096 "claimed": true, 00:32:47.096 "claim_type": "exclusive_write", 00:32:47.096 "zoned": false, 00:32:47.096 "supported_io_types": { 00:32:47.096 "read": true, 00:32:47.096 "write": true, 00:32:47.096 "unmap": true, 00:32:47.096 "write_zeroes": true, 00:32:47.096 "flush": true, 00:32:47.096 "reset": true, 00:32:47.096 "compare": false, 00:32:47.096 "compare_and_write": false, 00:32:47.096 "abort": true, 00:32:47.096 "nvme_admin": false, 00:32:47.096 "nvme_io": false 00:32:47.096 }, 00:32:47.096 "memory_domains": [ 00:32:47.096 { 00:32:47.096 "dma_device_id": "system", 00:32:47.096 "dma_device_type": 1 00:32:47.096 }, 00:32:47.096 { 00:32:47.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:47.096 "dma_device_type": 2 00:32:47.096 } 00:32:47.096 ], 00:32:47.096 "driver_specific": {} 00:32:47.096 } 00:32:47.096 ] 00:32:47.096 12:14:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # return 0 00:32:47.096 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:32:47.096 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:47.096 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:32:47.096 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:47.097 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:47.097 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:47.097 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:47.097 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:47.097 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:47.097 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:47.097 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:47.097 12:14:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:32:47.355 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:47.355 "name": "Existed_Raid", 00:32:47.355 "uuid": "6ace21e7-4767-4ee3-9809-cf639b624e24", 00:32:47.355 "strip_size_kb": 64, 00:32:47.355 "state": "online", 00:32:47.355 "raid_level": "raid5f", 00:32:47.355 "superblock": false, 00:32:47.355 "num_base_bdevs": 4, 00:32:47.355 "num_base_bdevs_discovered": 4, 00:32:47.355 "num_base_bdevs_operational": 4, 00:32:47.355 "base_bdevs_list": [ 00:32:47.355 { 00:32:47.355 "name": "NewBaseBdev", 00:32:47.355 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:47.355 "is_configured": true, 00:32:47.355 "data_offset": 0, 00:32:47.355 "data_size": 65536 00:32:47.355 }, 00:32:47.355 { 00:32:47.355 "name": "BaseBdev2", 00:32:47.355 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:47.355 "is_configured": true, 00:32:47.355 "data_offset": 0, 00:32:47.355 "data_size": 65536 00:32:47.355 }, 00:32:47.355 { 00:32:47.355 "name": "BaseBdev3", 00:32:47.355 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:47.355 "is_configured": true, 00:32:47.355 "data_offset": 0, 00:32:47.355 "data_size": 65536 00:32:47.355 }, 00:32:47.355 { 00:32:47.355 "name": "BaseBdev4", 00:32:47.355 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:47.355 "is_configured": true, 00:32:47.355 "data_offset": 0, 00:32:47.355 "data_size": 65536 00:32:47.355 } 00:32:47.355 ] 00:32:47.355 }' 00:32:47.355 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:47.355 12:14:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:47.921 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:32:47.921 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:32:47.921 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:32:47.921 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:32:47.921 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:32:47.921 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:32:47.922 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:32:47.922 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:32:48.180 [2024-07-21 12:14:46.883129] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:48.181 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:32:48.181 "name": "Existed_Raid", 00:32:48.181 "aliases": [ 00:32:48.181 "6ace21e7-4767-4ee3-9809-cf639b624e24" 00:32:48.181 ], 00:32:48.181 "product_name": "Raid Volume", 00:32:48.181 "block_size": 512, 00:32:48.181 "num_blocks": 196608, 00:32:48.181 "uuid": "6ace21e7-4767-4ee3-9809-cf639b624e24", 00:32:48.181 "assigned_rate_limits": { 00:32:48.181 "rw_ios_per_sec": 0, 00:32:48.181 "rw_mbytes_per_sec": 0, 00:32:48.181 "r_mbytes_per_sec": 0, 00:32:48.181 "w_mbytes_per_sec": 0 00:32:48.181 }, 00:32:48.181 "claimed": false, 00:32:48.181 "zoned": false, 00:32:48.181 "supported_io_types": { 00:32:48.181 "read": true, 00:32:48.181 "write": true, 00:32:48.181 "unmap": false, 00:32:48.181 "write_zeroes": 
true, 00:32:48.181 "flush": false, 00:32:48.181 "reset": true, 00:32:48.181 "compare": false, 00:32:48.181 "compare_and_write": false, 00:32:48.181 "abort": false, 00:32:48.181 "nvme_admin": false, 00:32:48.181 "nvme_io": false 00:32:48.181 }, 00:32:48.181 "driver_specific": { 00:32:48.181 "raid": { 00:32:48.181 "uuid": "6ace21e7-4767-4ee3-9809-cf639b624e24", 00:32:48.181 "strip_size_kb": 64, 00:32:48.181 "state": "online", 00:32:48.181 "raid_level": "raid5f", 00:32:48.181 "superblock": false, 00:32:48.181 "num_base_bdevs": 4, 00:32:48.181 "num_base_bdevs_discovered": 4, 00:32:48.181 "num_base_bdevs_operational": 4, 00:32:48.181 "base_bdevs_list": [ 00:32:48.181 { 00:32:48.181 "name": "NewBaseBdev", 00:32:48.181 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:48.181 "is_configured": true, 00:32:48.181 "data_offset": 0, 00:32:48.181 "data_size": 65536 00:32:48.181 }, 00:32:48.181 { 00:32:48.181 "name": "BaseBdev2", 00:32:48.181 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:48.181 "is_configured": true, 00:32:48.181 "data_offset": 0, 00:32:48.181 "data_size": 65536 00:32:48.181 }, 00:32:48.181 { 00:32:48.181 "name": "BaseBdev3", 00:32:48.181 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:48.181 "is_configured": true, 00:32:48.181 "data_offset": 0, 00:32:48.181 "data_size": 65536 00:32:48.181 }, 00:32:48.181 { 00:32:48.181 "name": "BaseBdev4", 00:32:48.181 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:48.181 "is_configured": true, 00:32:48.181 "data_offset": 0, 00:32:48.181 "data_size": 65536 00:32:48.181 } 00:32:48.181 ] 00:32:48.181 } 00:32:48.181 } 00:32:48.181 }' 00:32:48.181 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:32:48.181 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:32:48.181 BaseBdev2 00:32:48.181 BaseBdev3 00:32:48.181 BaseBdev4' 00:32:48.181 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:48.181 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:32:48.181 12:14:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:48.440 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:48.440 "name": "NewBaseBdev", 00:32:48.440 "aliases": [ 00:32:48.440 "45dbc928-2857-4800-9a46-4a16d139213a" 00:32:48.440 ], 00:32:48.440 "product_name": "Malloc disk", 00:32:48.440 "block_size": 512, 00:32:48.440 "num_blocks": 65536, 00:32:48.440 "uuid": "45dbc928-2857-4800-9a46-4a16d139213a", 00:32:48.440 "assigned_rate_limits": { 00:32:48.440 "rw_ios_per_sec": 0, 00:32:48.440 "rw_mbytes_per_sec": 0, 00:32:48.440 "r_mbytes_per_sec": 0, 00:32:48.440 "w_mbytes_per_sec": 0 00:32:48.440 }, 00:32:48.440 "claimed": true, 00:32:48.440 "claim_type": "exclusive_write", 00:32:48.440 "zoned": false, 00:32:48.440 "supported_io_types": { 00:32:48.440 "read": true, 00:32:48.440 "write": true, 00:32:48.440 "unmap": true, 00:32:48.440 "write_zeroes": true, 00:32:48.440 "flush": true, 00:32:48.440 "reset": true, 00:32:48.440 "compare": false, 00:32:48.440 "compare_and_write": false, 00:32:48.440 "abort": true, 00:32:48.440 "nvme_admin": false, 00:32:48.440 "nvme_io": false 00:32:48.440 }, 00:32:48.440 "memory_domains": [ 00:32:48.440 { 00:32:48.440 
"dma_device_id": "system", 00:32:48.440 "dma_device_type": 1 00:32:48.440 }, 00:32:48.440 { 00:32:48.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.440 "dma_device_type": 2 00:32:48.440 } 00:32:48.440 ], 00:32:48.440 "driver_specific": {} 00:32:48.440 }' 00:32:48.440 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:48.440 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:48.440 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:48.440 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:48.440 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:48.698 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:32:48.956 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:48.956 "name": "BaseBdev2", 00:32:48.956 "aliases": [ 00:32:48.956 "0173b7cb-1b3a-4be4-a365-792c891fdb3f" 00:32:48.956 ], 00:32:48.956 "product_name": "Malloc disk", 00:32:48.956 "block_size": 512, 00:32:48.956 "num_blocks": 65536, 00:32:48.956 "uuid": "0173b7cb-1b3a-4be4-a365-792c891fdb3f", 00:32:48.956 "assigned_rate_limits": { 00:32:48.956 "rw_ios_per_sec": 0, 00:32:48.956 "rw_mbytes_per_sec": 0, 00:32:48.956 "r_mbytes_per_sec": 0, 00:32:48.956 "w_mbytes_per_sec": 0 00:32:48.956 }, 00:32:48.956 "claimed": true, 00:32:48.956 "claim_type": "exclusive_write", 00:32:48.956 "zoned": false, 00:32:48.956 "supported_io_types": { 00:32:48.956 "read": true, 00:32:48.956 "write": true, 00:32:48.956 "unmap": true, 00:32:48.956 "write_zeroes": true, 00:32:48.956 "flush": true, 00:32:48.956 "reset": true, 00:32:48.956 "compare": false, 00:32:48.956 "compare_and_write": false, 00:32:48.956 "abort": true, 00:32:48.956 "nvme_admin": false, 00:32:48.956 "nvme_io": false 00:32:48.956 }, 00:32:48.956 "memory_domains": [ 00:32:48.956 { 00:32:48.956 "dma_device_id": "system", 00:32:48.956 "dma_device_type": 1 00:32:48.956 }, 00:32:48.956 { 00:32:48.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:48.956 "dma_device_type": 2 00:32:48.956 } 00:32:48.956 ], 00:32:48.956 "driver_specific": {} 00:32:48.956 }' 00:32:48.956 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:49.214 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq 
.block_size 00:32:49.214 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:49.214 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:49.214 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:49.214 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:49.214 12:14:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:49.214 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:49.471 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:49.471 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:49.471 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:49.471 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:49.471 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:49.471 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:32:49.471 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:49.729 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:49.729 "name": "BaseBdev3", 00:32:49.729 "aliases": [ 00:32:49.729 "bc36ad28-93b5-4e3e-b96e-33e18de1c71d" 00:32:49.729 ], 00:32:49.729 "product_name": "Malloc disk", 00:32:49.729 "block_size": 512, 00:32:49.729 "num_blocks": 65536, 00:32:49.729 "uuid": "bc36ad28-93b5-4e3e-b96e-33e18de1c71d", 00:32:49.729 "assigned_rate_limits": { 00:32:49.729 "rw_ios_per_sec": 0, 00:32:49.729 "rw_mbytes_per_sec": 0, 00:32:49.729 "r_mbytes_per_sec": 0, 00:32:49.729 "w_mbytes_per_sec": 0 00:32:49.729 }, 00:32:49.729 "claimed": true, 00:32:49.729 "claim_type": "exclusive_write", 00:32:49.729 "zoned": false, 00:32:49.729 "supported_io_types": { 00:32:49.729 "read": true, 00:32:49.729 "write": true, 00:32:49.729 "unmap": true, 00:32:49.729 "write_zeroes": true, 00:32:49.729 "flush": true, 00:32:49.729 "reset": true, 00:32:49.729 "compare": false, 00:32:49.729 "compare_and_write": false, 00:32:49.729 "abort": true, 00:32:49.729 "nvme_admin": false, 00:32:49.729 "nvme_io": false 00:32:49.729 }, 00:32:49.729 "memory_domains": [ 00:32:49.729 { 00:32:49.729 "dma_device_id": "system", 00:32:49.729 "dma_device_type": 1 00:32:49.729 }, 00:32:49.729 { 00:32:49.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:49.729 "dma_device_type": 2 00:32:49.729 } 00:32:49.729 ], 00:32:49.729 "driver_specific": {} 00:32:49.729 }' 00:32:49.729 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:49.729 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:49.729 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:49.729 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:49.987 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:49.987 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:49.987 12:14:48 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:49.987 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:49.987 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:49.987 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:49.987 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:50.244 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:50.244 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:32:50.244 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:32:50.244 12:14:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:32:50.244 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:32:50.244 "name": "BaseBdev4", 00:32:50.244 "aliases": [ 00:32:50.244 "767d1033-8dc3-4841-9dc5-f950e636aef5" 00:32:50.244 ], 00:32:50.244 "product_name": "Malloc disk", 00:32:50.244 "block_size": 512, 00:32:50.244 "num_blocks": 65536, 00:32:50.245 "uuid": "767d1033-8dc3-4841-9dc5-f950e636aef5", 00:32:50.245 "assigned_rate_limits": { 00:32:50.245 "rw_ios_per_sec": 0, 00:32:50.245 "rw_mbytes_per_sec": 0, 00:32:50.245 "r_mbytes_per_sec": 0, 00:32:50.245 "w_mbytes_per_sec": 0 00:32:50.245 }, 00:32:50.245 "claimed": true, 00:32:50.245 "claim_type": "exclusive_write", 00:32:50.245 "zoned": false, 00:32:50.245 "supported_io_types": { 00:32:50.245 "read": true, 00:32:50.245 "write": true, 00:32:50.245 "unmap": true, 00:32:50.245 "write_zeroes": true, 00:32:50.245 "flush": true, 00:32:50.245 "reset": true, 00:32:50.245 "compare": false, 00:32:50.245 "compare_and_write": false, 00:32:50.245 "abort": true, 00:32:50.245 "nvme_admin": false, 00:32:50.245 "nvme_io": false 00:32:50.245 }, 00:32:50.245 "memory_domains": [ 00:32:50.245 { 00:32:50.245 "dma_device_id": "system", 00:32:50.245 "dma_device_type": 1 00:32:50.245 }, 00:32:50.245 { 00:32:50.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:50.245 "dma_device_type": 2 00:32:50.245 } 00:32:50.245 ], 00:32:50.245 "driver_specific": {} 00:32:50.245 }' 00:32:50.245 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:50.245 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:32:50.503 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:32:50.503 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:50.503 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:32:50.503 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:32:50.503 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:50.503 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:32:50.503 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:32:50.503 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:50.761 12:14:49 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:32:50.761 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:32:50.761 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:51.019 [2024-07-21 12:14:49.727584] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:51.019 [2024-07-21 12:14:49.727743] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:51.019 [2024-07-21 12:14:49.727945] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:51.019 [2024-07-21 12:14:49.728399] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:51.019 [2024-07-21 12:14:49.728561] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name Existed_Raid, state offline 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 164346 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@946 -- # '[' -z 164346 ']' 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # kill -0 164346 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # uname 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 164346 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 164346' 00:32:51.019 killing process with pid 164346 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@965 -- # kill 164346 00:32:51.019 [2024-07-21 12:14:49.773255] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:51.019 12:14:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # wait 164346 00:32:51.019 [2024-07-21 12:14:49.820669] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:51.276 12:14:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:32:51.276 00:32:51.276 real 0m31.602s 00:32:51.276 user 1m0.357s 00:32:51.276 sys 0m3.555s 00:32:51.276 12:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:32:51.276 12:14:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:32:51.276 ************************************ 00:32:51.276 END TEST raid5f_state_function_test 00:32:51.276 ************************************ 00:32:51.534 12:14:50 bdev_raid -- bdev/bdev_raid.sh@887 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:32:51.534 12:14:50 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:32:51.534 12:14:50 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:51.534 12:14:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:51.534 ************************************ 
00:32:51.534 START TEST raid5f_state_function_test_sb 00:32:51.534 ************************************ 00:32:51.534 12:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1121 -- # raid_state_function_test raid5f 4 true 00:32:51.534 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid5f 00:32:51.534 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:32:51.534 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:32:51.534 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:32:51.534 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev3 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # echo BaseBdev4 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid5f '!=' raid1 ']' 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:32:51.535 
12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=165410 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:32:51.535 Process raid pid: 165410 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 165410' 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 165410 /var/tmp/spdk-raid.sock 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@827 -- # '[' -z 165410 ']' 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:32:51.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:32:51.535 12:14:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:51.535 [2024-07-21 12:14:50.276965] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:32:51.535 [2024-07-21 12:14:50.277408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.792 [2024-07-21 12:14:50.447028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.792 [2024-07-21 12:14:50.520380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.792 [2024-07-21 12:14:50.581195] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # return 0 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:52.725 [2024-07-21 12:14:51.415506] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:52.725 [2024-07-21 12:14:51.415716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:52.725 [2024-07-21 12:14:51.415825] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:52.725 [2024-07-21 12:14:51.415884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:52.725 [2024-07-21 12:14:51.415971] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:52.725 [2024-07-21 12:14:51.416109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:52.725 [2024-07-21 12:14:51.416203] bdev.c:8114:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:32:52.725 [2024-07-21 12:14:51.416260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:52.725 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:52.726 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:52.726 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:52.983 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:52.983 "name": "Existed_Raid", 00:32:52.983 "uuid": "d68397c1-2da4-4c45-a67d-0bfe07a8bc04", 00:32:52.983 "strip_size_kb": 64, 00:32:52.983 "state": "configuring", 00:32:52.983 "raid_level": "raid5f", 00:32:52.983 "superblock": true, 00:32:52.983 "num_base_bdevs": 4, 00:32:52.983 "num_base_bdevs_discovered": 0, 00:32:52.983 "num_base_bdevs_operational": 4, 00:32:52.983 "base_bdevs_list": [ 00:32:52.983 { 00:32:52.983 "name": "BaseBdev1", 00:32:52.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.983 "is_configured": false, 00:32:52.983 "data_offset": 0, 00:32:52.983 "data_size": 0 00:32:52.983 }, 00:32:52.983 { 00:32:52.983 "name": "BaseBdev2", 00:32:52.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.983 "is_configured": false, 00:32:52.983 "data_offset": 0, 00:32:52.983 "data_size": 0 00:32:52.983 }, 00:32:52.983 { 00:32:52.983 "name": "BaseBdev3", 00:32:52.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.983 "is_configured": false, 00:32:52.983 "data_offset": 0, 00:32:52.983 "data_size": 0 00:32:52.983 }, 00:32:52.983 { 00:32:52.983 "name": "BaseBdev4", 00:32:52.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:52.984 "is_configured": false, 00:32:52.984 "data_offset": 0, 00:32:52.984 "data_size": 0 00:32:52.984 } 00:32:52.984 ] 00:32:52.984 }' 00:32:52.984 12:14:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:52.984 12:14:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:53.549 12:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
00:32:53.806 [2024-07-21 12:14:52.559541] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:53.806 [2024-07-21 12:14:52.559711] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:32:53.806 12:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:54.064 [2024-07-21 12:14:52.759596] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:32:54.064 [2024-07-21 12:14:52.759765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:32:54.064 [2024-07-21 12:14:52.759862] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:54.064 [2024-07-21 12:14:52.760012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:54.064 [2024-07-21 12:14:52.760109] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:54.064 [2024-07-21 12:14:52.760163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:54.064 [2024-07-21 12:14:52.760246] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:54.064 [2024-07-21 12:14:52.760303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:54.064 12:14:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:32:54.321 [2024-07-21 12:14:53.014110] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:54.321 BaseBdev1 00:32:54.321 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:32:54.321 12:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:32:54.321 12:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:54.321 12:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:54.321 12:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:54.321 12:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:54.321 12:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:54.578 12:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:32:54.579 [ 00:32:54.579 { 00:32:54.579 "name": "BaseBdev1", 00:32:54.579 "aliases": [ 00:32:54.579 "54c98072-0854-46fd-9382-c37b10fac3d6" 00:32:54.579 ], 00:32:54.579 "product_name": "Malloc disk", 00:32:54.579 "block_size": 512, 00:32:54.579 "num_blocks": 65536, 00:32:54.579 "uuid": "54c98072-0854-46fd-9382-c37b10fac3d6", 00:32:54.579 "assigned_rate_limits": { 00:32:54.579 "rw_ios_per_sec": 0, 00:32:54.579 "rw_mbytes_per_sec": 0, 00:32:54.579 "r_mbytes_per_sec": 0, 00:32:54.579 "w_mbytes_per_sec": 0 00:32:54.579 
}, 00:32:54.579 "claimed": true, 00:32:54.579 "claim_type": "exclusive_write", 00:32:54.579 "zoned": false, 00:32:54.579 "supported_io_types": { 00:32:54.579 "read": true, 00:32:54.579 "write": true, 00:32:54.579 "unmap": true, 00:32:54.579 "write_zeroes": true, 00:32:54.579 "flush": true, 00:32:54.579 "reset": true, 00:32:54.579 "compare": false, 00:32:54.579 "compare_and_write": false, 00:32:54.579 "abort": true, 00:32:54.579 "nvme_admin": false, 00:32:54.579 "nvme_io": false 00:32:54.579 }, 00:32:54.579 "memory_domains": [ 00:32:54.579 { 00:32:54.579 "dma_device_id": "system", 00:32:54.579 "dma_device_type": 1 00:32:54.579 }, 00:32:54.579 { 00:32:54.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:54.579 "dma_device_type": 2 00:32:54.579 } 00:32:54.579 ], 00:32:54.579 "driver_specific": {} 00:32:54.579 } 00:32:54.579 ] 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:54.579 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:54.836 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:54.836 "name": "Existed_Raid", 00:32:54.836 "uuid": "fb9ded5c-d2ba-41c7-bb1c-d8ec1bb7dfb1", 00:32:54.836 "strip_size_kb": 64, 00:32:54.836 "state": "configuring", 00:32:54.836 "raid_level": "raid5f", 00:32:54.836 "superblock": true, 00:32:54.836 "num_base_bdevs": 4, 00:32:54.836 "num_base_bdevs_discovered": 1, 00:32:54.836 "num_base_bdevs_operational": 4, 00:32:54.836 "base_bdevs_list": [ 00:32:54.836 { 00:32:54.836 "name": "BaseBdev1", 00:32:54.836 "uuid": "54c98072-0854-46fd-9382-c37b10fac3d6", 00:32:54.836 "is_configured": true, 00:32:54.836 "data_offset": 2048, 00:32:54.836 "data_size": 63488 00:32:54.836 }, 00:32:54.836 { 00:32:54.836 "name": "BaseBdev2", 00:32:54.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.836 "is_configured": false, 00:32:54.836 "data_offset": 0, 00:32:54.836 "data_size": 0 00:32:54.836 }, 00:32:54.836 { 00:32:54.836 "name": "BaseBdev3", 00:32:54.836 "uuid": "00000000-0000-0000-0000-000000000000", 
00:32:54.836 "is_configured": false, 00:32:54.836 "data_offset": 0, 00:32:54.836 "data_size": 0 00:32:54.836 }, 00:32:54.836 { 00:32:54.836 "name": "BaseBdev4", 00:32:54.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:54.836 "is_configured": false, 00:32:54.836 "data_offset": 0, 00:32:54.836 "data_size": 0 00:32:54.836 } 00:32:54.836 ] 00:32:54.836 }' 00:32:54.836 12:14:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:54.836 12:14:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:55.402 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:32:55.660 [2024-07-21 12:14:54.422369] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:32:55.660 [2024-07-21 12:14:54.422554] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:32:55.660 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:32:55.917 [2024-07-21 12:14:54.694478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:55.917 [2024-07-21 12:14:54.696344] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:32:55.917 [2024-07-21 12:14:54.696519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:32:55.917 [2024-07-21 12:14:54.696620] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:32:55.917 [2024-07-21 12:14:54.696678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:32:55.917 [2024-07-21 12:14:54.696766] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:32:55.917 [2024-07-21 12:14:54.696888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:32:55.917 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:55.918 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:55.918 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:56.175 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:56.175 "name": "Existed_Raid", 00:32:56.175 "uuid": "307d193b-2548-4b06-9ffc-dd345f336765", 00:32:56.175 "strip_size_kb": 64, 00:32:56.175 "state": "configuring", 00:32:56.175 "raid_level": "raid5f", 00:32:56.175 "superblock": true, 00:32:56.175 "num_base_bdevs": 4, 00:32:56.175 "num_base_bdevs_discovered": 1, 00:32:56.175 "num_base_bdevs_operational": 4, 00:32:56.175 "base_bdevs_list": [ 00:32:56.175 { 00:32:56.175 "name": "BaseBdev1", 00:32:56.175 "uuid": "54c98072-0854-46fd-9382-c37b10fac3d6", 00:32:56.175 "is_configured": true, 00:32:56.175 "data_offset": 2048, 00:32:56.175 "data_size": 63488 00:32:56.175 }, 00:32:56.175 { 00:32:56.175 "name": "BaseBdev2", 00:32:56.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.175 "is_configured": false, 00:32:56.175 "data_offset": 0, 00:32:56.175 "data_size": 0 00:32:56.175 }, 00:32:56.175 { 00:32:56.175 "name": "BaseBdev3", 00:32:56.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.175 "is_configured": false, 00:32:56.175 "data_offset": 0, 00:32:56.175 "data_size": 0 00:32:56.175 }, 00:32:56.175 { 00:32:56.175 "name": "BaseBdev4", 00:32:56.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:56.175 "is_configured": false, 00:32:56.175 "data_offset": 0, 00:32:56.175 "data_size": 0 00:32:56.175 } 00:32:56.175 ] 00:32:56.175 }' 00:32:56.175 12:14:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:56.175 12:14:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:56.741 12:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:32:56.999 [2024-07-21 12:14:55.735320] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:56.999 BaseBdev2 00:32:56.999 12:14:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:32:56.999 12:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:32:56.999 12:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:56.999 12:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:56.999 12:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:56.999 12:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:56.999 12:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:57.257 12:14:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:32:57.515 [ 00:32:57.515 { 
00:32:57.515 "name": "BaseBdev2", 00:32:57.515 "aliases": [ 00:32:57.515 "045bf7c9-fe3d-4fe7-91dc-2f00c5ee0ff6" 00:32:57.515 ], 00:32:57.515 "product_name": "Malloc disk", 00:32:57.515 "block_size": 512, 00:32:57.515 "num_blocks": 65536, 00:32:57.515 "uuid": "045bf7c9-fe3d-4fe7-91dc-2f00c5ee0ff6", 00:32:57.515 "assigned_rate_limits": { 00:32:57.515 "rw_ios_per_sec": 0, 00:32:57.515 "rw_mbytes_per_sec": 0, 00:32:57.515 "r_mbytes_per_sec": 0, 00:32:57.515 "w_mbytes_per_sec": 0 00:32:57.515 }, 00:32:57.515 "claimed": true, 00:32:57.515 "claim_type": "exclusive_write", 00:32:57.515 "zoned": false, 00:32:57.515 "supported_io_types": { 00:32:57.515 "read": true, 00:32:57.515 "write": true, 00:32:57.515 "unmap": true, 00:32:57.515 "write_zeroes": true, 00:32:57.515 "flush": true, 00:32:57.515 "reset": true, 00:32:57.515 "compare": false, 00:32:57.515 "compare_and_write": false, 00:32:57.515 "abort": true, 00:32:57.515 "nvme_admin": false, 00:32:57.515 "nvme_io": false 00:32:57.515 }, 00:32:57.515 "memory_domains": [ 00:32:57.515 { 00:32:57.515 "dma_device_id": "system", 00:32:57.515 "dma_device_type": 1 00:32:57.515 }, 00:32:57.515 { 00:32:57.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:57.515 "dma_device_type": 2 00:32:57.515 } 00:32:57.515 ], 00:32:57.515 "driver_specific": {} 00:32:57.515 } 00:32:57.515 ] 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:57.515 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:57.774 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:57.774 "name": "Existed_Raid", 00:32:57.774 "uuid": "307d193b-2548-4b06-9ffc-dd345f336765", 00:32:57.774 "strip_size_kb": 64, 00:32:57.774 "state": "configuring", 00:32:57.774 "raid_level": "raid5f", 00:32:57.774 "superblock": true, 00:32:57.774 
"num_base_bdevs": 4, 00:32:57.774 "num_base_bdevs_discovered": 2, 00:32:57.774 "num_base_bdevs_operational": 4, 00:32:57.774 "base_bdevs_list": [ 00:32:57.774 { 00:32:57.774 "name": "BaseBdev1", 00:32:57.774 "uuid": "54c98072-0854-46fd-9382-c37b10fac3d6", 00:32:57.774 "is_configured": true, 00:32:57.774 "data_offset": 2048, 00:32:57.774 "data_size": 63488 00:32:57.774 }, 00:32:57.774 { 00:32:57.774 "name": "BaseBdev2", 00:32:57.774 "uuid": "045bf7c9-fe3d-4fe7-91dc-2f00c5ee0ff6", 00:32:57.774 "is_configured": true, 00:32:57.774 "data_offset": 2048, 00:32:57.774 "data_size": 63488 00:32:57.774 }, 00:32:57.774 { 00:32:57.774 "name": "BaseBdev3", 00:32:57.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:57.774 "is_configured": false, 00:32:57.774 "data_offset": 0, 00:32:57.774 "data_size": 0 00:32:57.774 }, 00:32:57.774 { 00:32:57.774 "name": "BaseBdev4", 00:32:57.774 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:57.774 "is_configured": false, 00:32:57.774 "data_offset": 0, 00:32:57.774 "data_size": 0 00:32:57.774 } 00:32:57.774 ] 00:32:57.774 }' 00:32:57.774 12:14:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:57.774 12:14:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:58.341 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:32:58.600 [2024-07-21 12:14:57.371163] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:58.600 BaseBdev3 00:32:58.600 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:32:58.600 12:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:32:58.600 12:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:58.600 12:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:58.600 12:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:58.600 12:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:58.600 12:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:32:58.858 12:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:32:59.116 [ 00:32:59.116 { 00:32:59.116 "name": "BaseBdev3", 00:32:59.116 "aliases": [ 00:32:59.116 "4e9854a0-2cbc-4f27-8f69-e0ea92430681" 00:32:59.116 ], 00:32:59.116 "product_name": "Malloc disk", 00:32:59.116 "block_size": 512, 00:32:59.116 "num_blocks": 65536, 00:32:59.116 "uuid": "4e9854a0-2cbc-4f27-8f69-e0ea92430681", 00:32:59.116 "assigned_rate_limits": { 00:32:59.116 "rw_ios_per_sec": 0, 00:32:59.116 "rw_mbytes_per_sec": 0, 00:32:59.116 "r_mbytes_per_sec": 0, 00:32:59.116 "w_mbytes_per_sec": 0 00:32:59.116 }, 00:32:59.116 "claimed": true, 00:32:59.116 "claim_type": "exclusive_write", 00:32:59.116 "zoned": false, 00:32:59.116 "supported_io_types": { 00:32:59.116 "read": true, 00:32:59.116 "write": true, 00:32:59.116 "unmap": true, 00:32:59.116 "write_zeroes": true, 00:32:59.116 "flush": true, 
00:32:59.116 "reset": true, 00:32:59.116 "compare": false, 00:32:59.116 "compare_and_write": false, 00:32:59.116 "abort": true, 00:32:59.116 "nvme_admin": false, 00:32:59.116 "nvme_io": false 00:32:59.116 }, 00:32:59.116 "memory_domains": [ 00:32:59.116 { 00:32:59.116 "dma_device_id": "system", 00:32:59.116 "dma_device_type": 1 00:32:59.116 }, 00:32:59.116 { 00:32:59.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:32:59.116 "dma_device_type": 2 00:32:59.116 } 00:32:59.116 ], 00:32:59.116 "driver_specific": {} 00:32:59.116 } 00:32:59.116 ] 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:32:59.116 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:32:59.375 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:32:59.375 "name": "Existed_Raid", 00:32:59.375 "uuid": "307d193b-2548-4b06-9ffc-dd345f336765", 00:32:59.375 "strip_size_kb": 64, 00:32:59.375 "state": "configuring", 00:32:59.375 "raid_level": "raid5f", 00:32:59.375 "superblock": true, 00:32:59.375 "num_base_bdevs": 4, 00:32:59.375 "num_base_bdevs_discovered": 3, 00:32:59.375 "num_base_bdevs_operational": 4, 00:32:59.375 "base_bdevs_list": [ 00:32:59.375 { 00:32:59.375 "name": "BaseBdev1", 00:32:59.375 "uuid": "54c98072-0854-46fd-9382-c37b10fac3d6", 00:32:59.375 "is_configured": true, 00:32:59.375 "data_offset": 2048, 00:32:59.375 "data_size": 63488 00:32:59.375 }, 00:32:59.375 { 00:32:59.375 "name": "BaseBdev2", 00:32:59.375 "uuid": "045bf7c9-fe3d-4fe7-91dc-2f00c5ee0ff6", 00:32:59.375 "is_configured": true, 00:32:59.375 "data_offset": 2048, 00:32:59.375 "data_size": 63488 00:32:59.375 }, 00:32:59.375 { 00:32:59.375 "name": "BaseBdev3", 00:32:59.375 "uuid": "4e9854a0-2cbc-4f27-8f69-e0ea92430681", 00:32:59.375 "is_configured": true, 00:32:59.375 "data_offset": 2048, 
00:32:59.375 "data_size": 63488 00:32:59.375 }, 00:32:59.375 { 00:32:59.375 "name": "BaseBdev4", 00:32:59.375 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.375 "is_configured": false, 00:32:59.375 "data_offset": 0, 00:32:59.375 "data_size": 0 00:32:59.375 } 00:32:59.375 ] 00:32:59.375 }' 00:32:59.375 12:14:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:32:59.375 12:14:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:59.943 12:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:32:59.943 [2024-07-21 12:14:58.770917] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:59.943 [2024-07-21 12:14:58.771346] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:32:59.943 [2024-07-21 12:14:58.771498] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:32:59.943 [2024-07-21 12:14:58.771669] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:32:59.943 BaseBdev4 00:32:59.943 [2024-07-21 12:14:58.772523] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:32:59.943 [2024-07-21 12:14:58.772673] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:32:59.943 [2024-07-21 12:14:58.772932] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:59.943 12:14:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:32:59.943 12:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:32:59.943 12:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:32:59.943 12:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:32:59.943 12:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:32:59.943 12:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:32:59.943 12:14:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:00.201 12:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:00.459 [ 00:33:00.459 { 00:33:00.459 "name": "BaseBdev4", 00:33:00.459 "aliases": [ 00:33:00.459 "9252de64-0f66-40b5-bccd-2a1951024f8c" 00:33:00.459 ], 00:33:00.459 "product_name": "Malloc disk", 00:33:00.459 "block_size": 512, 00:33:00.459 "num_blocks": 65536, 00:33:00.459 "uuid": "9252de64-0f66-40b5-bccd-2a1951024f8c", 00:33:00.459 "assigned_rate_limits": { 00:33:00.459 "rw_ios_per_sec": 0, 00:33:00.459 "rw_mbytes_per_sec": 0, 00:33:00.459 "r_mbytes_per_sec": 0, 00:33:00.459 "w_mbytes_per_sec": 0 00:33:00.459 }, 00:33:00.459 "claimed": true, 00:33:00.459 "claim_type": "exclusive_write", 00:33:00.459 "zoned": false, 00:33:00.459 "supported_io_types": { 00:33:00.459 "read": true, 00:33:00.459 "write": true, 00:33:00.459 "unmap": true, 00:33:00.459 "write_zeroes": true, 00:33:00.459 "flush": true, 
00:33:00.459 "reset": true, 00:33:00.459 "compare": false, 00:33:00.459 "compare_and_write": false, 00:33:00.459 "abort": true, 00:33:00.459 "nvme_admin": false, 00:33:00.459 "nvme_io": false 00:33:00.459 }, 00:33:00.459 "memory_domains": [ 00:33:00.459 { 00:33:00.459 "dma_device_id": "system", 00:33:00.459 "dma_device_type": 1 00:33:00.459 }, 00:33:00.459 { 00:33:00.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:00.459 "dma_device_type": 2 00:33:00.459 } 00:33:00.459 ], 00:33:00.459 "driver_specific": {} 00:33:00.459 } 00:33:00.459 ] 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:00.459 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:00.718 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:00.718 "name": "Existed_Raid", 00:33:00.718 "uuid": "307d193b-2548-4b06-9ffc-dd345f336765", 00:33:00.718 "strip_size_kb": 64, 00:33:00.718 "state": "online", 00:33:00.718 "raid_level": "raid5f", 00:33:00.718 "superblock": true, 00:33:00.718 "num_base_bdevs": 4, 00:33:00.718 "num_base_bdevs_discovered": 4, 00:33:00.718 "num_base_bdevs_operational": 4, 00:33:00.718 "base_bdevs_list": [ 00:33:00.718 { 00:33:00.718 "name": "BaseBdev1", 00:33:00.718 "uuid": "54c98072-0854-46fd-9382-c37b10fac3d6", 00:33:00.718 "is_configured": true, 00:33:00.718 "data_offset": 2048, 00:33:00.718 "data_size": 63488 00:33:00.718 }, 00:33:00.718 { 00:33:00.718 "name": "BaseBdev2", 00:33:00.718 "uuid": "045bf7c9-fe3d-4fe7-91dc-2f00c5ee0ff6", 00:33:00.718 "is_configured": true, 00:33:00.718 "data_offset": 2048, 00:33:00.718 "data_size": 63488 00:33:00.718 }, 00:33:00.718 { 00:33:00.718 "name": "BaseBdev3", 00:33:00.718 "uuid": "4e9854a0-2cbc-4f27-8f69-e0ea92430681", 00:33:00.718 "is_configured": true, 00:33:00.718 "data_offset": 2048, 00:33:00.718 
"data_size": 63488 00:33:00.718 }, 00:33:00.718 { 00:33:00.718 "name": "BaseBdev4", 00:33:00.718 "uuid": "9252de64-0f66-40b5-bccd-2a1951024f8c", 00:33:00.718 "is_configured": true, 00:33:00.718 "data_offset": 2048, 00:33:00.718 "data_size": 63488 00:33:00.718 } 00:33:00.718 ] 00:33:00.718 }' 00:33:00.718 12:14:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:00.718 12:14:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:01.286 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:33:01.286 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:01.286 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:01.286 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:01.286 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:01.286 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:33:01.286 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:01.286 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:01.544 [2024-07-21 12:15:00.264068] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:01.545 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:01.545 "name": "Existed_Raid", 00:33:01.545 "aliases": [ 00:33:01.545 "307d193b-2548-4b06-9ffc-dd345f336765" 00:33:01.545 ], 00:33:01.545 "product_name": "Raid Volume", 00:33:01.545 "block_size": 512, 00:33:01.545 "num_blocks": 190464, 00:33:01.545 "uuid": "307d193b-2548-4b06-9ffc-dd345f336765", 00:33:01.545 "assigned_rate_limits": { 00:33:01.545 "rw_ios_per_sec": 0, 00:33:01.545 "rw_mbytes_per_sec": 0, 00:33:01.545 "r_mbytes_per_sec": 0, 00:33:01.545 "w_mbytes_per_sec": 0 00:33:01.545 }, 00:33:01.545 "claimed": false, 00:33:01.545 "zoned": false, 00:33:01.545 "supported_io_types": { 00:33:01.545 "read": true, 00:33:01.545 "write": true, 00:33:01.545 "unmap": false, 00:33:01.545 "write_zeroes": true, 00:33:01.545 "flush": false, 00:33:01.545 "reset": true, 00:33:01.545 "compare": false, 00:33:01.545 "compare_and_write": false, 00:33:01.545 "abort": false, 00:33:01.545 "nvme_admin": false, 00:33:01.545 "nvme_io": false 00:33:01.545 }, 00:33:01.545 "driver_specific": { 00:33:01.545 "raid": { 00:33:01.545 "uuid": "307d193b-2548-4b06-9ffc-dd345f336765", 00:33:01.545 "strip_size_kb": 64, 00:33:01.545 "state": "online", 00:33:01.545 "raid_level": "raid5f", 00:33:01.545 "superblock": true, 00:33:01.545 "num_base_bdevs": 4, 00:33:01.545 "num_base_bdevs_discovered": 4, 00:33:01.545 "num_base_bdevs_operational": 4, 00:33:01.545 "base_bdevs_list": [ 00:33:01.545 { 00:33:01.545 "name": "BaseBdev1", 00:33:01.545 "uuid": "54c98072-0854-46fd-9382-c37b10fac3d6", 00:33:01.545 "is_configured": true, 00:33:01.545 "data_offset": 2048, 00:33:01.545 "data_size": 63488 00:33:01.545 }, 00:33:01.545 { 00:33:01.545 "name": "BaseBdev2", 00:33:01.545 "uuid": "045bf7c9-fe3d-4fe7-91dc-2f00c5ee0ff6", 00:33:01.545 "is_configured": true, 00:33:01.545 "data_offset": 2048, 00:33:01.545 "data_size": 63488 00:33:01.545 
}, 00:33:01.545 { 00:33:01.545 "name": "BaseBdev3", 00:33:01.545 "uuid": "4e9854a0-2cbc-4f27-8f69-e0ea92430681", 00:33:01.545 "is_configured": true, 00:33:01.545 "data_offset": 2048, 00:33:01.545 "data_size": 63488 00:33:01.545 }, 00:33:01.545 { 00:33:01.545 "name": "BaseBdev4", 00:33:01.545 "uuid": "9252de64-0f66-40b5-bccd-2a1951024f8c", 00:33:01.545 "is_configured": true, 00:33:01.545 "data_offset": 2048, 00:33:01.545 "data_size": 63488 00:33:01.545 } 00:33:01.545 ] 00:33:01.545 } 00:33:01.545 } 00:33:01.545 }' 00:33:01.545 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:01.545 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:33:01.545 BaseBdev2 00:33:01.545 BaseBdev3 00:33:01.545 BaseBdev4' 00:33:01.545 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:01.545 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:33:01.545 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:01.804 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:01.804 "name": "BaseBdev1", 00:33:01.804 "aliases": [ 00:33:01.804 "54c98072-0854-46fd-9382-c37b10fac3d6" 00:33:01.804 ], 00:33:01.804 "product_name": "Malloc disk", 00:33:01.804 "block_size": 512, 00:33:01.804 "num_blocks": 65536, 00:33:01.804 "uuid": "54c98072-0854-46fd-9382-c37b10fac3d6", 00:33:01.804 "assigned_rate_limits": { 00:33:01.804 "rw_ios_per_sec": 0, 00:33:01.804 "rw_mbytes_per_sec": 0, 00:33:01.804 "r_mbytes_per_sec": 0, 00:33:01.804 "w_mbytes_per_sec": 0 00:33:01.804 }, 00:33:01.804 "claimed": true, 00:33:01.804 "claim_type": "exclusive_write", 00:33:01.804 "zoned": false, 00:33:01.804 "supported_io_types": { 00:33:01.804 "read": true, 00:33:01.804 "write": true, 00:33:01.804 "unmap": true, 00:33:01.804 "write_zeroes": true, 00:33:01.804 "flush": true, 00:33:01.804 "reset": true, 00:33:01.804 "compare": false, 00:33:01.804 "compare_and_write": false, 00:33:01.804 "abort": true, 00:33:01.804 "nvme_admin": false, 00:33:01.804 "nvme_io": false 00:33:01.804 }, 00:33:01.804 "memory_domains": [ 00:33:01.804 { 00:33:01.804 "dma_device_id": "system", 00:33:01.804 "dma_device_type": 1 00:33:01.804 }, 00:33:01.804 { 00:33:01.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:01.804 "dma_device_type": 2 00:33:01.804 } 00:33:01.804 ], 00:33:01.804 "driver_specific": {} 00:33:01.804 }' 00:33:01.804 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.804 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:01.804 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:01.804 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:02.063 12:15:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:02.631 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:02.631 "name": "BaseBdev2", 00:33:02.631 "aliases": [ 00:33:02.631 "045bf7c9-fe3d-4fe7-91dc-2f00c5ee0ff6" 00:33:02.631 ], 00:33:02.631 "product_name": "Malloc disk", 00:33:02.631 "block_size": 512, 00:33:02.631 "num_blocks": 65536, 00:33:02.631 "uuid": "045bf7c9-fe3d-4fe7-91dc-2f00c5ee0ff6", 00:33:02.631 "assigned_rate_limits": { 00:33:02.631 "rw_ios_per_sec": 0, 00:33:02.631 "rw_mbytes_per_sec": 0, 00:33:02.631 "r_mbytes_per_sec": 0, 00:33:02.631 "w_mbytes_per_sec": 0 00:33:02.631 }, 00:33:02.631 "claimed": true, 00:33:02.631 "claim_type": "exclusive_write", 00:33:02.631 "zoned": false, 00:33:02.631 "supported_io_types": { 00:33:02.631 "read": true, 00:33:02.631 "write": true, 00:33:02.631 "unmap": true, 00:33:02.631 "write_zeroes": true, 00:33:02.631 "flush": true, 00:33:02.631 "reset": true, 00:33:02.631 "compare": false, 00:33:02.631 "compare_and_write": false, 00:33:02.631 "abort": true, 00:33:02.631 "nvme_admin": false, 00:33:02.631 "nvme_io": false 00:33:02.631 }, 00:33:02.631 "memory_domains": [ 00:33:02.631 { 00:33:02.631 "dma_device_id": "system", 00:33:02.631 "dma_device_type": 1 00:33:02.631 }, 00:33:02.631 { 00:33:02.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:02.631 "dma_device_type": 2 00:33:02.631 } 00:33:02.631 ], 00:33:02.631 "driver_specific": {} 00:33:02.631 }' 00:33:02.631 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:02.631 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:02.631 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:02.631 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:02.631 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:02.631 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:02.631 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:02.631 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:02.889 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:02.889 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:02.889 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:02.889 12:15:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:02.889 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:02.889 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:02.889 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:03.147 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:03.147 "name": "BaseBdev3", 00:33:03.147 "aliases": [ 00:33:03.147 "4e9854a0-2cbc-4f27-8f69-e0ea92430681" 00:33:03.147 ], 00:33:03.147 "product_name": "Malloc disk", 00:33:03.147 "block_size": 512, 00:33:03.147 "num_blocks": 65536, 00:33:03.147 "uuid": "4e9854a0-2cbc-4f27-8f69-e0ea92430681", 00:33:03.147 "assigned_rate_limits": { 00:33:03.147 "rw_ios_per_sec": 0, 00:33:03.147 "rw_mbytes_per_sec": 0, 00:33:03.147 "r_mbytes_per_sec": 0, 00:33:03.147 "w_mbytes_per_sec": 0 00:33:03.147 }, 00:33:03.147 "claimed": true, 00:33:03.147 "claim_type": "exclusive_write", 00:33:03.147 "zoned": false, 00:33:03.147 "supported_io_types": { 00:33:03.147 "read": true, 00:33:03.147 "write": true, 00:33:03.147 "unmap": true, 00:33:03.147 "write_zeroes": true, 00:33:03.147 "flush": true, 00:33:03.147 "reset": true, 00:33:03.147 "compare": false, 00:33:03.147 "compare_and_write": false, 00:33:03.147 "abort": true, 00:33:03.147 "nvme_admin": false, 00:33:03.147 "nvme_io": false 00:33:03.147 }, 00:33:03.147 "memory_domains": [ 00:33:03.147 { 00:33:03.147 "dma_device_id": "system", 00:33:03.147 "dma_device_type": 1 00:33:03.147 }, 00:33:03.147 { 00:33:03.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.147 "dma_device_type": 2 00:33:03.147 } 00:33:03.147 ], 00:33:03.147 "driver_specific": {} 00:33:03.147 }' 00:33:03.147 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:03.147 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:03.147 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:03.147 12:15:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:03.405 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:03.405 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:03.405 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:03.405 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:03.405 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:03.405 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:03.405 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:03.663 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:03.663 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:03.663 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:03.663 
12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:03.921 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:03.921 "name": "BaseBdev4", 00:33:03.921 "aliases": [ 00:33:03.921 "9252de64-0f66-40b5-bccd-2a1951024f8c" 00:33:03.921 ], 00:33:03.921 "product_name": "Malloc disk", 00:33:03.921 "block_size": 512, 00:33:03.921 "num_blocks": 65536, 00:33:03.921 "uuid": "9252de64-0f66-40b5-bccd-2a1951024f8c", 00:33:03.921 "assigned_rate_limits": { 00:33:03.921 "rw_ios_per_sec": 0, 00:33:03.921 "rw_mbytes_per_sec": 0, 00:33:03.921 "r_mbytes_per_sec": 0, 00:33:03.921 "w_mbytes_per_sec": 0 00:33:03.921 }, 00:33:03.921 "claimed": true, 00:33:03.921 "claim_type": "exclusive_write", 00:33:03.921 "zoned": false, 00:33:03.921 "supported_io_types": { 00:33:03.921 "read": true, 00:33:03.921 "write": true, 00:33:03.921 "unmap": true, 00:33:03.921 "write_zeroes": true, 00:33:03.921 "flush": true, 00:33:03.921 "reset": true, 00:33:03.921 "compare": false, 00:33:03.921 "compare_and_write": false, 00:33:03.921 "abort": true, 00:33:03.921 "nvme_admin": false, 00:33:03.921 "nvme_io": false 00:33:03.921 }, 00:33:03.921 "memory_domains": [ 00:33:03.921 { 00:33:03.921 "dma_device_id": "system", 00:33:03.921 "dma_device_type": 1 00:33:03.921 }, 00:33:03.921 { 00:33:03.921 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:03.921 "dma_device_type": 2 00:33:03.921 } 00:33:03.921 ], 00:33:03.921 "driver_specific": {} 00:33:03.921 }' 00:33:03.921 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:03.921 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:03.921 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:03.921 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:03.921 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:03.921 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:03.921 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:04.180 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:04.180 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:04.180 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:04.180 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:04.180 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:04.180 12:15:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:04.438 [2024-07-21 12:15:03.216649] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid5f 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:33:04.438 
12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:04.438 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:04.706 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:04.706 "name": "Existed_Raid", 00:33:04.706 "uuid": "307d193b-2548-4b06-9ffc-dd345f336765", 00:33:04.706 "strip_size_kb": 64, 00:33:04.706 "state": "online", 00:33:04.706 "raid_level": "raid5f", 00:33:04.706 "superblock": true, 00:33:04.706 "num_base_bdevs": 4, 00:33:04.706 "num_base_bdevs_discovered": 3, 00:33:04.706 "num_base_bdevs_operational": 3, 00:33:04.706 "base_bdevs_list": [ 00:33:04.706 { 00:33:04.706 "name": null, 00:33:04.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:04.706 "is_configured": false, 00:33:04.706 "data_offset": 2048, 00:33:04.706 "data_size": 63488 00:33:04.706 }, 00:33:04.706 { 00:33:04.706 "name": "BaseBdev2", 00:33:04.706 "uuid": "045bf7c9-fe3d-4fe7-91dc-2f00c5ee0ff6", 00:33:04.706 "is_configured": true, 00:33:04.706 "data_offset": 2048, 00:33:04.706 "data_size": 63488 00:33:04.706 }, 00:33:04.706 { 00:33:04.706 "name": "BaseBdev3", 00:33:04.706 "uuid": "4e9854a0-2cbc-4f27-8f69-e0ea92430681", 00:33:04.706 "is_configured": true, 00:33:04.706 "data_offset": 2048, 00:33:04.706 "data_size": 63488 00:33:04.706 }, 00:33:04.706 { 00:33:04.706 "name": "BaseBdev4", 00:33:04.706 "uuid": "9252de64-0f66-40b5-bccd-2a1951024f8c", 00:33:04.706 "is_configured": true, 00:33:04.706 "data_offset": 2048, 00:33:04.706 "data_size": 63488 00:33:04.706 } 00:33:04.706 ] 00:33:04.706 }' 00:33:04.706 12:15:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:04.706 12:15:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.335 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:33:05.335 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:05.335 12:15:04 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.335 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:05.607 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:05.607 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:05.607 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:33:05.865 [2024-07-21 12:15:04.561982] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:05.865 [2024-07-21 12:15:04.562310] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:05.865 [2024-07-21 12:15:04.575444] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:05.865 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:05.865 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:05.865 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:05.865 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:06.123 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:06.123 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:06.123 12:15:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:33:06.381 [2024-07-21 12:15:05.007591] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:06.381 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:06.381 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:33:06.381 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.381 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:33:06.639 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:33:06.639 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:06.639 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:33:06.898 [2024-07-21 12:15:05.529358] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:33:06.898 [2024-07-21 12:15:05.529543] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:33:06.898 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:33:06.898 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < 
num_base_bdevs )) 00:33:06.898 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:06.898 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:33:06.898 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:33:06.898 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:33:06.898 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:33:06.898 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:33:06.898 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:06.899 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:33:07.157 BaseBdev2 00:33:07.157 12:15:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:33:07.157 12:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:33:07.157 12:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:07.157 12:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:07.157 12:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:07.157 12:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:07.157 12:15:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:07.416 12:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:07.674 [ 00:33:07.674 { 00:33:07.674 "name": "BaseBdev2", 00:33:07.674 "aliases": [ 00:33:07.674 "122b842d-b545-4839-a102-c7027be6908f" 00:33:07.674 ], 00:33:07.674 "product_name": "Malloc disk", 00:33:07.674 "block_size": 512, 00:33:07.674 "num_blocks": 65536, 00:33:07.674 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:07.674 "assigned_rate_limits": { 00:33:07.674 "rw_ios_per_sec": 0, 00:33:07.674 "rw_mbytes_per_sec": 0, 00:33:07.674 "r_mbytes_per_sec": 0, 00:33:07.674 "w_mbytes_per_sec": 0 00:33:07.674 }, 00:33:07.674 "claimed": false, 00:33:07.674 "zoned": false, 00:33:07.674 "supported_io_types": { 00:33:07.674 "read": true, 00:33:07.674 "write": true, 00:33:07.674 "unmap": true, 00:33:07.674 "write_zeroes": true, 00:33:07.674 "flush": true, 00:33:07.674 "reset": true, 00:33:07.674 "compare": false, 00:33:07.674 "compare_and_write": false, 00:33:07.674 "abort": true, 00:33:07.674 "nvme_admin": false, 00:33:07.674 "nvme_io": false 00:33:07.674 }, 00:33:07.674 "memory_domains": [ 00:33:07.674 { 00:33:07.674 "dma_device_id": "system", 00:33:07.674 "dma_device_type": 1 00:33:07.674 }, 00:33:07.674 { 00:33:07.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:07.674 "dma_device_type": 2 00:33:07.674 } 00:33:07.674 ], 00:33:07.674 "driver_specific": {} 00:33:07.674 } 00:33:07.674 ] 00:33:07.674 12:15:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:07.674 12:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:07.674 12:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:07.674 12:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:33:07.933 BaseBdev3 00:33:07.933 12:15:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:33:07.933 12:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev3 00:33:07.933 12:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:07.933 12:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:07.933 12:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:07.933 12:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:07.933 12:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:08.191 12:15:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:08.449 [ 00:33:08.449 { 00:33:08.449 "name": "BaseBdev3", 00:33:08.449 "aliases": [ 00:33:08.449 "c846dc28-24b1-4abb-9554-1caf5a320a89" 00:33:08.449 ], 00:33:08.449 "product_name": "Malloc disk", 00:33:08.449 "block_size": 512, 00:33:08.449 "num_blocks": 65536, 00:33:08.449 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:08.449 "assigned_rate_limits": { 00:33:08.449 "rw_ios_per_sec": 0, 00:33:08.449 "rw_mbytes_per_sec": 0, 00:33:08.449 "r_mbytes_per_sec": 0, 00:33:08.449 "w_mbytes_per_sec": 0 00:33:08.449 }, 00:33:08.449 "claimed": false, 00:33:08.449 "zoned": false, 00:33:08.449 "supported_io_types": { 00:33:08.449 "read": true, 00:33:08.449 "write": true, 00:33:08.449 "unmap": true, 00:33:08.449 "write_zeroes": true, 00:33:08.449 "flush": true, 00:33:08.449 "reset": true, 00:33:08.449 "compare": false, 00:33:08.449 "compare_and_write": false, 00:33:08.449 "abort": true, 00:33:08.449 "nvme_admin": false, 00:33:08.449 "nvme_io": false 00:33:08.449 }, 00:33:08.449 "memory_domains": [ 00:33:08.449 { 00:33:08.449 "dma_device_id": "system", 00:33:08.449 "dma_device_type": 1 00:33:08.449 }, 00:33:08.449 { 00:33:08.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:08.449 "dma_device_type": 2 00:33:08.449 } 00:33:08.449 ], 00:33:08.449 "driver_specific": {} 00:33:08.449 } 00:33:08.449 ] 00:33:08.449 12:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:08.449 12:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:08.449 12:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:08.449 12:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:33:08.707 BaseBdev4 00:33:08.707 12:15:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:33:08.707 12:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev4 00:33:08.707 12:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:08.707 12:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:08.707 12:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:08.707 12:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:08.707 12:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:08.965 12:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:33:08.965 [ 00:33:08.965 { 00:33:08.965 "name": "BaseBdev4", 00:33:08.965 "aliases": [ 00:33:08.965 "c6431819-c89a-4578-934c-440ed0374ac8" 00:33:08.965 ], 00:33:08.965 "product_name": "Malloc disk", 00:33:08.965 "block_size": 512, 00:33:08.965 "num_blocks": 65536, 00:33:08.965 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:08.965 "assigned_rate_limits": { 00:33:08.965 "rw_ios_per_sec": 0, 00:33:08.965 "rw_mbytes_per_sec": 0, 00:33:08.965 "r_mbytes_per_sec": 0, 00:33:08.965 "w_mbytes_per_sec": 0 00:33:08.965 }, 00:33:08.965 "claimed": false, 00:33:08.965 "zoned": false, 00:33:08.965 "supported_io_types": { 00:33:08.965 "read": true, 00:33:08.965 "write": true, 00:33:08.965 "unmap": true, 00:33:08.965 "write_zeroes": true, 00:33:08.965 "flush": true, 00:33:08.965 "reset": true, 00:33:08.965 "compare": false, 00:33:08.965 "compare_and_write": false, 00:33:08.965 "abort": true, 00:33:08.965 "nvme_admin": false, 00:33:08.965 "nvme_io": false 00:33:08.965 }, 00:33:08.965 "memory_domains": [ 00:33:08.965 { 00:33:08.965 "dma_device_id": "system", 00:33:08.965 "dma_device_type": 1 00:33:08.965 }, 00:33:08.965 { 00:33:08.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:08.965 "dma_device_type": 2 00:33:08.965 } 00:33:08.965 ], 00:33:08.965 "driver_specific": {} 00:33:08.965 } 00:33:08.965 ] 00:33:08.965 12:15:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:08.965 12:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:33:08.965 12:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:33:08.966 12:15:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:33:09.224 [2024-07-21 12:15:07.998162] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:09.224 [2024-07-21 12:15:07.998357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:09.224 [2024-07-21 12:15:07.998481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:09.224 [2024-07-21 12:15:08.000397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:09.224 [2024-07-21 12:15:08.000583] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:09.224 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:09.482 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:09.482 "name": "Existed_Raid", 00:33:09.482 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:09.482 "strip_size_kb": 64, 00:33:09.482 "state": "configuring", 00:33:09.482 "raid_level": "raid5f", 00:33:09.482 "superblock": true, 00:33:09.482 "num_base_bdevs": 4, 00:33:09.482 "num_base_bdevs_discovered": 3, 00:33:09.482 "num_base_bdevs_operational": 4, 00:33:09.482 "base_bdevs_list": [ 00:33:09.482 { 00:33:09.482 "name": "BaseBdev1", 00:33:09.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.482 "is_configured": false, 00:33:09.482 "data_offset": 0, 00:33:09.482 "data_size": 0 00:33:09.482 }, 00:33:09.482 { 00:33:09.482 "name": "BaseBdev2", 00:33:09.482 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:09.482 "is_configured": true, 00:33:09.482 "data_offset": 2048, 00:33:09.482 "data_size": 63488 00:33:09.482 }, 00:33:09.482 { 00:33:09.482 "name": "BaseBdev3", 00:33:09.482 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:09.482 "is_configured": true, 00:33:09.482 "data_offset": 2048, 00:33:09.482 "data_size": 63488 00:33:09.482 }, 00:33:09.482 { 00:33:09.482 "name": "BaseBdev4", 00:33:09.482 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:09.482 "is_configured": true, 00:33:09.482 "data_offset": 2048, 00:33:09.482 "data_size": 63488 00:33:09.482 } 00:33:09.482 ] 00:33:09.482 }' 00:33:09.482 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:09.482 12:15:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.048 12:15:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:33:10.306 [2024-07-21 12:15:09.082339] 
bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:10.306 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:10.564 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:10.564 "name": "Existed_Raid", 00:33:10.564 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:10.564 "strip_size_kb": 64, 00:33:10.564 "state": "configuring", 00:33:10.564 "raid_level": "raid5f", 00:33:10.564 "superblock": true, 00:33:10.564 "num_base_bdevs": 4, 00:33:10.564 "num_base_bdevs_discovered": 2, 00:33:10.564 "num_base_bdevs_operational": 4, 00:33:10.564 "base_bdevs_list": [ 00:33:10.564 { 00:33:10.564 "name": "BaseBdev1", 00:33:10.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.564 "is_configured": false, 00:33:10.564 "data_offset": 0, 00:33:10.564 "data_size": 0 00:33:10.564 }, 00:33:10.564 { 00:33:10.564 "name": null, 00:33:10.564 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:10.564 "is_configured": false, 00:33:10.564 "data_offset": 2048, 00:33:10.564 "data_size": 63488 00:33:10.564 }, 00:33:10.564 { 00:33:10.564 "name": "BaseBdev3", 00:33:10.564 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:10.564 "is_configured": true, 00:33:10.564 "data_offset": 2048, 00:33:10.564 "data_size": 63488 00:33:10.564 }, 00:33:10.564 { 00:33:10.564 "name": "BaseBdev4", 00:33:10.564 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:10.564 "is_configured": true, 00:33:10.564 "data_offset": 2048, 00:33:10.564 "data_size": 63488 00:33:10.564 } 00:33:10.564 ] 00:33:10.564 }' 00:33:10.564 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:10.564 12:15:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.130 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:11.130 12:15:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:33:11.389 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:33:11.389 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:33:11.647 [2024-07-21 12:15:10.304670] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:11.647 BaseBdev1 00:33:11.647 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:33:11.647 12:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:33:11.647 12:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:11.647 12:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:11.647 12:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:11.647 12:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:11.647 12:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:11.904 12:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:12.162 [ 00:33:12.162 { 00:33:12.162 "name": "BaseBdev1", 00:33:12.162 "aliases": [ 00:33:12.162 "defe5c28-d3fb-4a45-86d8-9df655f90b0c" 00:33:12.162 ], 00:33:12.162 "product_name": "Malloc disk", 00:33:12.162 "block_size": 512, 00:33:12.162 "num_blocks": 65536, 00:33:12.162 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:12.162 "assigned_rate_limits": { 00:33:12.162 "rw_ios_per_sec": 0, 00:33:12.162 "rw_mbytes_per_sec": 0, 00:33:12.162 "r_mbytes_per_sec": 0, 00:33:12.162 "w_mbytes_per_sec": 0 00:33:12.162 }, 00:33:12.162 "claimed": true, 00:33:12.163 "claim_type": "exclusive_write", 00:33:12.163 "zoned": false, 00:33:12.163 "supported_io_types": { 00:33:12.163 "read": true, 00:33:12.163 "write": true, 00:33:12.163 "unmap": true, 00:33:12.163 "write_zeroes": true, 00:33:12.163 "flush": true, 00:33:12.163 "reset": true, 00:33:12.163 "compare": false, 00:33:12.163 "compare_and_write": false, 00:33:12.163 "abort": true, 00:33:12.163 "nvme_admin": false, 00:33:12.163 "nvme_io": false 00:33:12.163 }, 00:33:12.163 "memory_domains": [ 00:33:12.163 { 00:33:12.163 "dma_device_id": "system", 00:33:12.163 "dma_device_type": 1 00:33:12.163 }, 00:33:12.163 { 00:33:12.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:12.163 "dma_device_type": 2 00:33:12.163 } 00:33:12.163 ], 00:33:12.163 "driver_specific": {} 00:33:12.163 } 00:33:12.163 ] 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:12.163 "name": "Existed_Raid", 00:33:12.163 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:12.163 "strip_size_kb": 64, 00:33:12.163 "state": "configuring", 00:33:12.163 "raid_level": "raid5f", 00:33:12.163 "superblock": true, 00:33:12.163 "num_base_bdevs": 4, 00:33:12.163 "num_base_bdevs_discovered": 3, 00:33:12.163 "num_base_bdevs_operational": 4, 00:33:12.163 "base_bdevs_list": [ 00:33:12.163 { 00:33:12.163 "name": "BaseBdev1", 00:33:12.163 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:12.163 "is_configured": true, 00:33:12.163 "data_offset": 2048, 00:33:12.163 "data_size": 63488 00:33:12.163 }, 00:33:12.163 { 00:33:12.163 "name": null, 00:33:12.163 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:12.163 "is_configured": false, 00:33:12.163 "data_offset": 2048, 00:33:12.163 "data_size": 63488 00:33:12.163 }, 00:33:12.163 { 00:33:12.163 "name": "BaseBdev3", 00:33:12.163 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:12.163 "is_configured": true, 00:33:12.163 "data_offset": 2048, 00:33:12.163 "data_size": 63488 00:33:12.163 }, 00:33:12.163 { 00:33:12.163 "name": "BaseBdev4", 00:33:12.163 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:12.163 "is_configured": true, 00:33:12.163 "data_offset": 2048, 00:33:12.163 "data_size": 63488 00:33:12.163 } 00:33:12.163 ] 00:33:12.163 }' 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:12.163 12:15:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:13.097 12:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.097 12:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:13.097 12:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:33:13.097 12:15:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:33:13.355 [2024-07-21 12:15:12.053010] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 
-- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:13.355 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:13.613 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:13.613 "name": "Existed_Raid", 00:33:13.613 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:13.613 "strip_size_kb": 64, 00:33:13.613 "state": "configuring", 00:33:13.613 "raid_level": "raid5f", 00:33:13.613 "superblock": true, 00:33:13.613 "num_base_bdevs": 4, 00:33:13.613 "num_base_bdevs_discovered": 2, 00:33:13.613 "num_base_bdevs_operational": 4, 00:33:13.613 "base_bdevs_list": [ 00:33:13.613 { 00:33:13.613 "name": "BaseBdev1", 00:33:13.613 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:13.613 "is_configured": true, 00:33:13.613 "data_offset": 2048, 00:33:13.613 "data_size": 63488 00:33:13.613 }, 00:33:13.613 { 00:33:13.613 "name": null, 00:33:13.613 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:13.613 "is_configured": false, 00:33:13.613 "data_offset": 2048, 00:33:13.613 "data_size": 63488 00:33:13.613 }, 00:33:13.613 { 00:33:13.613 "name": null, 00:33:13.613 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:13.613 "is_configured": false, 00:33:13.613 "data_offset": 2048, 00:33:13.613 "data_size": 63488 00:33:13.613 }, 00:33:13.613 { 00:33:13.613 "name": "BaseBdev4", 00:33:13.613 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:13.613 "is_configured": true, 00:33:13.613 "data_offset": 2048, 00:33:13.613 "data_size": 63488 00:33:13.613 } 00:33:13.613 ] 00:33:13.613 }' 00:33:13.613 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:13.613 12:15:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.178 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.178 12:15:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:14.434 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:33:14.434 
12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:14.692 [2024-07-21 12:15:13.501351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:14.692 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:14.950 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:14.950 "name": "Existed_Raid", 00:33:14.950 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:14.950 "strip_size_kb": 64, 00:33:14.950 "state": "configuring", 00:33:14.950 "raid_level": "raid5f", 00:33:14.950 "superblock": true, 00:33:14.950 "num_base_bdevs": 4, 00:33:14.950 "num_base_bdevs_discovered": 3, 00:33:14.950 "num_base_bdevs_operational": 4, 00:33:14.950 "base_bdevs_list": [ 00:33:14.950 { 00:33:14.950 "name": "BaseBdev1", 00:33:14.950 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:14.950 "is_configured": true, 00:33:14.950 "data_offset": 2048, 00:33:14.950 "data_size": 63488 00:33:14.950 }, 00:33:14.950 { 00:33:14.950 "name": null, 00:33:14.950 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:14.950 "is_configured": false, 00:33:14.950 "data_offset": 2048, 00:33:14.950 "data_size": 63488 00:33:14.950 }, 00:33:14.950 { 00:33:14.950 "name": "BaseBdev3", 00:33:14.950 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:14.950 "is_configured": true, 00:33:14.950 "data_offset": 2048, 00:33:14.950 "data_size": 63488 00:33:14.950 }, 00:33:14.950 { 00:33:14.950 "name": "BaseBdev4", 00:33:14.950 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:14.950 "is_configured": true, 00:33:14.950 "data_offset": 2048, 00:33:14.950 "data_size": 63488 00:33:14.950 } 00:33:14.950 ] 00:33:14.950 }' 00:33:14.950 12:15:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:14.950 12:15:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:15.882 12:15:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:15.882 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:15.882 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:33:15.882 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:33:16.140 [2024-07-21 12:15:14.845778] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.140 12:15:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:16.398 12:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:16.398 "name": "Existed_Raid", 00:33:16.398 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:16.398 "strip_size_kb": 64, 00:33:16.398 "state": "configuring", 00:33:16.398 "raid_level": "raid5f", 00:33:16.398 "superblock": true, 00:33:16.398 "num_base_bdevs": 4, 00:33:16.398 "num_base_bdevs_discovered": 2, 00:33:16.398 "num_base_bdevs_operational": 4, 00:33:16.398 "base_bdevs_list": [ 00:33:16.398 { 00:33:16.398 "name": null, 00:33:16.398 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:16.398 "is_configured": false, 00:33:16.398 "data_offset": 2048, 00:33:16.398 "data_size": 63488 00:33:16.398 }, 00:33:16.398 { 00:33:16.398 "name": null, 00:33:16.398 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:16.398 "is_configured": false, 00:33:16.398 "data_offset": 2048, 00:33:16.398 "data_size": 63488 00:33:16.398 }, 00:33:16.398 { 00:33:16.398 "name": "BaseBdev3", 00:33:16.398 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:16.398 "is_configured": true, 00:33:16.398 "data_offset": 2048, 00:33:16.398 "data_size": 63488 00:33:16.398 }, 00:33:16.398 { 00:33:16.398 "name": "BaseBdev4", 00:33:16.398 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:16.398 
"is_configured": true, 00:33:16.398 "data_offset": 2048, 00:33:16.398 "data_size": 63488 00:33:16.398 } 00:33:16.398 ] 00:33:16.398 }' 00:33:16.398 12:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:16.398 12:15:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:16.963 12:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:16.963 12:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:17.222 12:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:33:17.222 12:15:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:17.480 [2024-07-21 12:15:16.207536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:17.480 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:33:17.480 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:17.480 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:17.480 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:17.481 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:17.481 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:17.481 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:17.481 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:17.481 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:17.481 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:17.481 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:17.481 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:17.739 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:17.739 "name": "Existed_Raid", 00:33:17.739 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:17.739 "strip_size_kb": 64, 00:33:17.739 "state": "configuring", 00:33:17.739 "raid_level": "raid5f", 00:33:17.739 "superblock": true, 00:33:17.739 "num_base_bdevs": 4, 00:33:17.739 "num_base_bdevs_discovered": 3, 00:33:17.739 "num_base_bdevs_operational": 4, 00:33:17.739 "base_bdevs_list": [ 00:33:17.739 { 00:33:17.739 "name": null, 00:33:17.739 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:17.739 "is_configured": false, 00:33:17.739 "data_offset": 2048, 00:33:17.739 "data_size": 63488 00:33:17.739 }, 00:33:17.739 { 00:33:17.739 "name": "BaseBdev2", 00:33:17.739 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:17.739 "is_configured": true, 00:33:17.739 
"data_offset": 2048, 00:33:17.739 "data_size": 63488 00:33:17.739 }, 00:33:17.739 { 00:33:17.739 "name": "BaseBdev3", 00:33:17.739 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:17.739 "is_configured": true, 00:33:17.739 "data_offset": 2048, 00:33:17.739 "data_size": 63488 00:33:17.739 }, 00:33:17.739 { 00:33:17.739 "name": "BaseBdev4", 00:33:17.739 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:17.739 "is_configured": true, 00:33:17.739 "data_offset": 2048, 00:33:17.739 "data_size": 63488 00:33:17.739 } 00:33:17.739 ] 00:33:17.739 }' 00:33:17.739 12:15:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:17.739 12:15:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:18.304 12:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.304 12:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:18.562 12:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:33:18.562 12:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:18.562 12:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:18.819 12:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u defe5c28-d3fb-4a45-86d8-9df655f90b0c 00:33:19.077 [2024-07-21 12:15:17.901469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:19.077 [2024-07-21 12:15:17.902230] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:33:19.077 [2024-07-21 12:15:17.902350] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:19.077 [2024-07-21 12:15:17.902488] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:19.077 NewBaseBdev 00:33:19.077 [2024-07-21 12:15:17.903262] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:33:19.077 [2024-07-21 12:15:17.903428] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000009080 00:33:19.077 [2024-07-21 12:15:17.903623] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:19.077 12:15:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:33:19.077 12:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@895 -- # local bdev_name=NewBaseBdev 00:33:19.077 12:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:33:19.077 12:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@897 -- # local i 00:33:19.078 12:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:33:19.078 12:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:33:19.078 12:15:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:33:19.335 12:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:19.593 [ 00:33:19.593 { 00:33:19.593 "name": "NewBaseBdev", 00:33:19.593 "aliases": [ 00:33:19.593 "defe5c28-d3fb-4a45-86d8-9df655f90b0c" 00:33:19.593 ], 00:33:19.593 "product_name": "Malloc disk", 00:33:19.593 "block_size": 512, 00:33:19.593 "num_blocks": 65536, 00:33:19.593 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:19.593 "assigned_rate_limits": { 00:33:19.593 "rw_ios_per_sec": 0, 00:33:19.593 "rw_mbytes_per_sec": 0, 00:33:19.593 "r_mbytes_per_sec": 0, 00:33:19.593 "w_mbytes_per_sec": 0 00:33:19.593 }, 00:33:19.593 "claimed": true, 00:33:19.593 "claim_type": "exclusive_write", 00:33:19.593 "zoned": false, 00:33:19.593 "supported_io_types": { 00:33:19.593 "read": true, 00:33:19.593 "write": true, 00:33:19.593 "unmap": true, 00:33:19.593 "write_zeroes": true, 00:33:19.593 "flush": true, 00:33:19.593 "reset": true, 00:33:19.593 "compare": false, 00:33:19.593 "compare_and_write": false, 00:33:19.593 "abort": true, 00:33:19.593 "nvme_admin": false, 00:33:19.593 "nvme_io": false 00:33:19.593 }, 00:33:19.593 "memory_domains": [ 00:33:19.593 { 00:33:19.593 "dma_device_id": "system", 00:33:19.593 "dma_device_type": 1 00:33:19.593 }, 00:33:19.593 { 00:33:19.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:19.593 "dma_device_type": 2 00:33:19.593 } 00:33:19.593 ], 00:33:19.593 "driver_specific": {} 00:33:19.593 } 00:33:19.593 ] 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # return 0 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:19.593 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:19.864 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:19.864 "name": "Existed_Raid", 00:33:19.864 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:19.864 "strip_size_kb": 64, 00:33:19.864 "state": "online", 00:33:19.864 "raid_level": 
"raid5f", 00:33:19.864 "superblock": true, 00:33:19.864 "num_base_bdevs": 4, 00:33:19.864 "num_base_bdevs_discovered": 4, 00:33:19.864 "num_base_bdevs_operational": 4, 00:33:19.864 "base_bdevs_list": [ 00:33:19.864 { 00:33:19.864 "name": "NewBaseBdev", 00:33:19.864 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:19.864 "is_configured": true, 00:33:19.864 "data_offset": 2048, 00:33:19.864 "data_size": 63488 00:33:19.864 }, 00:33:19.864 { 00:33:19.864 "name": "BaseBdev2", 00:33:19.864 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:19.864 "is_configured": true, 00:33:19.864 "data_offset": 2048, 00:33:19.864 "data_size": 63488 00:33:19.864 }, 00:33:19.864 { 00:33:19.864 "name": "BaseBdev3", 00:33:19.864 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:19.864 "is_configured": true, 00:33:19.864 "data_offset": 2048, 00:33:19.864 "data_size": 63488 00:33:19.864 }, 00:33:19.864 { 00:33:19.864 "name": "BaseBdev4", 00:33:19.864 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:19.864 "is_configured": true, 00:33:19.864 "data_offset": 2048, 00:33:19.864 "data_size": 63488 00:33:19.864 } 00:33:19.864 ] 00:33:19.864 }' 00:33:19.864 12:15:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:19.864 12:15:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:20.429 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:33:20.429 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:33:20.429 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:20.429 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:20.429 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:20.429 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:33:20.429 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:33:20.429 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:20.687 [2024-07-21 12:15:19.486319] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:20.687 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:20.687 "name": "Existed_Raid", 00:33:20.687 "aliases": [ 00:33:20.687 "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d" 00:33:20.687 ], 00:33:20.687 "product_name": "Raid Volume", 00:33:20.687 "block_size": 512, 00:33:20.687 "num_blocks": 190464, 00:33:20.687 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:20.687 "assigned_rate_limits": { 00:33:20.687 "rw_ios_per_sec": 0, 00:33:20.687 "rw_mbytes_per_sec": 0, 00:33:20.687 "r_mbytes_per_sec": 0, 00:33:20.687 "w_mbytes_per_sec": 0 00:33:20.687 }, 00:33:20.687 "claimed": false, 00:33:20.687 "zoned": false, 00:33:20.687 "supported_io_types": { 00:33:20.687 "read": true, 00:33:20.687 "write": true, 00:33:20.687 "unmap": false, 00:33:20.687 "write_zeroes": true, 00:33:20.687 "flush": false, 00:33:20.687 "reset": true, 00:33:20.687 "compare": false, 00:33:20.687 "compare_and_write": false, 00:33:20.687 "abort": false, 00:33:20.687 "nvme_admin": false, 00:33:20.687 "nvme_io": false 00:33:20.687 }, 00:33:20.687 
"driver_specific": { 00:33:20.687 "raid": { 00:33:20.687 "uuid": "d8c0c4eb-7964-4bd6-8733-d5ddb6fffd2d", 00:33:20.687 "strip_size_kb": 64, 00:33:20.687 "state": "online", 00:33:20.687 "raid_level": "raid5f", 00:33:20.687 "superblock": true, 00:33:20.687 "num_base_bdevs": 4, 00:33:20.687 "num_base_bdevs_discovered": 4, 00:33:20.687 "num_base_bdevs_operational": 4, 00:33:20.687 "base_bdevs_list": [ 00:33:20.687 { 00:33:20.687 "name": "NewBaseBdev", 00:33:20.687 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:20.687 "is_configured": true, 00:33:20.687 "data_offset": 2048, 00:33:20.687 "data_size": 63488 00:33:20.687 }, 00:33:20.687 { 00:33:20.687 "name": "BaseBdev2", 00:33:20.687 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:20.687 "is_configured": true, 00:33:20.687 "data_offset": 2048, 00:33:20.687 "data_size": 63488 00:33:20.687 }, 00:33:20.687 { 00:33:20.687 "name": "BaseBdev3", 00:33:20.687 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:20.687 "is_configured": true, 00:33:20.687 "data_offset": 2048, 00:33:20.687 "data_size": 63488 00:33:20.687 }, 00:33:20.687 { 00:33:20.687 "name": "BaseBdev4", 00:33:20.687 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:20.687 "is_configured": true, 00:33:20.687 "data_offset": 2048, 00:33:20.687 "data_size": 63488 00:33:20.687 } 00:33:20.687 ] 00:33:20.687 } 00:33:20.687 } 00:33:20.687 }' 00:33:20.687 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:20.946 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:33:20.946 BaseBdev2 00:33:20.946 BaseBdev3 00:33:20.946 BaseBdev4' 00:33:20.946 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:20.946 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:33:20.946 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:20.946 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:20.946 "name": "NewBaseBdev", 00:33:20.946 "aliases": [ 00:33:20.946 "defe5c28-d3fb-4a45-86d8-9df655f90b0c" 00:33:20.946 ], 00:33:20.946 "product_name": "Malloc disk", 00:33:20.946 "block_size": 512, 00:33:20.946 "num_blocks": 65536, 00:33:20.946 "uuid": "defe5c28-d3fb-4a45-86d8-9df655f90b0c", 00:33:20.946 "assigned_rate_limits": { 00:33:20.946 "rw_ios_per_sec": 0, 00:33:20.946 "rw_mbytes_per_sec": 0, 00:33:20.946 "r_mbytes_per_sec": 0, 00:33:20.946 "w_mbytes_per_sec": 0 00:33:20.946 }, 00:33:20.946 "claimed": true, 00:33:20.946 "claim_type": "exclusive_write", 00:33:20.946 "zoned": false, 00:33:20.946 "supported_io_types": { 00:33:20.946 "read": true, 00:33:20.946 "write": true, 00:33:20.946 "unmap": true, 00:33:20.946 "write_zeroes": true, 00:33:20.946 "flush": true, 00:33:20.946 "reset": true, 00:33:20.946 "compare": false, 00:33:20.946 "compare_and_write": false, 00:33:20.946 "abort": true, 00:33:20.946 "nvme_admin": false, 00:33:20.946 "nvme_io": false 00:33:20.946 }, 00:33:20.946 "memory_domains": [ 00:33:20.946 { 00:33:20.946 "dma_device_id": "system", 00:33:20.946 "dma_device_type": 1 00:33:20.946 }, 00:33:20.946 { 00:33:20.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:20.946 "dma_device_type": 2 00:33:20.946 } 00:33:20.946 ], 00:33:20.946 
"driver_specific": {} 00:33:20.946 }' 00:33:20.946 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:20.946 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:21.204 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:21.204 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:21.204 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:21.204 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:21.204 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:21.204 12:15:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:21.204 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:21.204 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:21.461 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:21.461 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:21.461 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:21.461 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:33:21.461 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:21.718 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:21.718 "name": "BaseBdev2", 00:33:21.718 "aliases": [ 00:33:21.718 "122b842d-b545-4839-a102-c7027be6908f" 00:33:21.718 ], 00:33:21.718 "product_name": "Malloc disk", 00:33:21.718 "block_size": 512, 00:33:21.718 "num_blocks": 65536, 00:33:21.718 "uuid": "122b842d-b545-4839-a102-c7027be6908f", 00:33:21.718 "assigned_rate_limits": { 00:33:21.718 "rw_ios_per_sec": 0, 00:33:21.718 "rw_mbytes_per_sec": 0, 00:33:21.718 "r_mbytes_per_sec": 0, 00:33:21.718 "w_mbytes_per_sec": 0 00:33:21.718 }, 00:33:21.718 "claimed": true, 00:33:21.718 "claim_type": "exclusive_write", 00:33:21.718 "zoned": false, 00:33:21.718 "supported_io_types": { 00:33:21.718 "read": true, 00:33:21.718 "write": true, 00:33:21.718 "unmap": true, 00:33:21.718 "write_zeroes": true, 00:33:21.718 "flush": true, 00:33:21.718 "reset": true, 00:33:21.718 "compare": false, 00:33:21.718 "compare_and_write": false, 00:33:21.718 "abort": true, 00:33:21.718 "nvme_admin": false, 00:33:21.718 "nvme_io": false 00:33:21.718 }, 00:33:21.718 "memory_domains": [ 00:33:21.718 { 00:33:21.718 "dma_device_id": "system", 00:33:21.718 "dma_device_type": 1 00:33:21.718 }, 00:33:21.718 { 00:33:21.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:21.718 "dma_device_type": 2 00:33:21.718 } 00:33:21.718 ], 00:33:21.718 "driver_specific": {} 00:33:21.718 }' 00:33:21.718 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:21.718 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:21.718 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:21.718 12:15:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:21.718 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:33:21.981 12:15:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:22.239 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:22.239 "name": "BaseBdev3", 00:33:22.239 "aliases": [ 00:33:22.239 "c846dc28-24b1-4abb-9554-1caf5a320a89" 00:33:22.239 ], 00:33:22.239 "product_name": "Malloc disk", 00:33:22.239 "block_size": 512, 00:33:22.239 "num_blocks": 65536, 00:33:22.239 "uuid": "c846dc28-24b1-4abb-9554-1caf5a320a89", 00:33:22.239 "assigned_rate_limits": { 00:33:22.239 "rw_ios_per_sec": 0, 00:33:22.239 "rw_mbytes_per_sec": 0, 00:33:22.239 "r_mbytes_per_sec": 0, 00:33:22.239 "w_mbytes_per_sec": 0 00:33:22.239 }, 00:33:22.239 "claimed": true, 00:33:22.239 "claim_type": "exclusive_write", 00:33:22.239 "zoned": false, 00:33:22.239 "supported_io_types": { 00:33:22.239 "read": true, 00:33:22.239 "write": true, 00:33:22.239 "unmap": true, 00:33:22.239 "write_zeroes": true, 00:33:22.239 "flush": true, 00:33:22.239 "reset": true, 00:33:22.239 "compare": false, 00:33:22.239 "compare_and_write": false, 00:33:22.239 "abort": true, 00:33:22.239 "nvme_admin": false, 00:33:22.239 "nvme_io": false 00:33:22.239 }, 00:33:22.239 "memory_domains": [ 00:33:22.239 { 00:33:22.239 "dma_device_id": "system", 00:33:22.239 "dma_device_type": 1 00:33:22.239 }, 00:33:22.239 { 00:33:22.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:22.239 "dma_device_type": 2 00:33:22.239 } 00:33:22.239 ], 00:33:22.239 "driver_specific": {} 00:33:22.239 }' 00:33:22.239 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:22.498 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:22.498 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:22.498 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:22.498 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:22.498 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:22.498 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:22.498 
12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:22.498 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:22.498 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:22.756 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:22.756 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:22.756 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:22.756 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:33:22.756 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:23.014 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:23.014 "name": "BaseBdev4", 00:33:23.014 "aliases": [ 00:33:23.014 "c6431819-c89a-4578-934c-440ed0374ac8" 00:33:23.014 ], 00:33:23.014 "product_name": "Malloc disk", 00:33:23.014 "block_size": 512, 00:33:23.014 "num_blocks": 65536, 00:33:23.014 "uuid": "c6431819-c89a-4578-934c-440ed0374ac8", 00:33:23.014 "assigned_rate_limits": { 00:33:23.014 "rw_ios_per_sec": 0, 00:33:23.014 "rw_mbytes_per_sec": 0, 00:33:23.014 "r_mbytes_per_sec": 0, 00:33:23.014 "w_mbytes_per_sec": 0 00:33:23.014 }, 00:33:23.014 "claimed": true, 00:33:23.014 "claim_type": "exclusive_write", 00:33:23.014 "zoned": false, 00:33:23.015 "supported_io_types": { 00:33:23.015 "read": true, 00:33:23.015 "write": true, 00:33:23.015 "unmap": true, 00:33:23.015 "write_zeroes": true, 00:33:23.015 "flush": true, 00:33:23.015 "reset": true, 00:33:23.015 "compare": false, 00:33:23.015 "compare_and_write": false, 00:33:23.015 "abort": true, 00:33:23.015 "nvme_admin": false, 00:33:23.015 "nvme_io": false 00:33:23.015 }, 00:33:23.015 "memory_domains": [ 00:33:23.015 { 00:33:23.015 "dma_device_id": "system", 00:33:23.015 "dma_device_type": 1 00:33:23.015 }, 00:33:23.015 { 00:33:23.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:23.015 "dma_device_type": 2 00:33:23.015 } 00:33:23.015 ], 00:33:23.015 "driver_specific": {} 00:33:23.015 }' 00:33:23.015 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:23.015 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:23.015 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:23.015 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:23.273 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:23.273 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:23.273 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:23.273 12:15:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:23.273 12:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:23.273 12:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:23.273 12:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:33:23.273 12:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:23.273 12:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:33:23.532 [2024-07-21 12:15:22.386787] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:23.532 [2024-07-21 12:15:22.386995] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:23.532 [2024-07-21 12:15:22.387164] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:23.532 [2024-07-21 12:15:22.387621] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:23.532 [2024-07-21 12:15:22.387745] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name Existed_Raid, state offline 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 165410 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@946 -- # '[' -z 165410 ']' 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # kill -0 165410 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # uname 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 165410 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 165410' 00:33:23.791 killing process with pid 165410 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@965 -- # kill 165410 00:33:23.791 [2024-07-21 12:15:22.433171] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:23.791 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # wait 165410 00:33:23.791 [2024-07-21 12:15:22.477871] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:24.050 ************************************ 00:33:24.050 END TEST raid5f_state_function_test_sb 00:33:24.050 ************************************ 00:33:24.050 12:15:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:33:24.050 00:33:24.050 real 0m32.562s 00:33:24.050 user 1m1.866s 00:33:24.050 sys 0m3.928s 00:33:24.050 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:24.050 12:15:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:24.050 12:15:22 bdev_raid -- bdev/bdev_raid.sh@888 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:33:24.050 12:15:22 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:33:24.050 12:15:22 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:24.050 12:15:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:24.050 ************************************ 
00:33:24.050 START TEST raid5f_superblock_test 00:33:24.050 ************************************ 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1121 -- # raid_superblock_test raid5f 4 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid5f 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid5f '!=' raid1 ']' 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=166485 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 166485 /var/tmp/spdk-raid.sock 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@827 -- # '[' -z 166485 ']' 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:24.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:24.050 12:15:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:24.050 [2024-07-21 12:15:22.891541] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:33:24.050 [2024-07-21 12:15:22.892001] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid166485 ] 00:33:24.307 [2024-07-21 12:15:23.060485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.308 [2024-07-21 12:15:23.121502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:24.308 [2024-07-21 12:15:23.173756] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # return 0 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:25.241 12:15:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:33:25.499 malloc1 00:33:25.499 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:25.758 [2024-07-21 12:15:24.391694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:25.758 [2024-07-21 12:15:24.391944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:25.758 [2024-07-21 12:15:24.392156] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:33:25.758 [2024-07-21 12:15:24.392317] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:25.758 [2024-07-21 12:15:24.394844] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:25.758 [2024-07-21 12:15:24.395063] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:25.758 pt1 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:33:25.758 malloc2 00:33:25.758 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:26.016 [2024-07-21 12:15:24.797769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:26.017 [2024-07-21 12:15:24.797970] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.017 [2024-07-21 12:15:24.798064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:26.017 [2024-07-21 12:15:24.798322] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.017 [2024-07-21 12:15:24.803940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.017 [2024-07-21 12:15:24.804303] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:26.017 pt2 00:33:26.017 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:26.017 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:26.017 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:33:26.017 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:33:26.017 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:33:26.017 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:26.017 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:26.017 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:26.017 12:15:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:33:26.276 malloc3 00:33:26.276 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:26.534 [2024-07-21 12:15:25.344864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:26.534 [2024-07-21 12:15:25.345123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:26.534 [2024-07-21 12:15:25.345231] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:33:26.534 [2024-07-21 12:15:25.345510] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:26.534 [2024-07-21 12:15:25.347720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:26.534 [2024-07-21 12:15:25.347897] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt3 00:33:26.534 pt3 00:33:26.534 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:26.534 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:26.534 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:33:26.534 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:33:26.534 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:33:26.535 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:33:26.535 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:33:26.535 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:33:26.535 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:33:26.793 malloc4 00:33:26.793 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:27.052 [2024-07-21 12:15:25.786448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:27.052 [2024-07-21 12:15:25.786859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:27.052 [2024-07-21 12:15:25.787051] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:33:27.052 [2024-07-21 12:15:25.787204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:27.052 [2024-07-21 12:15:25.789568] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:27.052 [2024-07-21 12:15:25.789740] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:27.052 pt4 00:33:27.052 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:33:27.052 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:33:27.052 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:33:27.310 [2024-07-21 12:15:25.990671] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:27.310 [2024-07-21 12:15:25.992762] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:27.310 [2024-07-21 12:15:25.992961] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:27.310 [2024-07-21 12:15:25.993062] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:27.310 [2024-07-21 12:15:25.993475] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009380 00:33:27.310 [2024-07-21 12:15:25.993583] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:27.310 [2024-07-21 12:15:25.993795] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:33:27.310 [2024-07-21 12:15:25.994658] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009380 
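Condensed, the assembly recorded above comes down to the rpc.py sequence below; every command is copied from the xtrace lines, only folded into a loop (a sketch, with rpc and sock as in the earlier snippet). Four 32 MiB malloc bdevs with 512-byte blocks are each wrapped in a passthru bdev with a fixed UUID and then combined into a raid5f volume with 64 KiB strips and an on-disk superblock (-s):

    for i in 1 2 3 4; do
        # 32 MiB backing bdev, 512-byte blocks
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
        # passthru wrapper that the raid module will claim
        "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done
    # 64 KiB strip size, raid5f level, superblock written to every base bdev
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

The geometry matches the blockcnt logged above: each 65536-block malloc bdev gives up 2048 blocks to the superblock (data_offset 2048, data_size 63488), and with one member's worth of capacity kept for parity the volume ends up at 3 x 63488 = 190464 blocks of 512 bytes.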
00:33:27.310 [2024-07-21 12:15:25.994810] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009380 00:33:27.311 [2024-07-21 12:15:25.995159] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:27.311 12:15:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:27.311 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:27.569 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:27.569 "name": "raid_bdev1", 00:33:27.569 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:27.569 "strip_size_kb": 64, 00:33:27.569 "state": "online", 00:33:27.569 "raid_level": "raid5f", 00:33:27.569 "superblock": true, 00:33:27.569 "num_base_bdevs": 4, 00:33:27.569 "num_base_bdevs_discovered": 4, 00:33:27.569 "num_base_bdevs_operational": 4, 00:33:27.569 "base_bdevs_list": [ 00:33:27.569 { 00:33:27.569 "name": "pt1", 00:33:27.569 "uuid": "6178f4c5-402c-5cc7-8582-1918a6450662", 00:33:27.569 "is_configured": true, 00:33:27.569 "data_offset": 2048, 00:33:27.569 "data_size": 63488 00:33:27.569 }, 00:33:27.569 { 00:33:27.569 "name": "pt2", 00:33:27.569 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:27.569 "is_configured": true, 00:33:27.569 "data_offset": 2048, 00:33:27.569 "data_size": 63488 00:33:27.569 }, 00:33:27.569 { 00:33:27.569 "name": "pt3", 00:33:27.569 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:27.569 "is_configured": true, 00:33:27.569 "data_offset": 2048, 00:33:27.569 "data_size": 63488 00:33:27.569 }, 00:33:27.569 { 00:33:27.569 "name": "pt4", 00:33:27.569 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:27.569 "is_configured": true, 00:33:27.569 "data_offset": 2048, 00:33:27.569 "data_size": 63488 00:33:27.569 } 00:33:27.569 ] 00:33:27.569 }' 00:33:27.569 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:27.569 12:15:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:28.160 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:33:28.160 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local 
raid_bdev_name=raid_bdev1 00:33:28.160 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:28.160 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:28.160 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:28.160 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:28.160 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:28.160 12:15:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:28.426 [2024-07-21 12:15:27.127463] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:28.426 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:28.426 "name": "raid_bdev1", 00:33:28.426 "aliases": [ 00:33:28.426 "2d666eea-5dbd-4ac2-a560-41b54b642405" 00:33:28.426 ], 00:33:28.426 "product_name": "Raid Volume", 00:33:28.426 "block_size": 512, 00:33:28.426 "num_blocks": 190464, 00:33:28.426 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:28.426 "assigned_rate_limits": { 00:33:28.426 "rw_ios_per_sec": 0, 00:33:28.426 "rw_mbytes_per_sec": 0, 00:33:28.426 "r_mbytes_per_sec": 0, 00:33:28.426 "w_mbytes_per_sec": 0 00:33:28.426 }, 00:33:28.426 "claimed": false, 00:33:28.426 "zoned": false, 00:33:28.426 "supported_io_types": { 00:33:28.426 "read": true, 00:33:28.426 "write": true, 00:33:28.426 "unmap": false, 00:33:28.426 "write_zeroes": true, 00:33:28.426 "flush": false, 00:33:28.426 "reset": true, 00:33:28.426 "compare": false, 00:33:28.427 "compare_and_write": false, 00:33:28.427 "abort": false, 00:33:28.427 "nvme_admin": false, 00:33:28.427 "nvme_io": false 00:33:28.427 }, 00:33:28.427 "driver_specific": { 00:33:28.427 "raid": { 00:33:28.427 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:28.427 "strip_size_kb": 64, 00:33:28.427 "state": "online", 00:33:28.427 "raid_level": "raid5f", 00:33:28.427 "superblock": true, 00:33:28.427 "num_base_bdevs": 4, 00:33:28.427 "num_base_bdevs_discovered": 4, 00:33:28.427 "num_base_bdevs_operational": 4, 00:33:28.427 "base_bdevs_list": [ 00:33:28.427 { 00:33:28.427 "name": "pt1", 00:33:28.427 "uuid": "6178f4c5-402c-5cc7-8582-1918a6450662", 00:33:28.427 "is_configured": true, 00:33:28.427 "data_offset": 2048, 00:33:28.427 "data_size": 63488 00:33:28.427 }, 00:33:28.427 { 00:33:28.427 "name": "pt2", 00:33:28.427 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:28.427 "is_configured": true, 00:33:28.427 "data_offset": 2048, 00:33:28.427 "data_size": 63488 00:33:28.427 }, 00:33:28.427 { 00:33:28.427 "name": "pt3", 00:33:28.427 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:28.427 "is_configured": true, 00:33:28.427 "data_offset": 2048, 00:33:28.427 "data_size": 63488 00:33:28.427 }, 00:33:28.427 { 00:33:28.427 "name": "pt4", 00:33:28.427 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:28.427 "is_configured": true, 00:33:28.427 "data_offset": 2048, 00:33:28.427 "data_size": 63488 00:33:28.427 } 00:33:28.427 ] 00:33:28.427 } 00:33:28.427 } 00:33:28.427 }' 00:33:28.427 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:28.427 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:28.427 pt2 
00:33:28.427 pt3 00:33:28.427 pt4' 00:33:28.427 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:28.427 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:28.427 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:28.685 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:28.685 "name": "pt1", 00:33:28.685 "aliases": [ 00:33:28.685 "6178f4c5-402c-5cc7-8582-1918a6450662" 00:33:28.685 ], 00:33:28.685 "product_name": "passthru", 00:33:28.685 "block_size": 512, 00:33:28.685 "num_blocks": 65536, 00:33:28.685 "uuid": "6178f4c5-402c-5cc7-8582-1918a6450662", 00:33:28.685 "assigned_rate_limits": { 00:33:28.685 "rw_ios_per_sec": 0, 00:33:28.685 "rw_mbytes_per_sec": 0, 00:33:28.685 "r_mbytes_per_sec": 0, 00:33:28.685 "w_mbytes_per_sec": 0 00:33:28.685 }, 00:33:28.685 "claimed": true, 00:33:28.685 "claim_type": "exclusive_write", 00:33:28.685 "zoned": false, 00:33:28.685 "supported_io_types": { 00:33:28.685 "read": true, 00:33:28.685 "write": true, 00:33:28.685 "unmap": true, 00:33:28.685 "write_zeroes": true, 00:33:28.685 "flush": true, 00:33:28.685 "reset": true, 00:33:28.685 "compare": false, 00:33:28.685 "compare_and_write": false, 00:33:28.685 "abort": true, 00:33:28.685 "nvme_admin": false, 00:33:28.685 "nvme_io": false 00:33:28.685 }, 00:33:28.685 "memory_domains": [ 00:33:28.685 { 00:33:28.685 "dma_device_id": "system", 00:33:28.685 "dma_device_type": 1 00:33:28.685 }, 00:33:28.685 { 00:33:28.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:28.685 "dma_device_type": 2 00:33:28.685 } 00:33:28.685 ], 00:33:28.685 "driver_specific": { 00:33:28.685 "passthru": { 00:33:28.685 "name": "pt1", 00:33:28.685 "base_bdev_name": "malloc1" 00:33:28.685 } 00:33:28.685 } 00:33:28.685 }' 00:33:28.685 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.685 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:28.685 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:28.685 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.944 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:28.944 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:28.944 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.944 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:28.944 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:28.944 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.944 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:28.944 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:29.203 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:29.203 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:29.203 12:15:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
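The jq probes above are the per-member property check of verify_raid_bdev_properties: for each passthru base bdev the test fetches its descriptor and compares block_size, md_size, md_interleave and dif_type against the same fields of the raid volume, which is why the assertions read [[ 512 == 512 ]] and [[ null == null ]]; the same fetch-and-compare repeats below for pt2, pt3 and pt4. Folded into a loop, the check is roughly (a condensation, not the literal bdev_raid.sh code):

    raid_info=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq '.[]')
    for name in pt1 pt2 pt3 pt4; do
        base_info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        for prop in .block_size .md_size .md_interleave .dif_type; do
            # base bdev layout must match the raid volume's layout
            [[ $(jq "$prop" <<<"$base_info") == $(jq "$prop" <<<"$raid_info") ]]
        done
    done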
00:33:29.203 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:29.203 "name": "pt2", 00:33:29.203 "aliases": [ 00:33:29.203 "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac" 00:33:29.203 ], 00:33:29.203 "product_name": "passthru", 00:33:29.203 "block_size": 512, 00:33:29.203 "num_blocks": 65536, 00:33:29.203 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:29.203 "assigned_rate_limits": { 00:33:29.203 "rw_ios_per_sec": 0, 00:33:29.203 "rw_mbytes_per_sec": 0, 00:33:29.203 "r_mbytes_per_sec": 0, 00:33:29.203 "w_mbytes_per_sec": 0 00:33:29.203 }, 00:33:29.203 "claimed": true, 00:33:29.203 "claim_type": "exclusive_write", 00:33:29.203 "zoned": false, 00:33:29.203 "supported_io_types": { 00:33:29.203 "read": true, 00:33:29.203 "write": true, 00:33:29.203 "unmap": true, 00:33:29.203 "write_zeroes": true, 00:33:29.203 "flush": true, 00:33:29.203 "reset": true, 00:33:29.203 "compare": false, 00:33:29.203 "compare_and_write": false, 00:33:29.203 "abort": true, 00:33:29.203 "nvme_admin": false, 00:33:29.203 "nvme_io": false 00:33:29.203 }, 00:33:29.203 "memory_domains": [ 00:33:29.203 { 00:33:29.203 "dma_device_id": "system", 00:33:29.203 "dma_device_type": 1 00:33:29.203 }, 00:33:29.203 { 00:33:29.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:29.203 "dma_device_type": 2 00:33:29.203 } 00:33:29.203 ], 00:33:29.203 "driver_specific": { 00:33:29.203 "passthru": { 00:33:29.203 "name": "pt2", 00:33:29.203 "base_bdev_name": "malloc2" 00:33:29.203 } 00:33:29.203 } 00:33:29.203 }' 00:33:29.203 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:29.203 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:29.462 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:29.462 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:29.462 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:29.462 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:29.462 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:29.462 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:29.462 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:29.462 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:29.721 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:29.721 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:29.721 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:29.721 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:29.721 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:33:29.979 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:29.980 "name": "pt3", 00:33:29.980 "aliases": [ 00:33:29.980 "d989c8bf-ef76-537f-b124-0e4e13640b74" 00:33:29.980 ], 00:33:29.980 "product_name": "passthru", 00:33:29.980 "block_size": 512, 00:33:29.980 "num_blocks": 65536, 00:33:29.980 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:29.980 
"assigned_rate_limits": { 00:33:29.980 "rw_ios_per_sec": 0, 00:33:29.980 "rw_mbytes_per_sec": 0, 00:33:29.980 "r_mbytes_per_sec": 0, 00:33:29.980 "w_mbytes_per_sec": 0 00:33:29.980 }, 00:33:29.980 "claimed": true, 00:33:29.980 "claim_type": "exclusive_write", 00:33:29.980 "zoned": false, 00:33:29.980 "supported_io_types": { 00:33:29.980 "read": true, 00:33:29.980 "write": true, 00:33:29.980 "unmap": true, 00:33:29.980 "write_zeroes": true, 00:33:29.980 "flush": true, 00:33:29.980 "reset": true, 00:33:29.980 "compare": false, 00:33:29.980 "compare_and_write": false, 00:33:29.980 "abort": true, 00:33:29.980 "nvme_admin": false, 00:33:29.980 "nvme_io": false 00:33:29.980 }, 00:33:29.980 "memory_domains": [ 00:33:29.980 { 00:33:29.980 "dma_device_id": "system", 00:33:29.980 "dma_device_type": 1 00:33:29.980 }, 00:33:29.980 { 00:33:29.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:29.980 "dma_device_type": 2 00:33:29.980 } 00:33:29.980 ], 00:33:29.980 "driver_specific": { 00:33:29.980 "passthru": { 00:33:29.980 "name": "pt3", 00:33:29.980 "base_bdev_name": "malloc3" 00:33:29.980 } 00:33:29.980 } 00:33:29.980 }' 00:33:29.980 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:29.980 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:29.980 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:29.980 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:29.980 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:30.238 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:30.238 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:30.238 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:30.238 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:30.238 12:15:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:30.238 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:30.238 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:30.238 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:30.238 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:33:30.238 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:30.496 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:30.496 "name": "pt4", 00:33:30.496 "aliases": [ 00:33:30.496 "70c3917d-a850-5147-9a19-1d1c02bc1f8e" 00:33:30.496 ], 00:33:30.496 "product_name": "passthru", 00:33:30.496 "block_size": 512, 00:33:30.496 "num_blocks": 65536, 00:33:30.496 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:30.496 "assigned_rate_limits": { 00:33:30.496 "rw_ios_per_sec": 0, 00:33:30.496 "rw_mbytes_per_sec": 0, 00:33:30.496 "r_mbytes_per_sec": 0, 00:33:30.496 "w_mbytes_per_sec": 0 00:33:30.496 }, 00:33:30.496 "claimed": true, 00:33:30.496 "claim_type": "exclusive_write", 00:33:30.496 "zoned": false, 00:33:30.496 "supported_io_types": { 00:33:30.496 "read": true, 00:33:30.496 "write": true, 00:33:30.496 "unmap": true, 00:33:30.496 
"write_zeroes": true, 00:33:30.496 "flush": true, 00:33:30.496 "reset": true, 00:33:30.496 "compare": false, 00:33:30.496 "compare_and_write": false, 00:33:30.496 "abort": true, 00:33:30.496 "nvme_admin": false, 00:33:30.496 "nvme_io": false 00:33:30.496 }, 00:33:30.496 "memory_domains": [ 00:33:30.496 { 00:33:30.496 "dma_device_id": "system", 00:33:30.496 "dma_device_type": 1 00:33:30.496 }, 00:33:30.496 { 00:33:30.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:30.497 "dma_device_type": 2 00:33:30.497 } 00:33:30.497 ], 00:33:30.497 "driver_specific": { 00:33:30.497 "passthru": { 00:33:30.497 "name": "pt4", 00:33:30.497 "base_bdev_name": "malloc4" 00:33:30.497 } 00:33:30.497 } 00:33:30.497 }' 00:33:30.497 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:30.755 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:30.755 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:30.755 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:30.755 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:30.755 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:30.755 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:30.755 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:31.013 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:31.013 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:31.013 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:31.013 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:31.013 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:31.013 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:33:31.271 [2024-07-21 12:15:29.907918] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:31.271 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=2d666eea-5dbd-4ac2-a560-41b54b642405 00:33:31.271 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 2d666eea-5dbd-4ac2-a560-41b54b642405 ']' 00:33:31.271 12:15:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:31.272 [2024-07-21 12:15:30.119892] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:31.272 [2024-07-21 12:15:30.120058] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:31.272 [2024-07-21 12:15:30.120270] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:31.272 [2024-07-21 12:15:30.120489] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:31.272 [2024-07-21 12:15:30.120599] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009380 name raid_bdev1, state offline 00:33:31.272 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:31.272 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:33:31.838 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:33:31.838 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:33:31.838 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:31.838 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:31.838 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:31.838 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:32.096 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:32.096 12:15:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:32.354 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:33:32.354 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:33:32.611 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:33:32.611 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
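What follows is a negative check. With raid_bdev1 deleted and all passthru bdevs removed, the test asks for a brand-new raid5f volume directly on malloc1..malloc4 and expects the call to fail: each malloc bdev still carries the superblock written earlier through its pt wrapper, so the module logs "Superblock of a different raid bdev found" for every member and the RPC returns -17 (File exists). In sketch form, NOT being the autotest helper that inverts the exit status:

    NOT "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f \
        -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1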
00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:32.868 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:33:33.135 [2024-07-21 12:15:31.752081] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:33:33.135 [2024-07-21 12:15:31.754088] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:33:33.135 [2024-07-21 12:15:31.754310] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:33:33.135 [2024-07-21 12:15:31.754507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:33:33.135 [2024-07-21 12:15:31.754702] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:33:33.135 [2024-07-21 12:15:31.754921] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:33:33.135 [2024-07-21 12:15:31.755111] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:33:33.135 [2024-07-21 12:15:31.755278] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:33:33.135 [2024-07-21 12:15:31.755419] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:33.135 [2024-07-21 12:15:31.755548] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state configuring 00:33:33.135 request: 00:33:33.135 { 00:33:33.135 "name": "raid_bdev1", 00:33:33.135 "raid_level": "raid5f", 00:33:33.135 "base_bdevs": [ 00:33:33.135 "malloc1", 00:33:33.135 "malloc2", 00:33:33.135 "malloc3", 00:33:33.135 "malloc4" 00:33:33.135 ], 00:33:33.135 "superblock": false, 00:33:33.135 "strip_size_kb": 64, 00:33:33.135 "method": "bdev_raid_create", 00:33:33.135 "req_id": 1 00:33:33.135 } 00:33:33.135 Got JSON-RPC error response 00:33:33.135 response: 00:33:33.135 { 00:33:33.135 "code": -17, 00:33:33.135 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:33:33.135 } 00:33:33.135 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:33:33.135 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:33.135 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:33.135 12:15:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:33.135 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.135 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:33:33.135 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:33:33.135 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:33:33.135 12:15:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:33:33.393 [2024-07-21 12:15:32.184364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:33.393 [2024-07-21 12:15:32.184562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:33.393 [2024-07-21 12:15:32.184633] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:33.393 [2024-07-21 12:15:32.184893] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:33.393 [2024-07-21 12:15:32.186887] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:33.393 [2024-07-21 12:15:32.187095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:33.393 [2024-07-21 12:15:32.187278] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:33.393 [2024-07-21 12:15:32.187444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:33.393 pt1 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:33.393 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.651 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:33.651 "name": "raid_bdev1", 00:33:33.651 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:33.651 "strip_size_kb": 64, 00:33:33.651 "state": "configuring", 00:33:33.651 "raid_level": "raid5f", 00:33:33.651 "superblock": true, 00:33:33.651 "num_base_bdevs": 4, 00:33:33.651 "num_base_bdevs_discovered": 1, 00:33:33.651 "num_base_bdevs_operational": 4, 00:33:33.651 "base_bdevs_list": [ 00:33:33.651 { 00:33:33.651 "name": "pt1", 00:33:33.651 "uuid": "6178f4c5-402c-5cc7-8582-1918a6450662", 00:33:33.651 "is_configured": true, 00:33:33.651 "data_offset": 2048, 00:33:33.651 "data_size": 63488 00:33:33.651 }, 00:33:33.651 { 00:33:33.651 "name": null, 00:33:33.651 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:33.651 "is_configured": false, 00:33:33.651 "data_offset": 2048, 00:33:33.651 "data_size": 63488 00:33:33.651 }, 00:33:33.651 { 00:33:33.651 "name": null, 00:33:33.651 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:33.651 "is_configured": false, 
00:33:33.651 "data_offset": 2048, 00:33:33.651 "data_size": 63488 00:33:33.651 }, 00:33:33.651 { 00:33:33.651 "name": null, 00:33:33.651 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:33.651 "is_configured": false, 00:33:33.651 "data_offset": 2048, 00:33:33.651 "data_size": 63488 00:33:33.651 } 00:33:33.651 ] 00:33:33.651 }' 00:33:33.651 12:15:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:33.651 12:15:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:34.218 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:33:34.218 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:34.476 [2024-07-21 12:15:33.268601] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:34.476 [2024-07-21 12:15:33.268831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:34.476 [2024-07-21 12:15:33.268913] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:34.476 [2024-07-21 12:15:33.269213] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:34.476 [2024-07-21 12:15:33.269758] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:34.476 [2024-07-21 12:15:33.269936] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:34.476 [2024-07-21 12:15:33.270134] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:34.476 [2024-07-21 12:15:33.270256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:34.476 pt2 00:33:34.476 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:34.734 [2024-07-21 12:15:33.456660] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.734 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:34.993 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:34.993 "name": "raid_bdev1", 00:33:34.993 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:34.993 "strip_size_kb": 64, 00:33:34.993 "state": "configuring", 00:33:34.993 "raid_level": "raid5f", 00:33:34.993 "superblock": true, 00:33:34.993 "num_base_bdevs": 4, 00:33:34.993 "num_base_bdevs_discovered": 1, 00:33:34.993 "num_base_bdevs_operational": 4, 00:33:34.993 "base_bdevs_list": [ 00:33:34.993 { 00:33:34.993 "name": "pt1", 00:33:34.993 "uuid": "6178f4c5-402c-5cc7-8582-1918a6450662", 00:33:34.993 "is_configured": true, 00:33:34.993 "data_offset": 2048, 00:33:34.993 "data_size": 63488 00:33:34.993 }, 00:33:34.993 { 00:33:34.993 "name": null, 00:33:34.993 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:34.993 "is_configured": false, 00:33:34.993 "data_offset": 2048, 00:33:34.993 "data_size": 63488 00:33:34.993 }, 00:33:34.993 { 00:33:34.993 "name": null, 00:33:34.993 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:34.993 "is_configured": false, 00:33:34.993 "data_offset": 2048, 00:33:34.993 "data_size": 63488 00:33:34.993 }, 00:33:34.993 { 00:33:34.993 "name": null, 00:33:34.993 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:34.993 "is_configured": false, 00:33:34.993 "data_offset": 2048, 00:33:34.993 "data_size": 63488 00:33:34.993 } 00:33:34.993 ] 00:33:34.993 }' 00:33:34.993 12:15:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:34.993 12:15:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:35.559 12:15:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:33:35.559 12:15:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:35.559 12:15:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:35.817 [2024-07-21 12:15:34.672843] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:35.817 [2024-07-21 12:15:34.673039] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:35.817 [2024-07-21 12:15:34.673123] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:33:35.817 [2024-07-21 12:15:34.673371] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:35.817 [2024-07-21 12:15:34.673757] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:35.817 [2024-07-21 12:15:34.673936] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:35.817 [2024-07-21 12:15:34.674110] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:35.817 [2024-07-21 12:15:34.674246] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:35.817 pt2 00:33:36.076 12:15:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:36.076 12:15:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:36.076 12:15:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:36.076 
[2024-07-21 12:15:34.932883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:36.076 [2024-07-21 12:15:34.933071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.076 [2024-07-21 12:15:34.933155] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:33:36.076 [2024-07-21 12:15:34.933413] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.076 [2024-07-21 12:15:34.933895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.076 [2024-07-21 12:15:34.934064] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:36.076 [2024-07-21 12:15:34.934234] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:36.076 [2024-07-21 12:15:34.934380] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:36.076 pt3 00:33:36.334 12:15:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:36.334 12:15:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:36.334 12:15:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:36.334 [2024-07-21 12:15:35.184931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:36.334 [2024-07-21 12:15:35.185124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:36.334 [2024-07-21 12:15:35.185197] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:33:36.334 [2024-07-21 12:15:35.185326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:36.334 [2024-07-21 12:15:35.185697] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:36.334 [2024-07-21 12:15:35.185887] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:36.334 [2024-07-21 12:15:35.186059] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:36.334 [2024-07-21 12:15:35.186193] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:36.334 [2024-07-21 12:15:35.186432] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:33:36.334 [2024-07-21 12:15:35.186548] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:36.334 [2024-07-21 12:15:35.186739] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:36.334 [2024-07-21 12:15:35.187520] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:33:36.334 [2024-07-21 12:15:35.187651] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:33:36.334 [2024-07-21 12:15:35.187842] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:36.334 pt4 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:36.592 12:15:35 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:36.592 "name": "raid_bdev1", 00:33:36.592 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:36.592 "strip_size_kb": 64, 00:33:36.592 "state": "online", 00:33:36.592 "raid_level": "raid5f", 00:33:36.592 "superblock": true, 00:33:36.592 "num_base_bdevs": 4, 00:33:36.592 "num_base_bdevs_discovered": 4, 00:33:36.592 "num_base_bdevs_operational": 4, 00:33:36.592 "base_bdevs_list": [ 00:33:36.592 { 00:33:36.592 "name": "pt1", 00:33:36.592 "uuid": "6178f4c5-402c-5cc7-8582-1918a6450662", 00:33:36.592 "is_configured": true, 00:33:36.592 "data_offset": 2048, 00:33:36.592 "data_size": 63488 00:33:36.592 }, 00:33:36.592 { 00:33:36.592 "name": "pt2", 00:33:36.592 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:36.592 "is_configured": true, 00:33:36.592 "data_offset": 2048, 00:33:36.592 "data_size": 63488 00:33:36.592 }, 00:33:36.592 { 00:33:36.592 "name": "pt3", 00:33:36.592 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:36.592 "is_configured": true, 00:33:36.592 "data_offset": 2048, 00:33:36.592 "data_size": 63488 00:33:36.592 }, 00:33:36.592 { 00:33:36.592 "name": "pt4", 00:33:36.592 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:36.592 "is_configured": true, 00:33:36.592 "data_offset": 2048, 00:33:36.592 "data_size": 63488 00:33:36.592 } 00:33:36.592 ] 00:33:36.592 }' 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:36.592 12:15:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:37.169 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:33:37.169 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:33:37.169 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:33:37.169 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:33:37.169 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:33:37.169 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:33:37.169 
12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:37.169 12:15:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:33:37.427 [2024-07-21 12:15:36.242080] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:37.427 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:33:37.427 "name": "raid_bdev1", 00:33:37.427 "aliases": [ 00:33:37.427 "2d666eea-5dbd-4ac2-a560-41b54b642405" 00:33:37.427 ], 00:33:37.427 "product_name": "Raid Volume", 00:33:37.427 "block_size": 512, 00:33:37.427 "num_blocks": 190464, 00:33:37.427 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:37.427 "assigned_rate_limits": { 00:33:37.427 "rw_ios_per_sec": 0, 00:33:37.427 "rw_mbytes_per_sec": 0, 00:33:37.427 "r_mbytes_per_sec": 0, 00:33:37.427 "w_mbytes_per_sec": 0 00:33:37.427 }, 00:33:37.427 "claimed": false, 00:33:37.427 "zoned": false, 00:33:37.427 "supported_io_types": { 00:33:37.427 "read": true, 00:33:37.427 "write": true, 00:33:37.427 "unmap": false, 00:33:37.427 "write_zeroes": true, 00:33:37.427 "flush": false, 00:33:37.427 "reset": true, 00:33:37.427 "compare": false, 00:33:37.427 "compare_and_write": false, 00:33:37.427 "abort": false, 00:33:37.427 "nvme_admin": false, 00:33:37.427 "nvme_io": false 00:33:37.427 }, 00:33:37.427 "driver_specific": { 00:33:37.427 "raid": { 00:33:37.427 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:37.427 "strip_size_kb": 64, 00:33:37.427 "state": "online", 00:33:37.427 "raid_level": "raid5f", 00:33:37.427 "superblock": true, 00:33:37.427 "num_base_bdevs": 4, 00:33:37.427 "num_base_bdevs_discovered": 4, 00:33:37.427 "num_base_bdevs_operational": 4, 00:33:37.427 "base_bdevs_list": [ 00:33:37.427 { 00:33:37.427 "name": "pt1", 00:33:37.427 "uuid": "6178f4c5-402c-5cc7-8582-1918a6450662", 00:33:37.427 "is_configured": true, 00:33:37.427 "data_offset": 2048, 00:33:37.427 "data_size": 63488 00:33:37.427 }, 00:33:37.427 { 00:33:37.427 "name": "pt2", 00:33:37.427 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:37.427 "is_configured": true, 00:33:37.427 "data_offset": 2048, 00:33:37.427 "data_size": 63488 00:33:37.427 }, 00:33:37.427 { 00:33:37.427 "name": "pt3", 00:33:37.427 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:37.427 "is_configured": true, 00:33:37.427 "data_offset": 2048, 00:33:37.427 "data_size": 63488 00:33:37.427 }, 00:33:37.427 { 00:33:37.427 "name": "pt4", 00:33:37.427 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:37.427 "is_configured": true, 00:33:37.427 "data_offset": 2048, 00:33:37.427 "data_size": 63488 00:33:37.427 } 00:33:37.427 ] 00:33:37.427 } 00:33:37.427 } 00:33:37.427 }' 00:33:37.427 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:37.686 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:33:37.686 pt2 00:33:37.686 pt3 00:33:37.686 pt4' 00:33:37.686 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:37.686 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:37.686 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:33:37.686 12:15:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:37.686 "name": "pt1", 00:33:37.686 "aliases": [ 00:33:37.686 "6178f4c5-402c-5cc7-8582-1918a6450662" 00:33:37.686 ], 00:33:37.686 "product_name": "passthru", 00:33:37.686 "block_size": 512, 00:33:37.686 "num_blocks": 65536, 00:33:37.686 "uuid": "6178f4c5-402c-5cc7-8582-1918a6450662", 00:33:37.686 "assigned_rate_limits": { 00:33:37.686 "rw_ios_per_sec": 0, 00:33:37.686 "rw_mbytes_per_sec": 0, 00:33:37.686 "r_mbytes_per_sec": 0, 00:33:37.686 "w_mbytes_per_sec": 0 00:33:37.686 }, 00:33:37.686 "claimed": true, 00:33:37.686 "claim_type": "exclusive_write", 00:33:37.686 "zoned": false, 00:33:37.686 "supported_io_types": { 00:33:37.686 "read": true, 00:33:37.686 "write": true, 00:33:37.686 "unmap": true, 00:33:37.686 "write_zeroes": true, 00:33:37.686 "flush": true, 00:33:37.686 "reset": true, 00:33:37.686 "compare": false, 00:33:37.686 "compare_and_write": false, 00:33:37.686 "abort": true, 00:33:37.686 "nvme_admin": false, 00:33:37.686 "nvme_io": false 00:33:37.686 }, 00:33:37.686 "memory_domains": [ 00:33:37.686 { 00:33:37.686 "dma_device_id": "system", 00:33:37.686 "dma_device_type": 1 00:33:37.686 }, 00:33:37.686 { 00:33:37.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:37.686 "dma_device_type": 2 00:33:37.686 } 00:33:37.686 ], 00:33:37.686 "driver_specific": { 00:33:37.686 "passthru": { 00:33:37.686 "name": "pt1", 00:33:37.686 "base_bdev_name": "malloc1" 00:33:37.686 } 00:33:37.686 } 00:33:37.686 }' 00:33:37.686 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:37.944 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:37.944 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:37.944 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:37.944 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:37.944 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:37.944 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:37.944 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:38.202 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:38.202 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:38.202 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:38.202 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:38.202 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:38.202 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:33:38.202 12:15:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:38.460 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:38.460 "name": "pt2", 00:33:38.460 "aliases": [ 00:33:38.460 "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac" 00:33:38.460 ], 00:33:38.460 "product_name": "passthru", 00:33:38.460 "block_size": 512, 00:33:38.460 "num_blocks": 65536, 00:33:38.460 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:38.460 "assigned_rate_limits": { 00:33:38.460 
"rw_ios_per_sec": 0, 00:33:38.460 "rw_mbytes_per_sec": 0, 00:33:38.460 "r_mbytes_per_sec": 0, 00:33:38.460 "w_mbytes_per_sec": 0 00:33:38.460 }, 00:33:38.460 "claimed": true, 00:33:38.460 "claim_type": "exclusive_write", 00:33:38.460 "zoned": false, 00:33:38.460 "supported_io_types": { 00:33:38.460 "read": true, 00:33:38.460 "write": true, 00:33:38.460 "unmap": true, 00:33:38.460 "write_zeroes": true, 00:33:38.460 "flush": true, 00:33:38.460 "reset": true, 00:33:38.460 "compare": false, 00:33:38.460 "compare_and_write": false, 00:33:38.460 "abort": true, 00:33:38.460 "nvme_admin": false, 00:33:38.460 "nvme_io": false 00:33:38.460 }, 00:33:38.460 "memory_domains": [ 00:33:38.460 { 00:33:38.460 "dma_device_id": "system", 00:33:38.460 "dma_device_type": 1 00:33:38.460 }, 00:33:38.460 { 00:33:38.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:38.460 "dma_device_type": 2 00:33:38.460 } 00:33:38.460 ], 00:33:38.460 "driver_specific": { 00:33:38.460 "passthru": { 00:33:38.460 "name": "pt2", 00:33:38.460 "base_bdev_name": "malloc2" 00:33:38.460 } 00:33:38.460 } 00:33:38.460 }' 00:33:38.460 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:38.460 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:38.460 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:38.460 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:38.460 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:38.460 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:38.460 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:38.718 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:38.718 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:38.718 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:38.718 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:38.718 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:38.718 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:38.718 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:33:38.718 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:38.976 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:38.976 "name": "pt3", 00:33:38.976 "aliases": [ 00:33:38.976 "d989c8bf-ef76-537f-b124-0e4e13640b74" 00:33:38.976 ], 00:33:38.976 "product_name": "passthru", 00:33:38.976 "block_size": 512, 00:33:38.976 "num_blocks": 65536, 00:33:38.976 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:38.976 "assigned_rate_limits": { 00:33:38.976 "rw_ios_per_sec": 0, 00:33:38.976 "rw_mbytes_per_sec": 0, 00:33:38.976 "r_mbytes_per_sec": 0, 00:33:38.976 "w_mbytes_per_sec": 0 00:33:38.976 }, 00:33:38.976 "claimed": true, 00:33:38.976 "claim_type": "exclusive_write", 00:33:38.976 "zoned": false, 00:33:38.976 "supported_io_types": { 00:33:38.976 "read": true, 00:33:38.976 "write": true, 00:33:38.976 "unmap": true, 00:33:38.976 "write_zeroes": true, 00:33:38.976 
"flush": true, 00:33:38.976 "reset": true, 00:33:38.976 "compare": false, 00:33:38.976 "compare_and_write": false, 00:33:38.976 "abort": true, 00:33:38.976 "nvme_admin": false, 00:33:38.976 "nvme_io": false 00:33:38.976 }, 00:33:38.976 "memory_domains": [ 00:33:38.976 { 00:33:38.976 "dma_device_id": "system", 00:33:38.976 "dma_device_type": 1 00:33:38.976 }, 00:33:38.976 { 00:33:38.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:38.976 "dma_device_type": 2 00:33:38.976 } 00:33:38.976 ], 00:33:38.976 "driver_specific": { 00:33:38.976 "passthru": { 00:33:38.976 "name": "pt3", 00:33:38.976 "base_bdev_name": "malloc3" 00:33:38.976 } 00:33:38.976 } 00:33:38.976 }' 00:33:38.976 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:38.976 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:39.233 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:39.234 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:39.234 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:39.234 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:39.234 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:39.234 12:15:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:39.234 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:39.234 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:39.492 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:39.492 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:39.492 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:33:39.492 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:33:39.492 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:33:39.492 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:33:39.492 "name": "pt4", 00:33:39.492 "aliases": [ 00:33:39.492 "70c3917d-a850-5147-9a19-1d1c02bc1f8e" 00:33:39.492 ], 00:33:39.492 "product_name": "passthru", 00:33:39.492 "block_size": 512, 00:33:39.492 "num_blocks": 65536, 00:33:39.492 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:39.492 "assigned_rate_limits": { 00:33:39.492 "rw_ios_per_sec": 0, 00:33:39.492 "rw_mbytes_per_sec": 0, 00:33:39.492 "r_mbytes_per_sec": 0, 00:33:39.492 "w_mbytes_per_sec": 0 00:33:39.492 }, 00:33:39.492 "claimed": true, 00:33:39.492 "claim_type": "exclusive_write", 00:33:39.492 "zoned": false, 00:33:39.492 "supported_io_types": { 00:33:39.492 "read": true, 00:33:39.492 "write": true, 00:33:39.492 "unmap": true, 00:33:39.492 "write_zeroes": true, 00:33:39.492 "flush": true, 00:33:39.492 "reset": true, 00:33:39.492 "compare": false, 00:33:39.492 "compare_and_write": false, 00:33:39.492 "abort": true, 00:33:39.492 "nvme_admin": false, 00:33:39.492 "nvme_io": false 00:33:39.492 }, 00:33:39.492 "memory_domains": [ 00:33:39.492 { 00:33:39.492 "dma_device_id": "system", 00:33:39.492 "dma_device_type": 1 00:33:39.492 }, 00:33:39.492 { 00:33:39.492 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:33:39.492 "dma_device_type": 2 00:33:39.492 } 00:33:39.492 ], 00:33:39.492 "driver_specific": { 00:33:39.492 "passthru": { 00:33:39.492 "name": "pt4", 00:33:39.492 "base_bdev_name": "malloc4" 00:33:39.492 } 00:33:39.492 } 00:33:39.492 }' 00:33:39.492 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:39.750 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:33:39.750 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:33:39.750 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:39.750 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:33:39.750 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:33:39.750 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:39.750 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:33:40.008 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:33:40.008 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:40.008 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:33:40.008 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:33:40.008 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:40.008 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:33:40.265 [2024-07-21 12:15:38.986600] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:40.265 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 2d666eea-5dbd-4ac2-a560-41b54b642405 '!=' 2d666eea-5dbd-4ac2-a560-41b54b642405 ']' 00:33:40.265 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid5f 00:33:40.265 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:33:40.265 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:33:40.265 12:15:38 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:33:40.523 [2024-07-21 12:15:39.246544] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:40.523 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.781 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:40.781 "name": "raid_bdev1", 00:33:40.781 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:40.781 "strip_size_kb": 64, 00:33:40.781 "state": "online", 00:33:40.781 "raid_level": "raid5f", 00:33:40.781 "superblock": true, 00:33:40.781 "num_base_bdevs": 4, 00:33:40.781 "num_base_bdevs_discovered": 3, 00:33:40.781 "num_base_bdevs_operational": 3, 00:33:40.781 "base_bdevs_list": [ 00:33:40.781 { 00:33:40.781 "name": null, 00:33:40.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.781 "is_configured": false, 00:33:40.781 "data_offset": 2048, 00:33:40.781 "data_size": 63488 00:33:40.781 }, 00:33:40.781 { 00:33:40.781 "name": "pt2", 00:33:40.781 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:40.781 "is_configured": true, 00:33:40.781 "data_offset": 2048, 00:33:40.781 "data_size": 63488 00:33:40.781 }, 00:33:40.781 { 00:33:40.781 "name": "pt3", 00:33:40.781 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:40.781 "is_configured": true, 00:33:40.781 "data_offset": 2048, 00:33:40.781 "data_size": 63488 00:33:40.781 }, 00:33:40.781 { 00:33:40.781 "name": "pt4", 00:33:40.781 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:40.781 "is_configured": true, 00:33:40.781 "data_offset": 2048, 00:33:40.781 "data_size": 63488 00:33:40.781 } 00:33:40.781 ] 00:33:40.781 }' 00:33:40.781 12:15:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:40.781 12:15:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:41.347 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:41.347 [2024-07-21 12:15:40.210742] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:41.347 [2024-07-21 12:15:40.210776] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:41.347 [2024-07-21 12:15:40.210866] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:41.347 [2024-07-21 12:15:40.210996] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:41.347 [2024-07-21 12:15:40.211009] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:33:41.605 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:41.605 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:33:41.605 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:33:41.605 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:33:41.605 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 
1 )) 00:33:41.605 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:41.605 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:33:41.862 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:41.862 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:41.862 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:33:42.119 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:42.119 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:42.119 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:33:42.119 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:33:42.119 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:33:42.119 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:33:42.119 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:42.120 12:15:40 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:33:42.377 [2024-07-21 12:15:41.142871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:33:42.377 [2024-07-21 12:15:41.142963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:42.377 [2024-07-21 12:15:41.142997] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:33:42.377 [2024-07-21 12:15:41.143031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:42.377 [2024-07-21 12:15:41.145512] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:42.377 [2024-07-21 12:15:41.145580] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:33:42.377 [2024-07-21 12:15:41.145673] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:33:42.377 [2024-07-21 12:15:41.145719] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:42.377 pt2 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:42.377 12:15:41 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:42.377 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.635 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:42.635 "name": "raid_bdev1", 00:33:42.635 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:42.635 "strip_size_kb": 64, 00:33:42.635 "state": "configuring", 00:33:42.635 "raid_level": "raid5f", 00:33:42.635 "superblock": true, 00:33:42.635 "num_base_bdevs": 4, 00:33:42.635 "num_base_bdevs_discovered": 1, 00:33:42.635 "num_base_bdevs_operational": 3, 00:33:42.636 "base_bdevs_list": [ 00:33:42.636 { 00:33:42.636 "name": null, 00:33:42.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.636 "is_configured": false, 00:33:42.636 "data_offset": 2048, 00:33:42.636 "data_size": 63488 00:33:42.636 }, 00:33:42.636 { 00:33:42.636 "name": "pt2", 00:33:42.636 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:42.636 "is_configured": true, 00:33:42.636 "data_offset": 2048, 00:33:42.636 "data_size": 63488 00:33:42.636 }, 00:33:42.636 { 00:33:42.636 "name": null, 00:33:42.636 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:42.636 "is_configured": false, 00:33:42.636 "data_offset": 2048, 00:33:42.636 "data_size": 63488 00:33:42.636 }, 00:33:42.636 { 00:33:42.636 "name": null, 00:33:42.636 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:42.636 "is_configured": false, 00:33:42.636 "data_offset": 2048, 00:33:42.636 "data_size": 63488 00:33:42.636 } 00:33:42.636 ] 00:33:42.636 }' 00:33:42.636 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:42.636 12:15:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:43.202 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:33:43.202 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:43.202 12:15:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:33:43.460 [2024-07-21 12:15:42.136043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:33:43.460 [2024-07-21 12:15:42.136092] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.460 [2024-07-21 12:15:42.136126] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:33:43.460 [2024-07-21 12:15:42.136146] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.460 [2024-07-21 12:15:42.136507] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.460 [2024-07-21 12:15:42.136549] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:33:43.460 [2024-07-21 12:15:42.136613] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:33:43.460 [2024-07-21 
12:15:42.136633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:43.460 pt3 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:43.460 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:43.719 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:43.719 "name": "raid_bdev1", 00:33:43.719 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:43.719 "strip_size_kb": 64, 00:33:43.719 "state": "configuring", 00:33:43.719 "raid_level": "raid5f", 00:33:43.719 "superblock": true, 00:33:43.719 "num_base_bdevs": 4, 00:33:43.719 "num_base_bdevs_discovered": 2, 00:33:43.719 "num_base_bdevs_operational": 3, 00:33:43.719 "base_bdevs_list": [ 00:33:43.719 { 00:33:43.719 "name": null, 00:33:43.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:43.719 "is_configured": false, 00:33:43.719 "data_offset": 2048, 00:33:43.719 "data_size": 63488 00:33:43.719 }, 00:33:43.719 { 00:33:43.719 "name": "pt2", 00:33:43.719 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:43.719 "is_configured": true, 00:33:43.719 "data_offset": 2048, 00:33:43.719 "data_size": 63488 00:33:43.719 }, 00:33:43.719 { 00:33:43.719 "name": "pt3", 00:33:43.719 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:43.719 "is_configured": true, 00:33:43.719 "data_offset": 2048, 00:33:43.719 "data_size": 63488 00:33:43.719 }, 00:33:43.719 { 00:33:43.719 "name": null, 00:33:43.719 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:43.719 "is_configured": false, 00:33:43.719 "data_offset": 2048, 00:33:43.719 "data_size": 63488 00:33:43.719 } 00:33:43.719 ] 00:33:43.719 }' 00:33:43.719 12:15:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:43.719 12:15:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:44.286 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:33:44.286 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:33:44.286 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:33:44.286 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:44.550 [2024-07-21 12:15:43.196232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:44.550 [2024-07-21 12:15:43.196292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:44.550 [2024-07-21 12:15:43.196333] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:33:44.550 [2024-07-21 12:15:43.196352] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:44.550 [2024-07-21 12:15:43.196685] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:44.550 [2024-07-21 12:15:43.196718] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:44.550 [2024-07-21 12:15:43.196779] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:44.550 [2024-07-21 12:15:43.196806] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:44.550 [2024-07-21 12:15:43.196921] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:33:44.550 [2024-07-21 12:15:43.196933] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:44.550 [2024-07-21 12:15:43.196995] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:33:44.550 [2024-07-21 12:15:43.197761] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:33:44.550 [2024-07-21 12:15:43.197775] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:33:44.550 [2024-07-21 12:15:43.197997] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:44.550 pt4 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:44.550 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:44.551 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:44.551 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.551 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:44.551 "name": "raid_bdev1", 00:33:44.551 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 
00:33:44.551 "strip_size_kb": 64, 00:33:44.551 "state": "online", 00:33:44.551 "raid_level": "raid5f", 00:33:44.551 "superblock": true, 00:33:44.551 "num_base_bdevs": 4, 00:33:44.551 "num_base_bdevs_discovered": 3, 00:33:44.551 "num_base_bdevs_operational": 3, 00:33:44.551 "base_bdevs_list": [ 00:33:44.551 { 00:33:44.551 "name": null, 00:33:44.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.551 "is_configured": false, 00:33:44.551 "data_offset": 2048, 00:33:44.551 "data_size": 63488 00:33:44.551 }, 00:33:44.551 { 00:33:44.551 "name": "pt2", 00:33:44.551 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:44.551 "is_configured": true, 00:33:44.551 "data_offset": 2048, 00:33:44.551 "data_size": 63488 00:33:44.551 }, 00:33:44.551 { 00:33:44.551 "name": "pt3", 00:33:44.551 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:44.551 "is_configured": true, 00:33:44.551 "data_offset": 2048, 00:33:44.551 "data_size": 63488 00:33:44.551 }, 00:33:44.551 { 00:33:44.551 "name": "pt4", 00:33:44.551 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:44.551 "is_configured": true, 00:33:44.551 "data_offset": 2048, 00:33:44.551 "data_size": 63488 00:33:44.551 } 00:33:44.551 ] 00:33:44.551 }' 00:33:44.551 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:44.551 12:15:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:45.484 12:15:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:33:45.484 [2024-07-21 12:15:44.256563] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:45.484 [2024-07-21 12:15:44.256599] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:45.484 [2024-07-21 12:15:44.256681] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:45.484 [2024-07-21 12:15:44.256761] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:45.484 [2024-07-21 12:15:44.256773] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:33:45.484 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:45.484 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:33:45.742 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:33:45.742 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:33:45.742 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:33:45.742 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:33:45.742 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:33:46.000 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:33:46.258 [2024-07-21 12:15:44.906904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:33:46.258 [2024-07-21 12:15:44.907372] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:33:46.258 [2024-07-21 12:15:44.907544] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:33:46.258 [2024-07-21 12:15:44.907687] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:46.258 [2024-07-21 12:15:44.909885] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:46.258 [2024-07-21 12:15:44.910072] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:33:46.258 [2024-07-21 12:15:44.910278] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:33:46.258 [2024-07-21 12:15:44.910321] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:33:46.258 [2024-07-21 12:15:44.910519] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:33:46.258 [2024-07-21 12:15:44.910546] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:46.258 [2024-07-21 12:15:44.910591] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c980 name raid_bdev1, state configuring 00:33:46.258 [2024-07-21 12:15:44.910652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:33:46.258 [2024-07-21 12:15:44.910802] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:33:46.258 pt1 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.258 12:15:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:46.258 12:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:46.258 "name": "raid_bdev1", 00:33:46.258 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:46.258 "strip_size_kb": 64, 00:33:46.258 "state": "configuring", 00:33:46.258 "raid_level": "raid5f", 00:33:46.258 "superblock": true, 00:33:46.258 "num_base_bdevs": 4, 00:33:46.258 "num_base_bdevs_discovered": 2, 00:33:46.258 "num_base_bdevs_operational": 3, 00:33:46.258 "base_bdevs_list": [ 00:33:46.258 { 00:33:46.258 "name": 
null, 00:33:46.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.258 "is_configured": false, 00:33:46.258 "data_offset": 2048, 00:33:46.258 "data_size": 63488 00:33:46.258 }, 00:33:46.258 { 00:33:46.258 "name": "pt2", 00:33:46.258 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:46.258 "is_configured": true, 00:33:46.258 "data_offset": 2048, 00:33:46.258 "data_size": 63488 00:33:46.258 }, 00:33:46.258 { 00:33:46.258 "name": "pt3", 00:33:46.258 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:46.258 "is_configured": true, 00:33:46.258 "data_offset": 2048, 00:33:46.258 "data_size": 63488 00:33:46.258 }, 00:33:46.258 { 00:33:46.258 "name": null, 00:33:46.258 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:46.258 "is_configured": false, 00:33:46.258 "data_offset": 2048, 00:33:46.258 "data_size": 63488 00:33:46.258 } 00:33:46.258 ] 00:33:46.258 }' 00:33:46.258 12:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:46.258 12:15:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:47.191 12:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:33:47.191 12:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:47.191 12:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:33:47.191 12:15:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:33:47.450 [2024-07-21 12:15:46.120710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:33:47.450 [2024-07-21 12:15:46.120795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:47.450 [2024-07-21 12:15:46.120832] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:33:47.450 [2024-07-21 12:15:46.120859] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:47.450 [2024-07-21 12:15:46.121357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:47.450 [2024-07-21 12:15:46.121433] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:33:47.450 [2024-07-21 12:15:46.121528] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:33:47.450 [2024-07-21 12:15:46.121554] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:33:47.450 [2024-07-21 12:15:46.121695] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000cc80 00:33:47.450 [2024-07-21 12:15:46.121715] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:33:47.450 [2024-07-21 12:15:46.121797] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:33:47.450 [2024-07-21 12:15:46.122550] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000cc80 00:33:47.450 [2024-07-21 12:15:46.122585] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000cc80 00:33:47.450 [2024-07-21 12:15:46.122751] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:47.450 pt4 00:33:47.450 12:15:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:47.450 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.709 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:47.709 "name": "raid_bdev1", 00:33:47.709 "uuid": "2d666eea-5dbd-4ac2-a560-41b54b642405", 00:33:47.709 "strip_size_kb": 64, 00:33:47.709 "state": "online", 00:33:47.709 "raid_level": "raid5f", 00:33:47.709 "superblock": true, 00:33:47.709 "num_base_bdevs": 4, 00:33:47.709 "num_base_bdevs_discovered": 3, 00:33:47.709 "num_base_bdevs_operational": 3, 00:33:47.709 "base_bdevs_list": [ 00:33:47.709 { 00:33:47.709 "name": null, 00:33:47.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.709 "is_configured": false, 00:33:47.709 "data_offset": 2048, 00:33:47.709 "data_size": 63488 00:33:47.709 }, 00:33:47.709 { 00:33:47.709 "name": "pt2", 00:33:47.709 "uuid": "e0532a4d-7f5e-5a5b-908b-6a3adaad6eac", 00:33:47.709 "is_configured": true, 00:33:47.709 "data_offset": 2048, 00:33:47.709 "data_size": 63488 00:33:47.709 }, 00:33:47.709 { 00:33:47.709 "name": "pt3", 00:33:47.709 "uuid": "d989c8bf-ef76-537f-b124-0e4e13640b74", 00:33:47.709 "is_configured": true, 00:33:47.709 "data_offset": 2048, 00:33:47.709 "data_size": 63488 00:33:47.709 }, 00:33:47.709 { 00:33:47.709 "name": "pt4", 00:33:47.709 "uuid": "70c3917d-a850-5147-9a19-1d1c02bc1f8e", 00:33:47.709 "is_configured": true, 00:33:47.709 "data_offset": 2048, 00:33:47.709 "data_size": 63488 00:33:47.709 } 00:33:47.709 ] 00:33:47.709 }' 00:33:47.709 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:47.709 12:15:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:48.274 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:33:48.275 12:15:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:33:48.532 12:15:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:33:48.532 12:15:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 
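The @557 step that starts here re-reads raid_bdev1 after the array has been deleted and reassembled from the superblocks found on its base bdevs, and compares the reported UUID against the value captured when the volume was first created. A rough by-hand equivalent, with the socket, bdev name and UUID taken from this trace (variable names are illustrative):

    expected_uuid=2d666eea-5dbd-4ac2-a560-41b54b642405   # UUID reported for raid_bdev1 earlier in the test
    actual_uuid=$(./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    # The on-disk superblock is what lets the identity survive reassembly, so the two must match
    [ "$expected_uuid" = "$actual_uuid" ]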
00:33:48.532 12:15:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:48.532 [2024-07-21 12:15:47.389073] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 2d666eea-5dbd-4ac2-a560-41b54b642405 '!=' 2d666eea-5dbd-4ac2-a560-41b54b642405 ']' 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 166485 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@946 -- # '[' -z 166485 ']' 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # kill -0 166485 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # uname 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 166485 00:33:48.791 killing process with pid 166485 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 166485' 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@965 -- # kill 166485 00:33:48.791 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # wait 166485 00:33:48.791 [2024-07-21 12:15:47.431323] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:48.791 [2024-07-21 12:15:47.431397] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:48.791 [2024-07-21 12:15:47.431503] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:48.791 [2024-07-21 12:15:47.431520] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state offline 00:33:48.791 [2024-07-21 12:15:47.469686] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:49.050 ************************************ 00:33:49.050 END TEST raid5f_superblock_test 00:33:49.050 ************************************ 00:33:49.050 12:15:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:33:49.050 00:33:49.050 real 0m24.860s 00:33:49.050 user 0m47.284s 00:33:49.050 sys 0m2.848s 00:33:49.050 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:33:49.050 12:15:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:33:49.050 12:15:47 bdev_raid -- bdev/bdev_raid.sh@889 -- # '[' true = true ']' 00:33:49.050 12:15:47 bdev_raid -- bdev/bdev_raid.sh@890 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:33:49.050 12:15:47 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:33:49.050 12:15:47 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:33:49.050 12:15:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:49.050 ************************************ 00:33:49.050 START TEST raid5f_rebuild_test 00:33:49.050 ************************************ 
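run_test hands raid_rebuild_test five positional arguments (raid5f 4 false false true); the local assignments traced at @568 through @572 below show where each one lands. A sketch of that mapping, reconstructed from the trace rather than quoted from bdev_raid.sh:

    raid_rebuild_test() {
        local raid_level=$1          # raid5f
        local num_base_bdevs=$2      # 4
        local superblock=$3          # false: the array is created without an on-disk superblock
        local background_io=$4       # false: no extra I/O load while rebuilding
        local verify=$5              # true
        # ...then builds four BaseBdevN malloc+passthru stacks, a delayed spare, and the raid5f volume
    }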
00:33:49.050 12:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 4 false false true 00:33:49.050 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:33:49.050 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:33:49.050 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local superblock=false 00:33:49.050 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local verify=true 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local strip_size 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local create_arg 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local data_offset 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@591 -- # '[' false = true ']' 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # raid_pid=167317 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # waitforlisten 167317 /var/tmp/spdk-raid.sock 
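The rebuild test runs against a bdevperf instance rather than a bare SPDK app: the @595 entry in the trace shows it being launched with its own RPC socket, and waitforlisten (pid 167317 here) blocks until that socket answers. A stand-alone approximation, run from the SPDK repo root, with the flag values copied from the trace; the polling loop only mimics what waitforlisten does and is not its actual implementation:

    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Poll the UNIX-domain RPC socket until the target is ready to accept bdev_* RPCs
    until ./scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done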
00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@827 -- # '[' -z 167317 ']' 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@832 -- # local max_retries=100 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:33:49.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # xtrace_disable 00:33:49.051 12:15:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:49.051 [2024-07-21 12:15:47.822453] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:33:49.051 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:49.051 Zero copy mechanism will not be used. 00:33:49.051 [2024-07-21 12:15:47.822732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167317 ] 00:33:49.310 [2024-07-21 12:15:47.987176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.310 [2024-07-21 12:15:48.058182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.310 [2024-07-21 12:15:48.129192] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:49.878 12:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:33:49.878 12:15:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # return 0 00:33:49.878 12:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:49.878 12:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:50.136 BaseBdev1_malloc 00:33:50.136 12:15:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:50.393 [2024-07-21 12:15:49.187593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:50.393 [2024-07-21 12:15:49.187706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:50.393 [2024-07-21 12:15:49.187750] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:33:50.393 [2024-07-21 12:15:49.187799] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:50.393 [2024-07-21 12:15:49.190241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:50.393 [2024-07-21 12:15:49.190298] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:50.393 BaseBdev1 00:33:50.393 12:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in 
"${base_bdevs[@]}" 00:33:50.393 12:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:50.651 BaseBdev2_malloc 00:33:50.651 12:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:50.909 [2024-07-21 12:15:49.645540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:50.909 [2024-07-21 12:15:49.645601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:50.909 [2024-07-21 12:15:49.645661] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:33:50.909 [2024-07-21 12:15:49.645702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:50.909 [2024-07-21 12:15:49.648039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:50.909 [2024-07-21 12:15:49.648086] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:50.909 BaseBdev2 00:33:50.909 12:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:50.909 12:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:51.167 BaseBdev3_malloc 00:33:51.167 12:15:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:51.425 [2024-07-21 12:15:50.071523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:51.425 [2024-07-21 12:15:50.071628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:51.425 [2024-07-21 12:15:50.071680] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:33:51.425 [2024-07-21 12:15:50.071727] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:51.425 [2024-07-21 12:15:50.074208] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.425 [2024-07-21 12:15:50.074261] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:51.425 BaseBdev3 00:33:51.425 12:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:33:51.425 12:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:51.425 BaseBdev4_malloc 00:33:51.425 12:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:51.685 [2024-07-21 12:15:50.476705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:51.685 [2024-07-21 12:15:50.476783] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:51.685 [2024-07-21 12:15:50.476817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:33:51.685 [2024-07-21 12:15:50.476864] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:33:51.685 [2024-07-21 12:15:50.479409] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:51.685 [2024-07-21 12:15:50.479461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:51.685 BaseBdev4 00:33:51.685 12:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:33:51.955 spare_malloc 00:33:51.955 12:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:52.230 spare_delay 00:33:52.230 12:15:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:33:52.488 [2024-07-21 12:15:51.161943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:52.488 [2024-07-21 12:15:51.162023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:52.488 [2024-07-21 12:15:51.162056] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:33:52.488 [2024-07-21 12:15:51.162099] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:52.488 [2024-07-21 12:15:51.164299] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:52.488 [2024-07-21 12:15:51.164352] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:52.488 spare 00:33:52.488 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:33:52.747 [2024-07-21 12:15:51.370036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:52.747 [2024-07-21 12:15:51.372072] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:52.747 [2024-07-21 12:15:51.372160] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:52.747 [2024-07-21 12:15:51.372218] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:52.747 [2024-07-21 12:15:51.372316] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:33:52.747 [2024-07-21 12:15:51.372329] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:33:52.747 [2024-07-21 12:15:51.372468] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:33:52.747 [2024-07-21 12:15:51.373358] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:33:52.747 [2024-07-21 12:15:51.373380] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:33:52.747 [2024-07-21 12:15:51.373559] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:52.747 "name": "raid_bdev1", 00:33:52.747 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:33:52.747 "strip_size_kb": 64, 00:33:52.747 "state": "online", 00:33:52.747 "raid_level": "raid5f", 00:33:52.747 "superblock": false, 00:33:52.747 "num_base_bdevs": 4, 00:33:52.747 "num_base_bdevs_discovered": 4, 00:33:52.747 "num_base_bdevs_operational": 4, 00:33:52.747 "base_bdevs_list": [ 00:33:52.747 { 00:33:52.747 "name": "BaseBdev1", 00:33:52.747 "uuid": "c02f754b-08ab-5e9c-91e9-3ede51b6fb30", 00:33:52.747 "is_configured": true, 00:33:52.747 "data_offset": 0, 00:33:52.747 "data_size": 65536 00:33:52.747 }, 00:33:52.747 { 00:33:52.747 "name": "BaseBdev2", 00:33:52.747 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:33:52.747 "is_configured": true, 00:33:52.747 "data_offset": 0, 00:33:52.747 "data_size": 65536 00:33:52.747 }, 00:33:52.747 { 00:33:52.747 "name": "BaseBdev3", 00:33:52.747 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:33:52.747 "is_configured": true, 00:33:52.747 "data_offset": 0, 00:33:52.747 "data_size": 65536 00:33:52.747 }, 00:33:52.747 { 00:33:52.747 "name": "BaseBdev4", 00:33:52.747 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:33:52.747 "is_configured": true, 00:33:52.747 "data_offset": 0, 00:33:52.747 "data_size": 65536 00:33:52.747 } 00:33:52.747 ] 00:33:52.747 }' 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:52.747 12:15:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.681 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:33:53.681 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:33:53.681 [2024-07-21 12:15:52.452577] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:53.681 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=196608 00:33:53.681 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:53.681 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:53.939 12:15:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@618 -- # data_offset=0 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:53.939 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:33:54.196 [2024-07-21 12:15:52.936560] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:54.196 /dev/nbd0 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:33:54.196 12:15:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:54.196 1+0 records in 00:33:54.196 1+0 records out 00:33:54.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272071 s, 15.1 MB/s 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:33:54.196 12:15:53 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # echo 192 00:33:54.196 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:33:54.769 512+0 records in 00:33:54.769 512+0 records out 00:33:54.769 100663296 bytes (101 MB, 96 MiB) copied, 0.538948 s, 187 MB/s 00:33:54.769 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:33:54.769 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:33:54.769 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:54.769 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:54.769 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:33:54.770 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:54.770 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:33:55.027 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:55.027 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:55.027 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:55.027 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:55.027 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:55.027 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:55.027 [2024-07-21 12:15:53.779790] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:55.027 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:33:55.027 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:33:55.028 12:15:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:33:55.285 [2024-07-21 12:15:54.039396] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=3 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:55.285 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:55.544 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:55.544 "name": "raid_bdev1", 00:33:55.544 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:33:55.544 "strip_size_kb": 64, 00:33:55.544 "state": "online", 00:33:55.544 "raid_level": "raid5f", 00:33:55.544 "superblock": false, 00:33:55.544 "num_base_bdevs": 4, 00:33:55.544 "num_base_bdevs_discovered": 3, 00:33:55.544 "num_base_bdevs_operational": 3, 00:33:55.544 "base_bdevs_list": [ 00:33:55.544 { 00:33:55.544 "name": null, 00:33:55.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.544 "is_configured": false, 00:33:55.544 "data_offset": 0, 00:33:55.544 "data_size": 65536 00:33:55.544 }, 00:33:55.544 { 00:33:55.544 "name": "BaseBdev2", 00:33:55.544 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:33:55.544 "is_configured": true, 00:33:55.544 "data_offset": 0, 00:33:55.544 "data_size": 65536 00:33:55.544 }, 00:33:55.544 { 00:33:55.544 "name": "BaseBdev3", 00:33:55.544 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:33:55.544 "is_configured": true, 00:33:55.544 "data_offset": 0, 00:33:55.544 "data_size": 65536 00:33:55.544 }, 00:33:55.544 { 00:33:55.544 "name": "BaseBdev4", 00:33:55.544 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:33:55.544 "is_configured": true, 00:33:55.544 "data_offset": 0, 00:33:55.544 "data_size": 65536 00:33:55.544 } 00:33:55.544 ] 00:33:55.544 }' 00:33:55.544 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:55.544 12:15:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.110 12:15:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:56.368 [2024-07-21 12:15:55.179602] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:56.368 [2024-07-21 12:15:55.183964] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b410 00:33:56.368 [2024-07-21 12:15:55.186426] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:56.368 12:15:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # sleep 1 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:57.742 "name": "raid_bdev1", 00:33:57.742 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:33:57.742 "strip_size_kb": 64, 00:33:57.742 "state": "online", 00:33:57.742 "raid_level": "raid5f", 00:33:57.742 "superblock": false, 00:33:57.742 "num_base_bdevs": 4, 00:33:57.742 "num_base_bdevs_discovered": 4, 00:33:57.742 "num_base_bdevs_operational": 4, 00:33:57.742 "process": { 00:33:57.742 "type": "rebuild", 00:33:57.742 "target": "spare", 00:33:57.742 "progress": { 00:33:57.742 "blocks": 23040, 00:33:57.742 "percent": 11 00:33:57.742 } 00:33:57.742 }, 00:33:57.742 "base_bdevs_list": [ 00:33:57.742 { 00:33:57.742 "name": "spare", 00:33:57.742 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:33:57.742 "is_configured": true, 00:33:57.742 "data_offset": 0, 00:33:57.742 "data_size": 65536 00:33:57.742 }, 00:33:57.742 { 00:33:57.742 "name": "BaseBdev2", 00:33:57.742 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:33:57.742 "is_configured": true, 00:33:57.742 "data_offset": 0, 00:33:57.742 "data_size": 65536 00:33:57.742 }, 00:33:57.742 { 00:33:57.742 "name": "BaseBdev3", 00:33:57.742 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:33:57.742 "is_configured": true, 00:33:57.742 "data_offset": 0, 00:33:57.742 "data_size": 65536 00:33:57.742 }, 00:33:57.742 { 00:33:57.742 "name": "BaseBdev4", 00:33:57.742 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:33:57.742 "is_configured": true, 00:33:57.742 "data_offset": 0, 00:33:57.742 "data_size": 65536 00:33:57.742 } 00:33:57.742 ] 00:33:57.742 }' 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:33:57.742 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:33:58.000 [2024-07-21 12:15:56.781369] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:58.000 [2024-07-21 12:15:56.797524] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:58.000 [2024-07-21 12:15:56.797725] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:58.000 [2024-07-21 12:15:56.797783] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:58.000 [2024-07-21 12:15:56.797908] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:58.000 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:33:58.000 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:33:58.000 12:15:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:33:58.000 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:33:58.000 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:33:58.000 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:33:58.001 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:33:58.001 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:33:58.001 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:33:58.001 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:33:58.001 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:58.001 12:15:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.259 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:33:58.259 "name": "raid_bdev1", 00:33:58.259 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:33:58.259 "strip_size_kb": 64, 00:33:58.259 "state": "online", 00:33:58.259 "raid_level": "raid5f", 00:33:58.259 "superblock": false, 00:33:58.259 "num_base_bdevs": 4, 00:33:58.259 "num_base_bdevs_discovered": 3, 00:33:58.259 "num_base_bdevs_operational": 3, 00:33:58.259 "base_bdevs_list": [ 00:33:58.259 { 00:33:58.259 "name": null, 00:33:58.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:58.259 "is_configured": false, 00:33:58.259 "data_offset": 0, 00:33:58.259 "data_size": 65536 00:33:58.259 }, 00:33:58.259 { 00:33:58.259 "name": "BaseBdev2", 00:33:58.259 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:33:58.259 "is_configured": true, 00:33:58.259 "data_offset": 0, 00:33:58.259 "data_size": 65536 00:33:58.259 }, 00:33:58.259 { 00:33:58.259 "name": "BaseBdev3", 00:33:58.259 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:33:58.259 "is_configured": true, 00:33:58.259 "data_offset": 0, 00:33:58.259 "data_size": 65536 00:33:58.259 }, 00:33:58.259 { 00:33:58.259 "name": "BaseBdev4", 00:33:58.259 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:33:58.259 "is_configured": true, 00:33:58.259 "data_offset": 0, 00:33:58.259 "data_size": 65536 00:33:58.259 } 00:33:58.259 ] 00:33:58.259 }' 00:33:58.259 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:33:58.259 12:15:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.825 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:58.825 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:33:58.825 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:33:58.825 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:33:58.825 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:33:58.825 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:33:58.825 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:33:59.084 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:59.084 "name": "raid_bdev1", 00:33:59.084 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:33:59.084 "strip_size_kb": 64, 00:33:59.084 "state": "online", 00:33:59.084 "raid_level": "raid5f", 00:33:59.084 "superblock": false, 00:33:59.084 "num_base_bdevs": 4, 00:33:59.084 "num_base_bdevs_discovered": 3, 00:33:59.084 "num_base_bdevs_operational": 3, 00:33:59.084 "base_bdevs_list": [ 00:33:59.084 { 00:33:59.084 "name": null, 00:33:59.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:59.084 "is_configured": false, 00:33:59.084 "data_offset": 0, 00:33:59.084 "data_size": 65536 00:33:59.084 }, 00:33:59.084 { 00:33:59.084 "name": "BaseBdev2", 00:33:59.084 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:33:59.084 "is_configured": true, 00:33:59.084 "data_offset": 0, 00:33:59.084 "data_size": 65536 00:33:59.084 }, 00:33:59.084 { 00:33:59.084 "name": "BaseBdev3", 00:33:59.084 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:33:59.084 "is_configured": true, 00:33:59.084 "data_offset": 0, 00:33:59.084 "data_size": 65536 00:33:59.084 }, 00:33:59.084 { 00:33:59.084 "name": "BaseBdev4", 00:33:59.084 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:33:59.084 "is_configured": true, 00:33:59.084 "data_offset": 0, 00:33:59.084 "data_size": 65536 00:33:59.084 } 00:33:59.084 ] 00:33:59.084 }' 00:33:59.084 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:33:59.084 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:33:59.084 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:33:59.341 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:33:59.341 12:15:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:33:59.599 [2024-07-21 12:15:58.236334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:59.599 [2024-07-21 12:15:58.240379] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:33:59.599 [2024-07-21 12:15:58.242684] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:59.599 12:15:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # sleep 1 00:34:00.531 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:00.531 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:00.531 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:00.531 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:00.531 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:00.531 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.531 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:00.789 "name": "raid_bdev1", 00:34:00.789 
"uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:00.789 "strip_size_kb": 64, 00:34:00.789 "state": "online", 00:34:00.789 "raid_level": "raid5f", 00:34:00.789 "superblock": false, 00:34:00.789 "num_base_bdevs": 4, 00:34:00.789 "num_base_bdevs_discovered": 4, 00:34:00.789 "num_base_bdevs_operational": 4, 00:34:00.789 "process": { 00:34:00.789 "type": "rebuild", 00:34:00.789 "target": "spare", 00:34:00.789 "progress": { 00:34:00.789 "blocks": 23040, 00:34:00.789 "percent": 11 00:34:00.789 } 00:34:00.789 }, 00:34:00.789 "base_bdevs_list": [ 00:34:00.789 { 00:34:00.789 "name": "spare", 00:34:00.789 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:00.789 "is_configured": true, 00:34:00.789 "data_offset": 0, 00:34:00.789 "data_size": 65536 00:34:00.789 }, 00:34:00.789 { 00:34:00.789 "name": "BaseBdev2", 00:34:00.789 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:00.789 "is_configured": true, 00:34:00.789 "data_offset": 0, 00:34:00.789 "data_size": 65536 00:34:00.789 }, 00:34:00.789 { 00:34:00.789 "name": "BaseBdev3", 00:34:00.789 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:00.789 "is_configured": true, 00:34:00.789 "data_offset": 0, 00:34:00.789 "data_size": 65536 00:34:00.789 }, 00:34:00.789 { 00:34:00.789 "name": "BaseBdev4", 00:34:00.789 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:00.789 "is_configured": true, 00:34:00.789 "data_offset": 0, 00:34:00.789 "data_size": 65536 00:34:00.789 } 00:34:00.789 ] 00:34:00.789 }' 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@665 -- # '[' false = true ']' 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@705 -- # local timeout=1244 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:00.789 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:01.046 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:01.046 "name": "raid_bdev1", 00:34:01.046 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:01.046 "strip_size_kb": 64, 
00:34:01.046 "state": "online", 00:34:01.046 "raid_level": "raid5f", 00:34:01.046 "superblock": false, 00:34:01.046 "num_base_bdevs": 4, 00:34:01.046 "num_base_bdevs_discovered": 4, 00:34:01.046 "num_base_bdevs_operational": 4, 00:34:01.046 "process": { 00:34:01.046 "type": "rebuild", 00:34:01.046 "target": "spare", 00:34:01.046 "progress": { 00:34:01.046 "blocks": 28800, 00:34:01.046 "percent": 14 00:34:01.046 } 00:34:01.046 }, 00:34:01.046 "base_bdevs_list": [ 00:34:01.046 { 00:34:01.046 "name": "spare", 00:34:01.046 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:01.046 "is_configured": true, 00:34:01.046 "data_offset": 0, 00:34:01.046 "data_size": 65536 00:34:01.046 }, 00:34:01.046 { 00:34:01.046 "name": "BaseBdev2", 00:34:01.046 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:01.046 "is_configured": true, 00:34:01.046 "data_offset": 0, 00:34:01.046 "data_size": 65536 00:34:01.046 }, 00:34:01.046 { 00:34:01.046 "name": "BaseBdev3", 00:34:01.046 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:01.046 "is_configured": true, 00:34:01.046 "data_offset": 0, 00:34:01.046 "data_size": 65536 00:34:01.046 }, 00:34:01.046 { 00:34:01.046 "name": "BaseBdev4", 00:34:01.046 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:01.046 "is_configured": true, 00:34:01.046 "data_offset": 0, 00:34:01.046 "data_size": 65536 00:34:01.046 } 00:34:01.046 ] 00:34:01.046 }' 00:34:01.046 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:01.046 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:01.046 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:01.046 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:01.046 12:15:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:02.465 12:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:02.465 12:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:02.465 12:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:02.465 12:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:02.465 12:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:02.465 12:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:02.465 12:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:02.465 12:16:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:02.465 12:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:02.465 "name": "raid_bdev1", 00:34:02.465 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:02.465 "strip_size_kb": 64, 00:34:02.465 "state": "online", 00:34:02.465 "raid_level": "raid5f", 00:34:02.465 "superblock": false, 00:34:02.465 "num_base_bdevs": 4, 00:34:02.465 "num_base_bdevs_discovered": 4, 00:34:02.465 "num_base_bdevs_operational": 4, 00:34:02.465 "process": { 00:34:02.465 "type": "rebuild", 00:34:02.465 "target": "spare", 00:34:02.465 "progress": { 00:34:02.465 "blocks": 53760, 00:34:02.465 "percent": 27 00:34:02.465 } 
00:34:02.465 }, 00:34:02.465 "base_bdevs_list": [ 00:34:02.465 { 00:34:02.465 "name": "spare", 00:34:02.465 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:02.465 "is_configured": true, 00:34:02.465 "data_offset": 0, 00:34:02.465 "data_size": 65536 00:34:02.465 }, 00:34:02.465 { 00:34:02.465 "name": "BaseBdev2", 00:34:02.465 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:02.465 "is_configured": true, 00:34:02.465 "data_offset": 0, 00:34:02.465 "data_size": 65536 00:34:02.465 }, 00:34:02.465 { 00:34:02.465 "name": "BaseBdev3", 00:34:02.465 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:02.465 "is_configured": true, 00:34:02.465 "data_offset": 0, 00:34:02.465 "data_size": 65536 00:34:02.465 }, 00:34:02.465 { 00:34:02.465 "name": "BaseBdev4", 00:34:02.465 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:02.465 "is_configured": true, 00:34:02.465 "data_offset": 0, 00:34:02.465 "data_size": 65536 00:34:02.465 } 00:34:02.465 ] 00:34:02.465 }' 00:34:02.465 12:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:02.465 12:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:02.465 12:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:02.465 12:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:02.465 12:16:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:03.399 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:03.399 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:03.399 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:03.399 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:03.399 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:03.399 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:03.399 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:03.399 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:03.657 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:03.657 "name": "raid_bdev1", 00:34:03.657 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:03.657 "strip_size_kb": 64, 00:34:03.657 "state": "online", 00:34:03.657 "raid_level": "raid5f", 00:34:03.657 "superblock": false, 00:34:03.657 "num_base_bdevs": 4, 00:34:03.657 "num_base_bdevs_discovered": 4, 00:34:03.657 "num_base_bdevs_operational": 4, 00:34:03.657 "process": { 00:34:03.657 "type": "rebuild", 00:34:03.657 "target": "spare", 00:34:03.657 "progress": { 00:34:03.657 "blocks": 78720, 00:34:03.657 "percent": 40 00:34:03.657 } 00:34:03.657 }, 00:34:03.657 "base_bdevs_list": [ 00:34:03.657 { 00:34:03.657 "name": "spare", 00:34:03.657 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:03.657 "is_configured": true, 00:34:03.657 "data_offset": 0, 00:34:03.657 "data_size": 65536 00:34:03.657 }, 00:34:03.657 { 00:34:03.657 "name": "BaseBdev2", 00:34:03.657 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:03.657 "is_configured": true, 
00:34:03.657 "data_offset": 0, 00:34:03.657 "data_size": 65536 00:34:03.657 }, 00:34:03.657 { 00:34:03.657 "name": "BaseBdev3", 00:34:03.657 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:03.657 "is_configured": true, 00:34:03.657 "data_offset": 0, 00:34:03.657 "data_size": 65536 00:34:03.657 }, 00:34:03.657 { 00:34:03.657 "name": "BaseBdev4", 00:34:03.657 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:03.657 "is_configured": true, 00:34:03.657 "data_offset": 0, 00:34:03.657 "data_size": 65536 00:34:03.657 } 00:34:03.657 ] 00:34:03.657 }' 00:34:03.657 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:03.657 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:03.657 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:03.916 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:03.916 12:16:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:04.848 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:04.848 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:04.848 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:04.848 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:04.848 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:04.848 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:04.848 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:04.848 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:05.106 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:05.106 "name": "raid_bdev1", 00:34:05.106 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:05.106 "strip_size_kb": 64, 00:34:05.106 "state": "online", 00:34:05.106 "raid_level": "raid5f", 00:34:05.106 "superblock": false, 00:34:05.106 "num_base_bdevs": 4, 00:34:05.106 "num_base_bdevs_discovered": 4, 00:34:05.106 "num_base_bdevs_operational": 4, 00:34:05.106 "process": { 00:34:05.106 "type": "rebuild", 00:34:05.106 "target": "spare", 00:34:05.106 "progress": { 00:34:05.106 "blocks": 105600, 00:34:05.106 "percent": 53 00:34:05.106 } 00:34:05.106 }, 00:34:05.106 "base_bdevs_list": [ 00:34:05.106 { 00:34:05.106 "name": "spare", 00:34:05.106 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:05.106 "is_configured": true, 00:34:05.106 "data_offset": 0, 00:34:05.106 "data_size": 65536 00:34:05.106 }, 00:34:05.106 { 00:34:05.106 "name": "BaseBdev2", 00:34:05.106 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:05.106 "is_configured": true, 00:34:05.106 "data_offset": 0, 00:34:05.106 "data_size": 65536 00:34:05.106 }, 00:34:05.106 { 00:34:05.106 "name": "BaseBdev3", 00:34:05.106 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:05.106 "is_configured": true, 00:34:05.106 "data_offset": 0, 00:34:05.106 "data_size": 65536 00:34:05.106 }, 00:34:05.106 { 00:34:05.106 "name": "BaseBdev4", 00:34:05.106 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 
00:34:05.106 "is_configured": true, 00:34:05.106 "data_offset": 0, 00:34:05.106 "data_size": 65536 00:34:05.106 } 00:34:05.106 ] 00:34:05.106 }' 00:34:05.106 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:05.106 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:05.106 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:05.106 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:05.106 12:16:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:06.037 12:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:06.037 12:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:06.037 12:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:06.037 12:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:06.037 12:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:06.037 12:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:06.037 12:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:06.037 12:16:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:06.294 12:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:06.294 "name": "raid_bdev1", 00:34:06.294 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:06.294 "strip_size_kb": 64, 00:34:06.294 "state": "online", 00:34:06.294 "raid_level": "raid5f", 00:34:06.294 "superblock": false, 00:34:06.294 "num_base_bdevs": 4, 00:34:06.294 "num_base_bdevs_discovered": 4, 00:34:06.294 "num_base_bdevs_operational": 4, 00:34:06.294 "process": { 00:34:06.294 "type": "rebuild", 00:34:06.294 "target": "spare", 00:34:06.294 "progress": { 00:34:06.294 "blocks": 130560, 00:34:06.294 "percent": 66 00:34:06.294 } 00:34:06.294 }, 00:34:06.294 "base_bdevs_list": [ 00:34:06.294 { 00:34:06.294 "name": "spare", 00:34:06.294 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:06.294 "is_configured": true, 00:34:06.294 "data_offset": 0, 00:34:06.294 "data_size": 65536 00:34:06.294 }, 00:34:06.294 { 00:34:06.294 "name": "BaseBdev2", 00:34:06.294 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:06.294 "is_configured": true, 00:34:06.294 "data_offset": 0, 00:34:06.294 "data_size": 65536 00:34:06.294 }, 00:34:06.294 { 00:34:06.294 "name": "BaseBdev3", 00:34:06.294 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:06.294 "is_configured": true, 00:34:06.294 "data_offset": 0, 00:34:06.294 "data_size": 65536 00:34:06.294 }, 00:34:06.294 { 00:34:06.294 "name": "BaseBdev4", 00:34:06.294 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:06.294 "is_configured": true, 00:34:06.294 "data_offset": 0, 00:34:06.294 "data_size": 65536 00:34:06.294 } 00:34:06.294 ] 00:34:06.294 }' 00:34:06.294 12:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:06.551 12:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:06.551 12:16:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:06.551 12:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:06.551 12:16:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:07.483 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:07.483 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:07.483 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:07.484 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:07.484 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:07.484 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:07.484 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:07.484 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:07.741 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:07.741 "name": "raid_bdev1", 00:34:07.741 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:07.741 "strip_size_kb": 64, 00:34:07.741 "state": "online", 00:34:07.741 "raid_level": "raid5f", 00:34:07.741 "superblock": false, 00:34:07.741 "num_base_bdevs": 4, 00:34:07.741 "num_base_bdevs_discovered": 4, 00:34:07.741 "num_base_bdevs_operational": 4, 00:34:07.741 "process": { 00:34:07.741 "type": "rebuild", 00:34:07.741 "target": "spare", 00:34:07.741 "progress": { 00:34:07.741 "blocks": 155520, 00:34:07.741 "percent": 79 00:34:07.741 } 00:34:07.741 }, 00:34:07.741 "base_bdevs_list": [ 00:34:07.741 { 00:34:07.741 "name": "spare", 00:34:07.741 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:07.741 "is_configured": true, 00:34:07.741 "data_offset": 0, 00:34:07.741 "data_size": 65536 00:34:07.741 }, 00:34:07.741 { 00:34:07.741 "name": "BaseBdev2", 00:34:07.741 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:07.741 "is_configured": true, 00:34:07.741 "data_offset": 0, 00:34:07.741 "data_size": 65536 00:34:07.741 }, 00:34:07.741 { 00:34:07.741 "name": "BaseBdev3", 00:34:07.741 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:07.741 "is_configured": true, 00:34:07.741 "data_offset": 0, 00:34:07.741 "data_size": 65536 00:34:07.741 }, 00:34:07.741 { 00:34:07.741 "name": "BaseBdev4", 00:34:07.741 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:07.741 "is_configured": true, 00:34:07.741 "data_offset": 0, 00:34:07.741 "data_size": 65536 00:34:07.741 } 00:34:07.741 ] 00:34:07.741 }' 00:34:07.741 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:07.741 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:07.741 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:07.741 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:07.741 12:16:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:09.115 12:16:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:09.115 "name": "raid_bdev1", 00:34:09.115 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:09.115 "strip_size_kb": 64, 00:34:09.115 "state": "online", 00:34:09.115 "raid_level": "raid5f", 00:34:09.115 "superblock": false, 00:34:09.115 "num_base_bdevs": 4, 00:34:09.115 "num_base_bdevs_discovered": 4, 00:34:09.115 "num_base_bdevs_operational": 4, 00:34:09.115 "process": { 00:34:09.115 "type": "rebuild", 00:34:09.115 "target": "spare", 00:34:09.115 "progress": { 00:34:09.115 "blocks": 180480, 00:34:09.115 "percent": 91 00:34:09.115 } 00:34:09.115 }, 00:34:09.115 "base_bdevs_list": [ 00:34:09.115 { 00:34:09.115 "name": "spare", 00:34:09.115 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:09.115 "is_configured": true, 00:34:09.115 "data_offset": 0, 00:34:09.115 "data_size": 65536 00:34:09.115 }, 00:34:09.115 { 00:34:09.115 "name": "BaseBdev2", 00:34:09.115 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:09.115 "is_configured": true, 00:34:09.115 "data_offset": 0, 00:34:09.115 "data_size": 65536 00:34:09.115 }, 00:34:09.115 { 00:34:09.115 "name": "BaseBdev3", 00:34:09.115 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:09.115 "is_configured": true, 00:34:09.115 "data_offset": 0, 00:34:09.115 "data_size": 65536 00:34:09.115 }, 00:34:09.115 { 00:34:09.115 "name": "BaseBdev4", 00:34:09.115 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:09.115 "is_configured": true, 00:34:09.115 "data_offset": 0, 00:34:09.115 "data_size": 65536 00:34:09.115 } 00:34:09.115 ] 00:34:09.115 }' 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:09.115 12:16:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:10.051 [2024-07-21 12:16:08.610305] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:10.051 [2024-07-21 12:16:08.610514] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:10.051 [2024-07-21 12:16:08.610750] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:10.051 12:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:10.051 12:16:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:10.051 12:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:10.051 12:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:10.051 12:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:10.051 12:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:10.051 12:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.051 12:16:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:10.309 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:10.309 "name": "raid_bdev1", 00:34:10.309 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:10.309 "strip_size_kb": 64, 00:34:10.309 "state": "online", 00:34:10.309 "raid_level": "raid5f", 00:34:10.309 "superblock": false, 00:34:10.309 "num_base_bdevs": 4, 00:34:10.309 "num_base_bdevs_discovered": 4, 00:34:10.309 "num_base_bdevs_operational": 4, 00:34:10.309 "base_bdevs_list": [ 00:34:10.309 { 00:34:10.309 "name": "spare", 00:34:10.309 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:10.309 "is_configured": true, 00:34:10.309 "data_offset": 0, 00:34:10.309 "data_size": 65536 00:34:10.309 }, 00:34:10.309 { 00:34:10.309 "name": "BaseBdev2", 00:34:10.309 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:10.309 "is_configured": true, 00:34:10.309 "data_offset": 0, 00:34:10.309 "data_size": 65536 00:34:10.309 }, 00:34:10.309 { 00:34:10.309 "name": "BaseBdev3", 00:34:10.309 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:10.310 "is_configured": true, 00:34:10.310 "data_offset": 0, 00:34:10.310 "data_size": 65536 00:34:10.310 }, 00:34:10.310 { 00:34:10.310 "name": "BaseBdev4", 00:34:10.310 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:10.310 "is_configured": true, 00:34:10.310 "data_offset": 0, 00:34:10.310 "data_size": 65536 00:34:10.310 } 00:34:10.310 ] 00:34:10.310 }' 00:34:10.310 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # break 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.569 12:16:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:10.828 "name": "raid_bdev1", 00:34:10.828 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:10.828 "strip_size_kb": 64, 00:34:10.828 "state": "online", 00:34:10.828 "raid_level": "raid5f", 00:34:10.828 "superblock": false, 00:34:10.828 "num_base_bdevs": 4, 00:34:10.828 "num_base_bdevs_discovered": 4, 00:34:10.828 "num_base_bdevs_operational": 4, 00:34:10.828 "base_bdevs_list": [ 00:34:10.828 { 00:34:10.828 "name": "spare", 00:34:10.828 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:10.828 "is_configured": true, 00:34:10.828 "data_offset": 0, 00:34:10.828 "data_size": 65536 00:34:10.828 }, 00:34:10.828 { 00:34:10.828 "name": "BaseBdev2", 00:34:10.828 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:10.828 "is_configured": true, 00:34:10.828 "data_offset": 0, 00:34:10.828 "data_size": 65536 00:34:10.828 }, 00:34:10.828 { 00:34:10.828 "name": "BaseBdev3", 00:34:10.828 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:10.828 "is_configured": true, 00:34:10.828 "data_offset": 0, 00:34:10.828 "data_size": 65536 00:34:10.828 }, 00:34:10.828 { 00:34:10.828 "name": "BaseBdev4", 00:34:10.828 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:10.828 "is_configured": true, 00:34:10.828 "data_offset": 0, 00:34:10.828 "data_size": 65536 00:34:10.828 } 00:34:10.828 ] 00:34:10.828 }' 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:10.828 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.087 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:11.087 "name": "raid_bdev1", 00:34:11.087 "uuid": "a025c9f9-2691-43ee-ad91-9639ef007232", 00:34:11.087 "strip_size_kb": 
64, 00:34:11.087 "state": "online", 00:34:11.087 "raid_level": "raid5f", 00:34:11.087 "superblock": false, 00:34:11.087 "num_base_bdevs": 4, 00:34:11.087 "num_base_bdevs_discovered": 4, 00:34:11.087 "num_base_bdevs_operational": 4, 00:34:11.087 "base_bdevs_list": [ 00:34:11.087 { 00:34:11.087 "name": "spare", 00:34:11.087 "uuid": "20b21a84-089b-5b63-aa9c-22e408c53321", 00:34:11.087 "is_configured": true, 00:34:11.087 "data_offset": 0, 00:34:11.087 "data_size": 65536 00:34:11.087 }, 00:34:11.087 { 00:34:11.087 "name": "BaseBdev2", 00:34:11.087 "uuid": "2518082c-ed6f-5b10-828f-89ce7c65162a", 00:34:11.087 "is_configured": true, 00:34:11.087 "data_offset": 0, 00:34:11.087 "data_size": 65536 00:34:11.087 }, 00:34:11.087 { 00:34:11.087 "name": "BaseBdev3", 00:34:11.087 "uuid": "cb8994f4-c918-545d-8ef3-536a7522e41c", 00:34:11.087 "is_configured": true, 00:34:11.087 "data_offset": 0, 00:34:11.087 "data_size": 65536 00:34:11.087 }, 00:34:11.087 { 00:34:11.087 "name": "BaseBdev4", 00:34:11.087 "uuid": "1d68abd9-15b0-5ebe-b16a-90f6dc448fff", 00:34:11.087 "is_configured": true, 00:34:11.087 "data_offset": 0, 00:34:11.087 "data_size": 65536 00:34:11.087 } 00:34:11.087 ] 00:34:11.087 }' 00:34:11.087 12:16:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:11.087 12:16:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.655 12:16:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:11.914 [2024-07-21 12:16:10.742955] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:11.914 [2024-07-21 12:16:10.743118] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:11.914 [2024-07-21 12:16:10.743353] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:11.914 [2024-07-21 12:16:10.743588] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:11.914 [2024-07-21 12:16:10.743709] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:34:11.914 12:16:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:11.914 12:16:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # jq length 00:34:12.172 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:12.173 12:16:11 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:12.173 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:12.432 /dev/nbd0 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:12.432 1+0 records in 00:34:12.432 1+0 records out 00:34:12.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323957 s, 12.6 MB/s 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:12.432 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:34:12.691 /dev/nbd1 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@865 -- # local i 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 
00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # break 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:12.691 1+0 records in 00:34:12.691 1+0 records out 00:34:12.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588062 s, 7.0 MB/s 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@882 -- # size=4096 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # return 0 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:12.691 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:34:12.950 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:34:12.950 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:12.950 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:12.950 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:12.950 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:34:12.950 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:12.951 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:13.210 12:16:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@742 -- # '[' false = true ']' 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@782 -- # killprocess 167317 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@946 -- # '[' -z 167317 ']' 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # kill -0 167317 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # uname 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:13.210 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 167317 00:34:13.470 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:13.470 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:13.470 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # echo 'killing process with pid 167317' 00:34:13.470 killing process with pid 167317 00:34:13.470 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@965 -- # kill 167317 00:34:13.470 Received shutdown signal, test time was about 60.000000 seconds 00:34:13.470 00:34:13.470 Latency(us) 00:34:13.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:13.470 =================================================================================================================== 00:34:13.470 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:13.470 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # wait 167317 00:34:13.470 [2024-07-21 12:16:12.089519] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:13.470 [2024-07-21 12:16:12.145859] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # return 0 00:34:13.729 00:34:13.729 real 0m24.723s 00:34:13.729 user 0m36.673s 00:34:13.729 sys 0m3.056s 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.729 ************************************ 00:34:13.729 END TEST raid5f_rebuild_test 00:34:13.729 ************************************ 00:34:13.729 12:16:12 bdev_raid -- bdev/bdev_raid.sh@891 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:34:13.729 12:16:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:34:13.729 12:16:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:13.729 12:16:12 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:34:13.729 ************************************ 00:34:13.729 START TEST raid5f_rebuild_test_sb 00:34:13.729 ************************************ 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid5f 4 true false true 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@568 -- # local raid_level=raid5f 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=4 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local verify=true 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev3 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # echo BaseBdev4 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local strip_size 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local create_arg 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local data_offset 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@580 -- # '[' raid5f '!=' raid1 ']' 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' false = true ']' 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@585 -- # strip_size=64 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # create_arg+=' -z 64' 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # raid_pid=167923 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # waitforlisten 167923 /var/tmp/spdk-raid.sock 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@827 -- # '[' -z 167923 ']' 00:34:13.729 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:13.730 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:13.730 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:13.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:13.730 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:13.730 12:16:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:13.989 [2024-07-21 12:16:12.616753] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:13.989 [2024-07-21 12:16:12.617242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid167923 ] 00:34:13.989 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:13.989 Zero copy mechanism will not be used. 
00:34:13.989 [2024-07-21 12:16:12.780843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.253 [2024-07-21 12:16:12.859085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.253 [2024-07-21 12:16:12.932391] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:14.819 12:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:14.819 12:16:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # return 0 00:34:14.819 12:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:14.819 12:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:15.077 BaseBdev1_malloc 00:34:15.077 12:16:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:15.334 [2024-07-21 12:16:14.009761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:15.334 [2024-07-21 12:16:14.010005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:15.334 [2024-07-21 12:16:14.010212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:34:15.334 [2024-07-21 12:16:14.010374] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:15.334 [2024-07-21 12:16:14.012934] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:15.334 [2024-07-21 12:16:14.013120] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:15.334 BaseBdev1 00:34:15.334 12:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:15.334 12:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:15.592 BaseBdev2_malloc 00:34:15.592 12:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:15.592 [2024-07-21 12:16:14.451587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:15.592 [2024-07-21 12:16:14.451830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:15.592 [2024-07-21 12:16:14.452007] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:34:15.592 [2024-07-21 12:16:14.452156] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:15.592 [2024-07-21 12:16:14.454599] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:15.592 [2024-07-21 12:16:14.454797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:15.592 BaseBdev2 00:34:15.850 12:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:15.850 12:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:15.850 BaseBdev3_malloc 00:34:15.850 12:16:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:34:16.108 [2024-07-21 12:16:14.860930] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:34:16.108 [2024-07-21 12:16:14.861151] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:16.108 [2024-07-21 12:16:14.861266] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:34:16.108 [2024-07-21 12:16:14.861548] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:16.108 [2024-07-21 12:16:14.863771] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:16.108 [2024-07-21 12:16:14.863961] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:16.108 BaseBdev3 00:34:16.108 12:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:34:16.108 12:16:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:34:16.366 BaseBdev4_malloc 00:34:16.366 12:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:34:16.624 [2024-07-21 12:16:15.266629] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:34:16.624 [2024-07-21 12:16:15.266852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:16.624 [2024-07-21 12:16:15.266928] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:34:16.624 [2024-07-21 12:16:15.267188] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:16.624 [2024-07-21 12:16:15.269736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:16.624 [2024-07-21 12:16:15.269906] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:34:16.624 BaseBdev4 00:34:16.624 12:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:34:16.624 spare_malloc 00:34:16.624 12:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:16.881 spare_delay 00:34:16.881 12:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:17.150 [2024-07-21 12:16:15.847914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:17.150 [2024-07-21 12:16:15.848122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:17.150 [2024-07-21 12:16:15.848192] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:34:17.150 [2024-07-21 12:16:15.848329] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:17.150 [2024-07-21 12:16:15.850760] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:34:17.150 [2024-07-21 12:16:15.850955] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:17.150 spare 00:34:17.150 12:16:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:34:17.422 [2024-07-21 12:16:16.052051] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:17.422 [2024-07-21 12:16:16.054231] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:17.422 [2024-07-21 12:16:16.054439] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:17.422 [2024-07-21 12:16:16.054539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:17.422 [2024-07-21 12:16:16.054887] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:34:17.422 [2024-07-21 12:16:16.054953] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:17.422 [2024-07-21 12:16:16.055198] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:34:17.422 [2024-07-21 12:16:16.056133] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:34:17.422 [2024-07-21 12:16:16.056274] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:34:17.422 [2024-07-21 12:16:16.056590] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:17.422 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:17.422 "name": "raid_bdev1", 00:34:17.422 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:17.422 "strip_size_kb": 64, 00:34:17.422 "state": "online", 00:34:17.422 "raid_level": "raid5f", 00:34:17.422 "superblock": true, 00:34:17.422 "num_base_bdevs": 4, 00:34:17.423 "num_base_bdevs_discovered": 4, 00:34:17.423 
"num_base_bdevs_operational": 4, 00:34:17.423 "base_bdevs_list": [ 00:34:17.423 { 00:34:17.423 "name": "BaseBdev1", 00:34:17.423 "uuid": "247210f7-d79c-5ebe-9000-75f6bcdb9660", 00:34:17.423 "is_configured": true, 00:34:17.423 "data_offset": 2048, 00:34:17.423 "data_size": 63488 00:34:17.423 }, 00:34:17.423 { 00:34:17.423 "name": "BaseBdev2", 00:34:17.423 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:17.423 "is_configured": true, 00:34:17.423 "data_offset": 2048, 00:34:17.423 "data_size": 63488 00:34:17.423 }, 00:34:17.423 { 00:34:17.423 "name": "BaseBdev3", 00:34:17.423 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:17.423 "is_configured": true, 00:34:17.423 "data_offset": 2048, 00:34:17.423 "data_size": 63488 00:34:17.423 }, 00:34:17.423 { 00:34:17.423 "name": "BaseBdev4", 00:34:17.423 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:17.423 "is_configured": true, 00:34:17.423 "data_offset": 2048, 00:34:17.423 "data_size": 63488 00:34:17.423 } 00:34:17.423 ] 00:34:17.423 }' 00:34:17.423 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:17.423 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:18.357 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:34:18.357 12:16:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:34:18.357 [2024-07-21 12:16:17.096955] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:18.357 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=190464 00:34:18.357 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:18.357 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@618 -- # data_offset=2048 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:18.614 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:18.871 [2024-07-21 12:16:17.602900] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:18.871 /dev/nbd0 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:18.871 1+0 records in 00:34:18.871 1+0 records out 00:34:18.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436593 s, 9.4 MB/s 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # '[' raid5f = raid5f ']' 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # write_unit_size=384 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # echo 192 00:34:18.871 12:16:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:34:19.450 496+0 records in 00:34:19.450 496+0 records out 00:34:19.450 97517568 bytes (98 MB, 93 MiB) copied, 0.556267 s, 175 MB/s 00:34:19.450 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:34:19.450 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:19.450 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:19.450 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:19.450 12:16:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:19.450 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:19.450 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:19.708 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:19.708 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:19.708 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:19.708 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:19.708 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:19.708 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:19.708 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:19.708 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:19.708 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:34:19.708 [2024-07-21 12:16:18.434258] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:19.966 [2024-07-21 12:16:18.693935] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.966 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:20.222 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:20.222 "name": "raid_bdev1", 00:34:20.222 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:20.222 "strip_size_kb": 64, 00:34:20.222 "state": "online", 00:34:20.222 "raid_level": "raid5f", 00:34:20.222 "superblock": true, 00:34:20.222 "num_base_bdevs": 4, 00:34:20.222 "num_base_bdevs_discovered": 3, 00:34:20.222 "num_base_bdevs_operational": 3, 00:34:20.222 "base_bdevs_list": [ 00:34:20.222 { 00:34:20.222 "name": 
null, 00:34:20.222 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.222 "is_configured": false, 00:34:20.222 "data_offset": 2048, 00:34:20.222 "data_size": 63488 00:34:20.222 }, 00:34:20.222 { 00:34:20.222 "name": "BaseBdev2", 00:34:20.222 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:20.222 "is_configured": true, 00:34:20.222 "data_offset": 2048, 00:34:20.222 "data_size": 63488 00:34:20.222 }, 00:34:20.222 { 00:34:20.222 "name": "BaseBdev3", 00:34:20.222 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:20.222 "is_configured": true, 00:34:20.222 "data_offset": 2048, 00:34:20.222 "data_size": 63488 00:34:20.222 }, 00:34:20.222 { 00:34:20.222 "name": "BaseBdev4", 00:34:20.222 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:20.222 "is_configured": true, 00:34:20.222 "data_offset": 2048, 00:34:20.222 "data_size": 63488 00:34:20.222 } 00:34:20.222 ] 00:34:20.222 }' 00:34:20.222 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:20.222 12:16:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:20.787 12:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:21.045 [2024-07-21 12:16:19.830155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:21.045 [2024-07-21 12:16:19.835792] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a710 00:34:21.045 [2024-07-21 12:16:19.838457] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:21.045 12:16:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # sleep 1 00:34:22.419 12:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:22.419 12:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:22.419 12:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:22.419 12:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:22.419 12:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:22.419 12:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.419 12:16:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.419 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:22.419 "name": "raid_bdev1", 00:34:22.419 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:22.419 "strip_size_kb": 64, 00:34:22.419 "state": "online", 00:34:22.419 "raid_level": "raid5f", 00:34:22.419 "superblock": true, 00:34:22.419 "num_base_bdevs": 4, 00:34:22.419 "num_base_bdevs_discovered": 4, 00:34:22.419 "num_base_bdevs_operational": 4, 00:34:22.419 "process": { 00:34:22.419 "type": "rebuild", 00:34:22.419 "target": "spare", 00:34:22.419 "progress": { 00:34:22.419 "blocks": 23040, 00:34:22.419 "percent": 12 00:34:22.419 } 00:34:22.419 }, 00:34:22.419 "base_bdevs_list": [ 00:34:22.419 { 00:34:22.419 "name": "spare", 00:34:22.419 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:22.419 "is_configured": true, 00:34:22.419 "data_offset": 2048, 00:34:22.420 
"data_size": 63488 00:34:22.420 }, 00:34:22.420 { 00:34:22.420 "name": "BaseBdev2", 00:34:22.420 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:22.420 "is_configured": true, 00:34:22.420 "data_offset": 2048, 00:34:22.420 "data_size": 63488 00:34:22.420 }, 00:34:22.420 { 00:34:22.420 "name": "BaseBdev3", 00:34:22.420 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:22.420 "is_configured": true, 00:34:22.420 "data_offset": 2048, 00:34:22.420 "data_size": 63488 00:34:22.420 }, 00:34:22.420 { 00:34:22.420 "name": "BaseBdev4", 00:34:22.420 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:22.420 "is_configured": true, 00:34:22.420 "data_offset": 2048, 00:34:22.420 "data_size": 63488 00:34:22.420 } 00:34:22.420 ] 00:34:22.420 }' 00:34:22.420 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:22.420 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:22.420 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:22.420 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:22.420 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:22.678 [2024-07-21 12:16:21.448100] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:22.678 [2024-07-21 12:16:21.451074] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:22.678 [2024-07-21 12:16:21.451294] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:22.678 [2024-07-21 12:16:21.451426] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:22.678 [2024-07-21 12:16:21.451471] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:22.678 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.937 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:34:22.937 "name": "raid_bdev1", 00:34:22.937 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:22.937 "strip_size_kb": 64, 00:34:22.937 "state": "online", 00:34:22.937 "raid_level": "raid5f", 00:34:22.937 "superblock": true, 00:34:22.937 "num_base_bdevs": 4, 00:34:22.937 "num_base_bdevs_discovered": 3, 00:34:22.937 "num_base_bdevs_operational": 3, 00:34:22.937 "base_bdevs_list": [ 00:34:22.937 { 00:34:22.937 "name": null, 00:34:22.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.937 "is_configured": false, 00:34:22.937 "data_offset": 2048, 00:34:22.937 "data_size": 63488 00:34:22.937 }, 00:34:22.937 { 00:34:22.937 "name": "BaseBdev2", 00:34:22.937 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:22.937 "is_configured": true, 00:34:22.937 "data_offset": 2048, 00:34:22.937 "data_size": 63488 00:34:22.937 }, 00:34:22.937 { 00:34:22.937 "name": "BaseBdev3", 00:34:22.937 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:22.937 "is_configured": true, 00:34:22.937 "data_offset": 2048, 00:34:22.937 "data_size": 63488 00:34:22.937 }, 00:34:22.937 { 00:34:22.937 "name": "BaseBdev4", 00:34:22.937 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:22.937 "is_configured": true, 00:34:22.937 "data_offset": 2048, 00:34:22.937 "data_size": 63488 00:34:22.937 } 00:34:22.937 ] 00:34:22.937 }' 00:34:22.937 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:22.937 12:16:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:23.504 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:23.504 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:23.504 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:23.504 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:23.504 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:23.504 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:23.504 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:23.762 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:23.762 "name": "raid_bdev1", 00:34:23.762 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:23.762 "strip_size_kb": 64, 00:34:23.762 "state": "online", 00:34:23.762 "raid_level": "raid5f", 00:34:23.762 "superblock": true, 00:34:23.762 "num_base_bdevs": 4, 00:34:23.762 "num_base_bdevs_discovered": 3, 00:34:23.762 "num_base_bdevs_operational": 3, 00:34:23.762 "base_bdevs_list": [ 00:34:23.762 { 00:34:23.762 "name": null, 00:34:23.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:23.762 "is_configured": false, 00:34:23.762 "data_offset": 2048, 00:34:23.762 "data_size": 63488 00:34:23.762 }, 00:34:23.762 { 00:34:23.762 "name": "BaseBdev2", 00:34:23.762 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:23.762 "is_configured": true, 00:34:23.762 "data_offset": 2048, 00:34:23.762 "data_size": 63488 00:34:23.762 }, 00:34:23.762 { 00:34:23.762 "name": "BaseBdev3", 00:34:23.762 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:23.762 "is_configured": true, 00:34:23.762 "data_offset": 2048, 
00:34:23.762 "data_size": 63488 00:34:23.762 }, 00:34:23.762 { 00:34:23.762 "name": "BaseBdev4", 00:34:23.762 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:23.762 "is_configured": true, 00:34:23.762 "data_offset": 2048, 00:34:23.762 "data_size": 63488 00:34:23.762 } 00:34:23.762 ] 00:34:23.762 }' 00:34:23.762 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:23.762 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:23.762 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:24.020 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:24.020 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:24.020 [2024-07-21 12:16:22.859793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:24.020 [2024-07-21 12:16:22.864669] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002a8b0 00:34:24.020 [2024-07-21 12:16:22.867013] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:24.020 12:16:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # sleep 1 00:34:25.393 12:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:25.393 12:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:25.393 12:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:25.393 12:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:25.393 12:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:25.393 12:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.393 12:16:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:25.393 "name": "raid_bdev1", 00:34:25.393 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:25.393 "strip_size_kb": 64, 00:34:25.393 "state": "online", 00:34:25.393 "raid_level": "raid5f", 00:34:25.393 "superblock": true, 00:34:25.393 "num_base_bdevs": 4, 00:34:25.393 "num_base_bdevs_discovered": 4, 00:34:25.393 "num_base_bdevs_operational": 4, 00:34:25.393 "process": { 00:34:25.393 "type": "rebuild", 00:34:25.393 "target": "spare", 00:34:25.393 "progress": { 00:34:25.393 "blocks": 23040, 00:34:25.393 "percent": 12 00:34:25.393 } 00:34:25.393 }, 00:34:25.393 "base_bdevs_list": [ 00:34:25.393 { 00:34:25.393 "name": "spare", 00:34:25.393 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:25.393 "is_configured": true, 00:34:25.393 "data_offset": 2048, 00:34:25.393 "data_size": 63488 00:34:25.393 }, 00:34:25.393 { 00:34:25.393 "name": "BaseBdev2", 00:34:25.393 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:25.393 "is_configured": true, 00:34:25.393 "data_offset": 2048, 00:34:25.393 "data_size": 63488 00:34:25.393 }, 00:34:25.393 { 00:34:25.393 "name": "BaseBdev3", 00:34:25.393 "uuid": 
"a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:25.393 "is_configured": true, 00:34:25.393 "data_offset": 2048, 00:34:25.393 "data_size": 63488 00:34:25.393 }, 00:34:25.393 { 00:34:25.393 "name": "BaseBdev4", 00:34:25.393 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:25.393 "is_configured": true, 00:34:25.393 "data_offset": 2048, 00:34:25.393 "data_size": 63488 00:34:25.393 } 00:34:25.393 ] 00:34:25.393 }' 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:34:25.393 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=4 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@692 -- # '[' raid5f = raid1 ']' 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@705 -- # local timeout=1269 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:25.393 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.651 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:25.651 "name": "raid_bdev1", 00:34:25.651 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:25.651 "strip_size_kb": 64, 00:34:25.651 "state": "online", 00:34:25.651 "raid_level": "raid5f", 00:34:25.651 "superblock": true, 00:34:25.651 "num_base_bdevs": 4, 00:34:25.651 "num_base_bdevs_discovered": 4, 00:34:25.651 "num_base_bdevs_operational": 4, 00:34:25.651 "process": { 00:34:25.651 "type": "rebuild", 00:34:25.651 "target": "spare", 00:34:25.651 "progress": { 00:34:25.651 "blocks": 28800, 00:34:25.651 "percent": 15 00:34:25.651 } 00:34:25.651 }, 00:34:25.651 "base_bdevs_list": [ 00:34:25.651 { 00:34:25.651 "name": "spare", 00:34:25.651 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:25.651 "is_configured": true, 00:34:25.651 "data_offset": 2048, 00:34:25.651 "data_size": 63488 00:34:25.651 }, 00:34:25.651 { 00:34:25.651 "name": "BaseBdev2", 00:34:25.651 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 
00:34:25.651 "is_configured": true, 00:34:25.651 "data_offset": 2048, 00:34:25.651 "data_size": 63488 00:34:25.651 }, 00:34:25.651 { 00:34:25.651 "name": "BaseBdev3", 00:34:25.651 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:25.651 "is_configured": true, 00:34:25.651 "data_offset": 2048, 00:34:25.651 "data_size": 63488 00:34:25.651 }, 00:34:25.651 { 00:34:25.651 "name": "BaseBdev4", 00:34:25.651 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:25.651 "is_configured": true, 00:34:25.651 "data_offset": 2048, 00:34:25.651 "data_size": 63488 00:34:25.651 } 00:34:25.651 ] 00:34:25.651 }' 00:34:25.651 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:25.909 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:25.909 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:25.909 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:25.909 12:16:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:26.842 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:26.842 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:26.842 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:26.842 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:26.842 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:26.842 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:26.842 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:26.842 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.100 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:27.100 "name": "raid_bdev1", 00:34:27.100 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:27.100 "strip_size_kb": 64, 00:34:27.100 "state": "online", 00:34:27.100 "raid_level": "raid5f", 00:34:27.100 "superblock": true, 00:34:27.100 "num_base_bdevs": 4, 00:34:27.100 "num_base_bdevs_discovered": 4, 00:34:27.100 "num_base_bdevs_operational": 4, 00:34:27.100 "process": { 00:34:27.100 "type": "rebuild", 00:34:27.100 "target": "spare", 00:34:27.100 "progress": { 00:34:27.100 "blocks": 55680, 00:34:27.100 "percent": 29 00:34:27.100 } 00:34:27.100 }, 00:34:27.100 "base_bdevs_list": [ 00:34:27.100 { 00:34:27.100 "name": "spare", 00:34:27.100 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:27.100 "is_configured": true, 00:34:27.100 "data_offset": 2048, 00:34:27.100 "data_size": 63488 00:34:27.100 }, 00:34:27.100 { 00:34:27.100 "name": "BaseBdev2", 00:34:27.100 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:27.100 "is_configured": true, 00:34:27.100 "data_offset": 2048, 00:34:27.100 "data_size": 63488 00:34:27.100 }, 00:34:27.100 { 00:34:27.100 "name": "BaseBdev3", 00:34:27.100 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:27.100 "is_configured": true, 00:34:27.100 "data_offset": 2048, 00:34:27.100 "data_size": 63488 00:34:27.100 }, 00:34:27.100 { 
00:34:27.100 "name": "BaseBdev4", 00:34:27.100 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:27.100 "is_configured": true, 00:34:27.100 "data_offset": 2048, 00:34:27.100 "data_size": 63488 00:34:27.100 } 00:34:27.100 ] 00:34:27.100 }' 00:34:27.100 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:27.100 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:27.100 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:27.100 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:27.100 12:16:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:28.475 12:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:28.475 12:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:28.475 12:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:28.475 12:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:28.475 12:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:28.475 12:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:28.475 12:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:28.475 12:16:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.475 12:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:28.475 "name": "raid_bdev1", 00:34:28.475 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:28.475 "strip_size_kb": 64, 00:34:28.475 "state": "online", 00:34:28.475 "raid_level": "raid5f", 00:34:28.475 "superblock": true, 00:34:28.475 "num_base_bdevs": 4, 00:34:28.475 "num_base_bdevs_discovered": 4, 00:34:28.475 "num_base_bdevs_operational": 4, 00:34:28.475 "process": { 00:34:28.475 "type": "rebuild", 00:34:28.475 "target": "spare", 00:34:28.475 "progress": { 00:34:28.475 "blocks": 82560, 00:34:28.475 "percent": 43 00:34:28.475 } 00:34:28.475 }, 00:34:28.475 "base_bdevs_list": [ 00:34:28.475 { 00:34:28.475 "name": "spare", 00:34:28.475 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:28.475 "is_configured": true, 00:34:28.475 "data_offset": 2048, 00:34:28.475 "data_size": 63488 00:34:28.475 }, 00:34:28.475 { 00:34:28.475 "name": "BaseBdev2", 00:34:28.475 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:28.475 "is_configured": true, 00:34:28.475 "data_offset": 2048, 00:34:28.475 "data_size": 63488 00:34:28.475 }, 00:34:28.475 { 00:34:28.475 "name": "BaseBdev3", 00:34:28.475 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:28.475 "is_configured": true, 00:34:28.475 "data_offset": 2048, 00:34:28.475 "data_size": 63488 00:34:28.475 }, 00:34:28.475 { 00:34:28.475 "name": "BaseBdev4", 00:34:28.475 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:28.475 "is_configured": true, 00:34:28.475 "data_offset": 2048, 00:34:28.475 "data_size": 63488 00:34:28.475 } 00:34:28.475 ] 00:34:28.475 }' 00:34:28.475 12:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 
00:34:28.475 12:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:28.475 12:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:28.475 12:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:28.475 12:16:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:29.851 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:29.851 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:29.851 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:29.851 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:29.851 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:29.851 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:29.851 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:29.851 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:29.851 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:29.851 "name": "raid_bdev1", 00:34:29.851 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:29.851 "strip_size_kb": 64, 00:34:29.851 "state": "online", 00:34:29.851 "raid_level": "raid5f", 00:34:29.851 "superblock": true, 00:34:29.851 "num_base_bdevs": 4, 00:34:29.852 "num_base_bdevs_discovered": 4, 00:34:29.852 "num_base_bdevs_operational": 4, 00:34:29.852 "process": { 00:34:29.852 "type": "rebuild", 00:34:29.852 "target": "spare", 00:34:29.852 "progress": { 00:34:29.852 "blocks": 107520, 00:34:29.852 "percent": 56 00:34:29.852 } 00:34:29.852 }, 00:34:29.852 "base_bdevs_list": [ 00:34:29.852 { 00:34:29.852 "name": "spare", 00:34:29.852 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:29.852 "is_configured": true, 00:34:29.852 "data_offset": 2048, 00:34:29.852 "data_size": 63488 00:34:29.852 }, 00:34:29.852 { 00:34:29.852 "name": "BaseBdev2", 00:34:29.852 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:29.852 "is_configured": true, 00:34:29.852 "data_offset": 2048, 00:34:29.852 "data_size": 63488 00:34:29.852 }, 00:34:29.852 { 00:34:29.852 "name": "BaseBdev3", 00:34:29.852 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:29.852 "is_configured": true, 00:34:29.852 "data_offset": 2048, 00:34:29.852 "data_size": 63488 00:34:29.852 }, 00:34:29.852 { 00:34:29.852 "name": "BaseBdev4", 00:34:29.852 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:29.852 "is_configured": true, 00:34:29.852 "data_offset": 2048, 00:34:29.852 "data_size": 63488 00:34:29.852 } 00:34:29.852 ] 00:34:29.852 }' 00:34:29.852 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:29.852 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:29.852 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:29.852 12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:29.852 
12:16:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:31.229 "name": "raid_bdev1", 00:34:31.229 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:31.229 "strip_size_kb": 64, 00:34:31.229 "state": "online", 00:34:31.229 "raid_level": "raid5f", 00:34:31.229 "superblock": true, 00:34:31.229 "num_base_bdevs": 4, 00:34:31.229 "num_base_bdevs_discovered": 4, 00:34:31.229 "num_base_bdevs_operational": 4, 00:34:31.229 "process": { 00:34:31.229 "type": "rebuild", 00:34:31.229 "target": "spare", 00:34:31.229 "progress": { 00:34:31.229 "blocks": 132480, 00:34:31.229 "percent": 69 00:34:31.229 } 00:34:31.229 }, 00:34:31.229 "base_bdevs_list": [ 00:34:31.229 { 00:34:31.229 "name": "spare", 00:34:31.229 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:31.229 "is_configured": true, 00:34:31.229 "data_offset": 2048, 00:34:31.229 "data_size": 63488 00:34:31.229 }, 00:34:31.229 { 00:34:31.229 "name": "BaseBdev2", 00:34:31.229 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:31.229 "is_configured": true, 00:34:31.229 "data_offset": 2048, 00:34:31.229 "data_size": 63488 00:34:31.229 }, 00:34:31.229 { 00:34:31.229 "name": "BaseBdev3", 00:34:31.229 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:31.229 "is_configured": true, 00:34:31.229 "data_offset": 2048, 00:34:31.229 "data_size": 63488 00:34:31.229 }, 00:34:31.229 { 00:34:31.229 "name": "BaseBdev4", 00:34:31.229 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:31.229 "is_configured": true, 00:34:31.229 "data_offset": 2048, 00:34:31.229 "data_size": 63488 00:34:31.229 } 00:34:31.229 ] 00:34:31.229 }' 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:31.229 12:16:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:31.229 12:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:31.229 12:16:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:32.165 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:32.165 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:32.165 12:16:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:32.165 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:32.165 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:32.165 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:32.165 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:32.165 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:32.424 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:32.424 "name": "raid_bdev1", 00:34:32.424 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:32.424 "strip_size_kb": 64, 00:34:32.424 "state": "online", 00:34:32.424 "raid_level": "raid5f", 00:34:32.424 "superblock": true, 00:34:32.424 "num_base_bdevs": 4, 00:34:32.424 "num_base_bdevs_discovered": 4, 00:34:32.424 "num_base_bdevs_operational": 4, 00:34:32.424 "process": { 00:34:32.424 "type": "rebuild", 00:34:32.424 "target": "spare", 00:34:32.424 "progress": { 00:34:32.424 "blocks": 159360, 00:34:32.424 "percent": 83 00:34:32.424 } 00:34:32.424 }, 00:34:32.424 "base_bdevs_list": [ 00:34:32.424 { 00:34:32.424 "name": "spare", 00:34:32.424 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:32.424 "is_configured": true, 00:34:32.424 "data_offset": 2048, 00:34:32.424 "data_size": 63488 00:34:32.424 }, 00:34:32.424 { 00:34:32.424 "name": "BaseBdev2", 00:34:32.424 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:32.424 "is_configured": true, 00:34:32.424 "data_offset": 2048, 00:34:32.424 "data_size": 63488 00:34:32.424 }, 00:34:32.424 { 00:34:32.424 "name": "BaseBdev3", 00:34:32.424 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:32.424 "is_configured": true, 00:34:32.424 "data_offset": 2048, 00:34:32.424 "data_size": 63488 00:34:32.424 }, 00:34:32.424 { 00:34:32.424 "name": "BaseBdev4", 00:34:32.424 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:32.424 "is_configured": true, 00:34:32.424 "data_offset": 2048, 00:34:32.424 "data_size": 63488 00:34:32.424 } 00:34:32.424 ] 00:34:32.424 }' 00:34:32.424 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:32.683 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:32.683 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:32.683 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:32.683 12:16:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:33.621 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:33.621 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:33.621 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:33.621 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:33.621 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:33.621 12:16:32 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:33.621 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:33.621 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:33.880 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:33.880 "name": "raid_bdev1", 00:34:33.880 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:33.880 "strip_size_kb": 64, 00:34:33.880 "state": "online", 00:34:33.880 "raid_level": "raid5f", 00:34:33.880 "superblock": true, 00:34:33.880 "num_base_bdevs": 4, 00:34:33.880 "num_base_bdevs_discovered": 4, 00:34:33.880 "num_base_bdevs_operational": 4, 00:34:33.880 "process": { 00:34:33.880 "type": "rebuild", 00:34:33.880 "target": "spare", 00:34:33.880 "progress": { 00:34:33.880 "blocks": 184320, 00:34:33.880 "percent": 96 00:34:33.880 } 00:34:33.880 }, 00:34:33.880 "base_bdevs_list": [ 00:34:33.880 { 00:34:33.880 "name": "spare", 00:34:33.880 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:33.880 "is_configured": true, 00:34:33.880 "data_offset": 2048, 00:34:33.880 "data_size": 63488 00:34:33.880 }, 00:34:33.880 { 00:34:33.880 "name": "BaseBdev2", 00:34:33.880 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:33.880 "is_configured": true, 00:34:33.880 "data_offset": 2048, 00:34:33.880 "data_size": 63488 00:34:33.880 }, 00:34:33.880 { 00:34:33.880 "name": "BaseBdev3", 00:34:33.880 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:33.880 "is_configured": true, 00:34:33.880 "data_offset": 2048, 00:34:33.880 "data_size": 63488 00:34:33.880 }, 00:34:33.880 { 00:34:33.880 "name": "BaseBdev4", 00:34:33.880 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:33.880 "is_configured": true, 00:34:33.880 "data_offset": 2048, 00:34:33.880 "data_size": 63488 00:34:33.880 } 00:34:33.880 ] 00:34:33.880 }' 00:34:33.880 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:33.880 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:33.880 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:33.880 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:33.880 12:16:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@710 -- # sleep 1 00:34:34.139 [2024-07-21 12:16:32.934755] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:34.139 [2024-07-21 12:16:32.935002] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:34.139 [2024-07-21 12:16:32.935253] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:35.073 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:34:35.073 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:35.073 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:35.073 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:35.073 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:35.073 12:16:33 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:35.073 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:35.073 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:35.073 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:35.073 "name": "raid_bdev1", 00:34:35.073 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:35.073 "strip_size_kb": 64, 00:34:35.073 "state": "online", 00:34:35.073 "raid_level": "raid5f", 00:34:35.073 "superblock": true, 00:34:35.073 "num_base_bdevs": 4, 00:34:35.073 "num_base_bdevs_discovered": 4, 00:34:35.073 "num_base_bdevs_operational": 4, 00:34:35.073 "base_bdevs_list": [ 00:34:35.073 { 00:34:35.073 "name": "spare", 00:34:35.073 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:35.073 "is_configured": true, 00:34:35.073 "data_offset": 2048, 00:34:35.073 "data_size": 63488 00:34:35.073 }, 00:34:35.073 { 00:34:35.073 "name": "BaseBdev2", 00:34:35.073 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:35.073 "is_configured": true, 00:34:35.073 "data_offset": 2048, 00:34:35.073 "data_size": 63488 00:34:35.073 }, 00:34:35.073 { 00:34:35.073 "name": "BaseBdev3", 00:34:35.073 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:35.073 "is_configured": true, 00:34:35.073 "data_offset": 2048, 00:34:35.073 "data_size": 63488 00:34:35.073 }, 00:34:35.073 { 00:34:35.073 "name": "BaseBdev4", 00:34:35.073 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:35.073 "is_configured": true, 00:34:35.073 "data_offset": 2048, 00:34:35.073 "data_size": 63488 00:34:35.073 } 00:34:35.073 ] 00:34:35.073 }' 00:34:35.331 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:35.331 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:35.331 12:16:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:35.331 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:34:35.331 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # break 00:34:35.331 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:35.331 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:35.331 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:35.331 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:35.331 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:35.331 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:35.331 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:35.590 "name": "raid_bdev1", 00:34:35.590 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:35.590 "strip_size_kb": 64, 00:34:35.590 "state": "online", 00:34:35.590 "raid_level": "raid5f", 
00:34:35.590 "superblock": true, 00:34:35.590 "num_base_bdevs": 4, 00:34:35.590 "num_base_bdevs_discovered": 4, 00:34:35.590 "num_base_bdevs_operational": 4, 00:34:35.590 "base_bdevs_list": [ 00:34:35.590 { 00:34:35.590 "name": "spare", 00:34:35.590 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:35.590 "is_configured": true, 00:34:35.590 "data_offset": 2048, 00:34:35.590 "data_size": 63488 00:34:35.590 }, 00:34:35.590 { 00:34:35.590 "name": "BaseBdev2", 00:34:35.590 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:35.590 "is_configured": true, 00:34:35.590 "data_offset": 2048, 00:34:35.590 "data_size": 63488 00:34:35.590 }, 00:34:35.590 { 00:34:35.590 "name": "BaseBdev3", 00:34:35.590 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:35.590 "is_configured": true, 00:34:35.590 "data_offset": 2048, 00:34:35.590 "data_size": 63488 00:34:35.590 }, 00:34:35.590 { 00:34:35.590 "name": "BaseBdev4", 00:34:35.590 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:35.590 "is_configured": true, 00:34:35.590 "data_offset": 2048, 00:34:35.590 "data_size": 63488 00:34:35.590 } 00:34:35.590 ] 00:34:35.590 }' 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:35.590 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:35.848 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:35.848 "name": "raid_bdev1", 00:34:35.848 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:35.848 "strip_size_kb": 64, 00:34:35.848 "state": "online", 00:34:35.848 "raid_level": "raid5f", 00:34:35.848 "superblock": true, 00:34:35.848 "num_base_bdevs": 4, 00:34:35.848 "num_base_bdevs_discovered": 4, 00:34:35.848 "num_base_bdevs_operational": 4, 00:34:35.848 "base_bdevs_list": [ 00:34:35.848 { 00:34:35.848 "name": 
"spare", 00:34:35.848 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:35.848 "is_configured": true, 00:34:35.848 "data_offset": 2048, 00:34:35.848 "data_size": 63488 00:34:35.848 }, 00:34:35.848 { 00:34:35.848 "name": "BaseBdev2", 00:34:35.848 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:35.848 "is_configured": true, 00:34:35.848 "data_offset": 2048, 00:34:35.848 "data_size": 63488 00:34:35.848 }, 00:34:35.848 { 00:34:35.848 "name": "BaseBdev3", 00:34:35.848 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:35.848 "is_configured": true, 00:34:35.848 "data_offset": 2048, 00:34:35.848 "data_size": 63488 00:34:35.848 }, 00:34:35.848 { 00:34:35.848 "name": "BaseBdev4", 00:34:35.848 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:35.848 "is_configured": true, 00:34:35.848 "data_offset": 2048, 00:34:35.848 "data_size": 63488 00:34:35.848 } 00:34:35.848 ] 00:34:35.848 }' 00:34:35.848 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:35.848 12:16:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:36.415 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:34:36.673 [2024-07-21 12:16:35.384248] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:36.673 [2024-07-21 12:16:35.384397] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:36.673 [2024-07-21 12:16:35.384583] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:36.673 [2024-07-21 12:16:35.384797] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:36.673 [2024-07-21 12:16:35.384909] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:34:36.673 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:36.673 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # jq length 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:34:36.931 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:37.189 /dev/nbd0 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:37.189 1+0 records in 00:34:37.189 1+0 records out 00:34:37.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536585 s, 7.6 MB/s 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:37.189 12:16:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:34:37.447 /dev/nbd1 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@865 -- # local i 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # break 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@880 -- # (( i = 1 )) 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:37.447 1+0 records in 00:34:37.447 1+0 records out 00:34:37.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000728162 s, 5.6 MB/s 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@882 -- # size=4096 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # return 0 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:37.447 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:34:38.012 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:38.012 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:38.012 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:38.012 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:38.012 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:38.012 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:38.012 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:34:38.013 12:16:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:38.271 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:38.529 [2024-07-21 12:16:37.258564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:38.529 [2024-07-21 12:16:37.258789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:38.529 [2024-07-21 12:16:37.258858] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:34:38.529 [2024-07-21 12:16:37.259187] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:38.529 [2024-07-21 12:16:37.261551] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:38.529 [2024-07-21 12:16:37.261729] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:38.529 [2024-07-21 12:16:37.261910] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:38.529 [2024-07-21 12:16:37.262087] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:38.529 [2024-07-21 12:16:37.262358] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:38.529 [2024-07-21 12:16:37.262601] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:38.529 [2024-07-21 12:16:37.262842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:38.529 spare 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:38.529 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.529 [2024-07-21 12:16:37.363063] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ba80 00:34:38.529 [2024-07-21 12:16:37.363171] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:34:38.529 [2024-07-21 12:16:37.363317] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:34:38.529 [2024-07-21 12:16:37.364110] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ba80 00:34:38.529 [2024-07-21 12:16:37.364213] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ba80 00:34:38.529 [2024-07-21 12:16:37.364441] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:38.787 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:38.787 "name": "raid_bdev1", 00:34:38.787 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:38.787 "strip_size_kb": 64, 00:34:38.787 "state": "online", 00:34:38.787 "raid_level": "raid5f", 00:34:38.787 "superblock": true, 00:34:38.787 "num_base_bdevs": 4, 00:34:38.787 "num_base_bdevs_discovered": 4, 00:34:38.787 "num_base_bdevs_operational": 4, 00:34:38.787 "base_bdevs_list": [ 00:34:38.787 { 00:34:38.787 "name": "spare", 00:34:38.787 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:38.787 "is_configured": true, 00:34:38.787 "data_offset": 2048, 00:34:38.787 "data_size": 63488 00:34:38.787 }, 00:34:38.787 { 00:34:38.787 "name": "BaseBdev2", 00:34:38.787 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:38.787 "is_configured": true, 00:34:38.787 "data_offset": 2048, 00:34:38.787 "data_size": 63488 00:34:38.787 }, 00:34:38.787 { 00:34:38.787 "name": "BaseBdev3", 00:34:38.787 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:38.787 "is_configured": true, 00:34:38.787 "data_offset": 2048, 00:34:38.787 "data_size": 63488 00:34:38.787 }, 00:34:38.787 { 00:34:38.787 "name": "BaseBdev4", 00:34:38.787 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:38.787 "is_configured": true, 00:34:38.787 "data_offset": 2048, 00:34:38.787 "data_size": 63488 00:34:38.787 } 00:34:38.787 ] 00:34:38.787 }' 00:34:38.787 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:38.787 12:16:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:39.353 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:39.353 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:39.353 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:39.353 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:39.353 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:39.353 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:34:39.353 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.612 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:39.612 "name": "raid_bdev1", 00:34:39.612 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:39.612 "strip_size_kb": 64, 00:34:39.612 "state": "online", 00:34:39.612 "raid_level": "raid5f", 00:34:39.612 "superblock": true, 00:34:39.612 "num_base_bdevs": 4, 00:34:39.612 "num_base_bdevs_discovered": 4, 00:34:39.612 "num_base_bdevs_operational": 4, 00:34:39.612 "base_bdevs_list": [ 00:34:39.612 { 00:34:39.612 "name": "spare", 00:34:39.612 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:39.612 "is_configured": true, 00:34:39.612 "data_offset": 2048, 00:34:39.612 "data_size": 63488 00:34:39.612 }, 00:34:39.612 { 00:34:39.612 "name": "BaseBdev2", 00:34:39.612 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:39.612 "is_configured": true, 00:34:39.612 "data_offset": 2048, 00:34:39.612 "data_size": 63488 00:34:39.612 }, 00:34:39.612 { 00:34:39.612 "name": "BaseBdev3", 00:34:39.612 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:39.612 "is_configured": true, 00:34:39.612 "data_offset": 2048, 00:34:39.612 "data_size": 63488 00:34:39.612 }, 00:34:39.612 { 00:34:39.612 "name": "BaseBdev4", 00:34:39.612 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:39.612 "is_configured": true, 00:34:39.612 "data_offset": 2048, 00:34:39.612 "data_size": 63488 00:34:39.612 } 00:34:39.612 ] 00:34:39.612 }' 00:34:39.612 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:39.612 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:39.612 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:39.612 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:39.612 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:39.612 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:39.871 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:34:39.871 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:34:40.129 [2024-07-21 12:16:38.903205] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:40.129 12:16:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:40.129 12:16:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.387 12:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:40.387 "name": "raid_bdev1", 00:34:40.388 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:40.388 "strip_size_kb": 64, 00:34:40.388 "state": "online", 00:34:40.388 "raid_level": "raid5f", 00:34:40.388 "superblock": true, 00:34:40.388 "num_base_bdevs": 4, 00:34:40.388 "num_base_bdevs_discovered": 3, 00:34:40.388 "num_base_bdevs_operational": 3, 00:34:40.388 "base_bdevs_list": [ 00:34:40.388 { 00:34:40.388 "name": null, 00:34:40.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:40.388 "is_configured": false, 00:34:40.388 "data_offset": 2048, 00:34:40.388 "data_size": 63488 00:34:40.388 }, 00:34:40.388 { 00:34:40.388 "name": "BaseBdev2", 00:34:40.388 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:40.388 "is_configured": true, 00:34:40.388 "data_offset": 2048, 00:34:40.388 "data_size": 63488 00:34:40.388 }, 00:34:40.388 { 00:34:40.388 "name": "BaseBdev3", 00:34:40.388 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:40.388 "is_configured": true, 00:34:40.388 "data_offset": 2048, 00:34:40.388 "data_size": 63488 00:34:40.388 }, 00:34:40.388 { 00:34:40.388 "name": "BaseBdev4", 00:34:40.388 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:40.388 "is_configured": true, 00:34:40.388 "data_offset": 2048, 00:34:40.388 "data_size": 63488 00:34:40.388 } 00:34:40.388 ] 00:34:40.388 }' 00:34:40.388 12:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:40.388 12:16:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:40.955 12:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:34:41.214 [2024-07-21 12:16:39.975351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:41.214 [2024-07-21 12:16:39.975646] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:41.214 [2024-07-21 12:16:39.975769] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:41.214 [2024-07-21 12:16:39.975865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:41.214 [2024-07-21 12:16:39.979821] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:34:41.214 [2024-07-21 12:16:39.982175] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:41.214 12:16:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # sleep 1 00:34:42.149 12:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:42.149 12:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:42.149 12:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:42.149 12:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:42.149 12:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:42.149 12:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:42.149 12:16:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:42.408 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:42.408 "name": "raid_bdev1", 00:34:42.408 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:42.408 "strip_size_kb": 64, 00:34:42.408 "state": "online", 00:34:42.408 "raid_level": "raid5f", 00:34:42.408 "superblock": true, 00:34:42.408 "num_base_bdevs": 4, 00:34:42.408 "num_base_bdevs_discovered": 4, 00:34:42.408 "num_base_bdevs_operational": 4, 00:34:42.408 "process": { 00:34:42.408 "type": "rebuild", 00:34:42.408 "target": "spare", 00:34:42.408 "progress": { 00:34:42.408 "blocks": 23040, 00:34:42.408 "percent": 12 00:34:42.408 } 00:34:42.408 }, 00:34:42.408 "base_bdevs_list": [ 00:34:42.408 { 00:34:42.408 "name": "spare", 00:34:42.408 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:42.408 "is_configured": true, 00:34:42.408 "data_offset": 2048, 00:34:42.408 "data_size": 63488 00:34:42.408 }, 00:34:42.408 { 00:34:42.408 "name": "BaseBdev2", 00:34:42.408 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:42.408 "is_configured": true, 00:34:42.408 "data_offset": 2048, 00:34:42.408 "data_size": 63488 00:34:42.408 }, 00:34:42.408 { 00:34:42.408 "name": "BaseBdev3", 00:34:42.408 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:42.408 "is_configured": true, 00:34:42.408 "data_offset": 2048, 00:34:42.408 "data_size": 63488 00:34:42.408 }, 00:34:42.408 { 00:34:42.408 "name": "BaseBdev4", 00:34:42.408 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:42.408 "is_configured": true, 00:34:42.408 "data_offset": 2048, 00:34:42.408 "data_size": 63488 00:34:42.408 } 00:34:42.408 ] 00:34:42.408 }' 00:34:42.408 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:42.666 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:42.666 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:42.666 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:42.666 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@759 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:42.925 [2024-07-21 12:16:41.569306] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:42.925 [2024-07-21 12:16:41.594056] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:42.925 [2024-07-21 12:16:41.594310] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:42.925 [2024-07-21 12:16:41.594447] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:42.925 [2024-07-21 12:16:41.594493] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:42.925 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:43.198 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:43.198 "name": "raid_bdev1", 00:34:43.198 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:43.198 "strip_size_kb": 64, 00:34:43.198 "state": "online", 00:34:43.198 "raid_level": "raid5f", 00:34:43.198 "superblock": true, 00:34:43.198 "num_base_bdevs": 4, 00:34:43.198 "num_base_bdevs_discovered": 3, 00:34:43.198 "num_base_bdevs_operational": 3, 00:34:43.198 "base_bdevs_list": [ 00:34:43.198 { 00:34:43.198 "name": null, 00:34:43.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:43.198 "is_configured": false, 00:34:43.198 "data_offset": 2048, 00:34:43.198 "data_size": 63488 00:34:43.198 }, 00:34:43.198 { 00:34:43.198 "name": "BaseBdev2", 00:34:43.198 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:43.198 "is_configured": true, 00:34:43.198 "data_offset": 2048, 00:34:43.198 "data_size": 63488 00:34:43.198 }, 00:34:43.198 { 00:34:43.198 "name": "BaseBdev3", 00:34:43.198 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:43.198 "is_configured": true, 00:34:43.198 "data_offset": 2048, 00:34:43.198 "data_size": 63488 00:34:43.198 }, 00:34:43.198 { 00:34:43.198 "name": "BaseBdev4", 00:34:43.198 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:43.198 "is_configured": true, 00:34:43.198 "data_offset": 2048, 00:34:43.198 "data_size": 63488 
00:34:43.198 } 00:34:43.198 ] 00:34:43.198 }' 00:34:43.198 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:43.198 12:16:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:43.778 12:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:34:44.035 [2024-07-21 12:16:42.738277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:44.035 [2024-07-21 12:16:42.738530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:44.035 [2024-07-21 12:16:42.738720] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:34:44.035 [2024-07-21 12:16:42.738845] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:44.035 [2024-07-21 12:16:42.739438] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:44.035 [2024-07-21 12:16:42.739597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:44.036 [2024-07-21 12:16:42.739817] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:44.036 [2024-07-21 12:16:42.739927] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:44.036 [2024-07-21 12:16:42.740057] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:34:44.036 [2024-07-21 12:16:42.740152] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:44.036 [2024-07-21 12:16:42.744731] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000496b0 00:34:44.036 spare 00:34:44.036 [2024-07-21 12:16:42.747337] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:44.036 12:16:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # sleep 1 00:34:44.966 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:44.966 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:44.966 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:34:44.966 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=spare 00:34:44.966 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:44.966 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:44.966 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.225 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:45.225 "name": "raid_bdev1", 00:34:45.225 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:45.225 "strip_size_kb": 64, 00:34:45.225 "state": "online", 00:34:45.225 "raid_level": "raid5f", 00:34:45.225 "superblock": true, 00:34:45.225 "num_base_bdevs": 4, 00:34:45.225 "num_base_bdevs_discovered": 4, 00:34:45.225 "num_base_bdevs_operational": 4, 00:34:45.225 "process": { 00:34:45.225 "type": "rebuild", 00:34:45.225 "target": "spare", 
00:34:45.225 "progress": { 00:34:45.225 "blocks": 21120, 00:34:45.225 "percent": 11 00:34:45.225 } 00:34:45.225 }, 00:34:45.225 "base_bdevs_list": [ 00:34:45.225 { 00:34:45.225 "name": "spare", 00:34:45.225 "uuid": "53d6e161-1b0a-573a-a4c8-693ff84e7c48", 00:34:45.225 "is_configured": true, 00:34:45.225 "data_offset": 2048, 00:34:45.225 "data_size": 63488 00:34:45.225 }, 00:34:45.225 { 00:34:45.225 "name": "BaseBdev2", 00:34:45.225 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:45.225 "is_configured": true, 00:34:45.225 "data_offset": 2048, 00:34:45.225 "data_size": 63488 00:34:45.225 }, 00:34:45.225 { 00:34:45.225 "name": "BaseBdev3", 00:34:45.225 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:45.225 "is_configured": true, 00:34:45.225 "data_offset": 2048, 00:34:45.225 "data_size": 63488 00:34:45.225 }, 00:34:45.225 { 00:34:45.225 "name": "BaseBdev4", 00:34:45.225 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:45.225 "is_configured": true, 00:34:45.225 "data_offset": 2048, 00:34:45.225 "data_size": 63488 00:34:45.225 } 00:34:45.225 ] 00:34:45.225 }' 00:34:45.225 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:45.225 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:45.225 12:16:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:45.225 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:34:45.225 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:34:45.483 [2024-07-21 12:16:44.285134] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:45.742 [2024-07-21 12:16:44.359081] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:45.742 [2024-07-21 12:16:44.359305] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:45.742 [2024-07-21 12:16:44.359439] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:45.742 [2024-07-21 12:16:44.359483] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:45.742 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:46.000 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:46.000 "name": "raid_bdev1", 00:34:46.000 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:46.000 "strip_size_kb": 64, 00:34:46.000 "state": "online", 00:34:46.000 "raid_level": "raid5f", 00:34:46.000 "superblock": true, 00:34:46.000 "num_base_bdevs": 4, 00:34:46.000 "num_base_bdevs_discovered": 3, 00:34:46.000 "num_base_bdevs_operational": 3, 00:34:46.000 "base_bdevs_list": [ 00:34:46.000 { 00:34:46.000 "name": null, 00:34:46.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:46.000 "is_configured": false, 00:34:46.000 "data_offset": 2048, 00:34:46.000 "data_size": 63488 00:34:46.000 }, 00:34:46.000 { 00:34:46.000 "name": "BaseBdev2", 00:34:46.000 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:46.000 "is_configured": true, 00:34:46.000 "data_offset": 2048, 00:34:46.000 "data_size": 63488 00:34:46.000 }, 00:34:46.000 { 00:34:46.000 "name": "BaseBdev3", 00:34:46.000 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:46.000 "is_configured": true, 00:34:46.000 "data_offset": 2048, 00:34:46.000 "data_size": 63488 00:34:46.000 }, 00:34:46.000 { 00:34:46.000 "name": "BaseBdev4", 00:34:46.000 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:46.000 "is_configured": true, 00:34:46.000 "data_offset": 2048, 00:34:46.000 "data_size": 63488 00:34:46.000 } 00:34:46.000 ] 00:34:46.000 }' 00:34:46.000 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:46.000 12:16:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:46.565 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:46.565 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:46.565 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:46.565 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:46.565 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:46.565 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:46.565 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:46.823 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:46.823 "name": "raid_bdev1", 00:34:46.823 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:46.823 "strip_size_kb": 64, 00:34:46.823 "state": "online", 00:34:46.823 "raid_level": "raid5f", 00:34:46.823 "superblock": true, 00:34:46.823 "num_base_bdevs": 4, 00:34:46.823 "num_base_bdevs_discovered": 3, 00:34:46.823 "num_base_bdevs_operational": 3, 00:34:46.823 "base_bdevs_list": [ 00:34:46.823 { 00:34:46.823 "name": null, 00:34:46.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:46.823 "is_configured": false, 00:34:46.823 "data_offset": 2048, 00:34:46.823 "data_size": 63488 00:34:46.823 }, 00:34:46.823 { 00:34:46.823 "name": "BaseBdev2", 00:34:46.823 "uuid": 
"b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:46.823 "is_configured": true, 00:34:46.823 "data_offset": 2048, 00:34:46.823 "data_size": 63488 00:34:46.823 }, 00:34:46.823 { 00:34:46.823 "name": "BaseBdev3", 00:34:46.823 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:46.823 "is_configured": true, 00:34:46.823 "data_offset": 2048, 00:34:46.824 "data_size": 63488 00:34:46.824 }, 00:34:46.824 { 00:34:46.824 "name": "BaseBdev4", 00:34:46.824 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:46.824 "is_configured": true, 00:34:46.824 "data_offset": 2048, 00:34:46.824 "data_size": 63488 00:34:46.824 } 00:34:46.824 ] 00:34:46.824 }' 00:34:46.824 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:46.824 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:46.824 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:46.824 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:46.824 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:34:47.081 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:47.339 [2024-07-21 12:16:45.971189] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:47.339 [2024-07-21 12:16:45.971420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:47.339 [2024-07-21 12:16:45.971523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:34:47.339 [2024-07-21 12:16:45.971807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:47.339 [2024-07-21 12:16:45.972462] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:47.339 [2024-07-21 12:16:45.972616] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:47.339 [2024-07-21 12:16:45.972815] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:47.339 [2024-07-21 12:16:45.972923] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:47.339 [2024-07-21 12:16:45.973046] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:47.339 BaseBdev1 00:34:47.339 12:16:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # sleep 1 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:48.273 12:16:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:48.273 12:16:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:48.531 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:48.531 "name": "raid_bdev1", 00:34:48.531 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:48.531 "strip_size_kb": 64, 00:34:48.531 "state": "online", 00:34:48.531 "raid_level": "raid5f", 00:34:48.531 "superblock": true, 00:34:48.531 "num_base_bdevs": 4, 00:34:48.531 "num_base_bdevs_discovered": 3, 00:34:48.531 "num_base_bdevs_operational": 3, 00:34:48.531 "base_bdevs_list": [ 00:34:48.531 { 00:34:48.531 "name": null, 00:34:48.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:48.531 "is_configured": false, 00:34:48.531 "data_offset": 2048, 00:34:48.531 "data_size": 63488 00:34:48.531 }, 00:34:48.531 { 00:34:48.531 "name": "BaseBdev2", 00:34:48.531 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:48.531 "is_configured": true, 00:34:48.531 "data_offset": 2048, 00:34:48.531 "data_size": 63488 00:34:48.531 }, 00:34:48.531 { 00:34:48.531 "name": "BaseBdev3", 00:34:48.531 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:48.531 "is_configured": true, 00:34:48.532 "data_offset": 2048, 00:34:48.532 "data_size": 63488 00:34:48.532 }, 00:34:48.532 { 00:34:48.532 "name": "BaseBdev4", 00:34:48.532 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:48.532 "is_configured": true, 00:34:48.532 "data_offset": 2048, 00:34:48.532 "data_size": 63488 00:34:48.532 } 00:34:48.532 ] 00:34:48.532 }' 00:34:48.532 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:48.532 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:49.097 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:49.097 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:49.097 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:49.097 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:49.097 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:49.097 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:49.097 12:16:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:49.356 "name": "raid_bdev1", 00:34:49.356 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:49.356 "strip_size_kb": 64, 00:34:49.356 "state": "online", 00:34:49.356 "raid_level": "raid5f", 00:34:49.356 "superblock": true, 
00:34:49.356 "num_base_bdevs": 4, 00:34:49.356 "num_base_bdevs_discovered": 3, 00:34:49.356 "num_base_bdevs_operational": 3, 00:34:49.356 "base_bdevs_list": [ 00:34:49.356 { 00:34:49.356 "name": null, 00:34:49.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:49.356 "is_configured": false, 00:34:49.356 "data_offset": 2048, 00:34:49.356 "data_size": 63488 00:34:49.356 }, 00:34:49.356 { 00:34:49.356 "name": "BaseBdev2", 00:34:49.356 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:49.356 "is_configured": true, 00:34:49.356 "data_offset": 2048, 00:34:49.356 "data_size": 63488 00:34:49.356 }, 00:34:49.356 { 00:34:49.356 "name": "BaseBdev3", 00:34:49.356 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:49.356 "is_configured": true, 00:34:49.356 "data_offset": 2048, 00:34:49.356 "data_size": 63488 00:34:49.356 }, 00:34:49.356 { 00:34:49.356 "name": "BaseBdev4", 00:34:49.356 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:49.356 "is_configured": true, 00:34:49.356 "data_offset": 2048, 00:34:49.356 "data_size": 63488 00:34:49.356 } 00:34:49.356 ] 00:34:49.356 }' 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@648 -- # local es=0 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:34:49.356 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:49.616 [2024-07-21 12:16:48.440983] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:49.616 [2024-07-21 12:16:48.441361] 
bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:49.616 [2024-07-21 12:16:48.441486] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:49.616 request: 00:34:49.616 { 00:34:49.616 "raid_bdev": "raid_bdev1", 00:34:49.616 "base_bdev": "BaseBdev1", 00:34:49.616 "method": "bdev_raid_add_base_bdev", 00:34:49.616 "req_id": 1 00:34:49.616 } 00:34:49.616 Got JSON-RPC error response 00:34:49.616 response: 00:34:49.616 { 00:34:49.616 "code": -22, 00:34:49.616 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:49.616 } 00:34:49.616 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@651 -- # es=1 00:34:49.616 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:49.616 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:49.616 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:49.616 12:16:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # sleep 1 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid5f 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:50.990 "name": "raid_bdev1", 00:34:50.990 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:50.990 "strip_size_kb": 64, 00:34:50.990 "state": "online", 00:34:50.990 "raid_level": "raid5f", 00:34:50.990 "superblock": true, 00:34:50.990 "num_base_bdevs": 4, 00:34:50.990 "num_base_bdevs_discovered": 3, 00:34:50.990 "num_base_bdevs_operational": 3, 00:34:50.990 "base_bdevs_list": [ 00:34:50.990 { 00:34:50.990 "name": null, 00:34:50.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:50.990 "is_configured": false, 00:34:50.990 "data_offset": 2048, 00:34:50.990 "data_size": 63488 00:34:50.990 }, 00:34:50.990 { 00:34:50.990 "name": "BaseBdev2", 00:34:50.990 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:50.990 "is_configured": true, 00:34:50.990 "data_offset": 2048, 00:34:50.990 
"data_size": 63488 00:34:50.990 }, 00:34:50.990 { 00:34:50.990 "name": "BaseBdev3", 00:34:50.990 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:50.990 "is_configured": true, 00:34:50.990 "data_offset": 2048, 00:34:50.990 "data_size": 63488 00:34:50.990 }, 00:34:50.990 { 00:34:50.990 "name": "BaseBdev4", 00:34:50.990 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:50.990 "is_configured": true, 00:34:50.990 "data_offset": 2048, 00:34:50.990 "data_size": 63488 00:34:50.990 } 00:34:50.990 ] 00:34:50.990 }' 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:50.990 12:16:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.558 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:51.558 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:34:51.558 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:34:51.558 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@184 -- # local target=none 00:34:51.558 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:34:51.558 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:51.558 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:51.818 "name": "raid_bdev1", 00:34:51.818 "uuid": "cc45156c-49f6-4200-a077-180c7f990b71", 00:34:51.818 "strip_size_kb": 64, 00:34:51.818 "state": "online", 00:34:51.818 "raid_level": "raid5f", 00:34:51.818 "superblock": true, 00:34:51.818 "num_base_bdevs": 4, 00:34:51.818 "num_base_bdevs_discovered": 3, 00:34:51.818 "num_base_bdevs_operational": 3, 00:34:51.818 "base_bdevs_list": [ 00:34:51.818 { 00:34:51.818 "name": null, 00:34:51.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:51.818 "is_configured": false, 00:34:51.818 "data_offset": 2048, 00:34:51.818 "data_size": 63488 00:34:51.818 }, 00:34:51.818 { 00:34:51.818 "name": "BaseBdev2", 00:34:51.818 "uuid": "b283e33f-8cb2-5db5-93ec-b24469888fe4", 00:34:51.818 "is_configured": true, 00:34:51.818 "data_offset": 2048, 00:34:51.818 "data_size": 63488 00:34:51.818 }, 00:34:51.818 { 00:34:51.818 "name": "BaseBdev3", 00:34:51.818 "uuid": "a500e070-d853-50ea-9f1a-91a58ec91378", 00:34:51.818 "is_configured": true, 00:34:51.818 "data_offset": 2048, 00:34:51.818 "data_size": 63488 00:34:51.818 }, 00:34:51.818 { 00:34:51.818 "name": "BaseBdev4", 00:34:51.818 "uuid": "2c513423-7c96-533b-9b32-7c075054565b", 00:34:51.818 "is_configured": true, 00:34:51.818 "data_offset": 2048, 00:34:51.818 "data_size": 63488 00:34:51.818 } 00:34:51.818 ] 00:34:51.818 }' 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@782 -- # killprocess 167923 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@946 -- # '[' -z 167923 ']' 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # kill -0 167923 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # uname 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 167923 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # echo 'killing process with pid 167923' 00:34:51.818 killing process with pid 167923 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@965 -- # kill 167923 00:34:51.818 Received shutdown signal, test time was about 60.000000 seconds 00:34:51.818 00:34:51.818 Latency(us) 00:34:51.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:51.818 =================================================================================================================== 00:34:51.818 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:51.818 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # wait 167923 00:34:51.818 [2024-07-21 12:16:50.619625] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:51.818 [2024-07-21 12:16:50.619860] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:51.818 [2024-07-21 12:16:50.620046] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:51.818 [2024-07-21 12:16:50.620147] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state offline 00:34:51.818 [2024-07-21 12:16:50.676809] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:52.386 ************************************ 00:34:52.386 END TEST raid5f_rebuild_test_sb 00:34:52.386 ************************************ 00:34:52.386 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # return 0 00:34:52.386 00:34:52.386 real 0m38.431s 00:34:52.386 user 0m59.850s 00:34:52.386 sys 0m3.938s 00:34:52.386 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1122 -- # xtrace_disable 00:34:52.386 12:16:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:52.386 12:16:51 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:34:52.386 12:16:51 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:34:52.386 12:16:51 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:34:52.386 12:16:51 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:34:52.386 12:16:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:52.386 ************************************ 00:34:52.386 START TEST raid_state_function_test_sb_4k 00:34:52.386 ************************************ 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_state_function_test 
raid1 2 true 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:34:52.386 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=168923 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 168923' 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:34:52.387 Process raid pid: 168923 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 168923 /var/tmp/spdk-raid.sock 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 168923 ']' 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:34:52.387 12:16:51 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:34:52.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:34:52.387 12:16:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:52.387 [2024-07-21 12:16:51.104809] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:34:52.387 [2024-07-21 12:16:51.105888] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.646 [2024-07-21 12:16:51.278483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.646 [2024-07-21 12:16:51.348494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.646 [2024-07-21 12:16:51.419269] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:53.214 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:34:53.214 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:34:53.214 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:53.473 [2024-07-21 12:16:52.233383] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:53.473 [2024-07-21 12:16:52.233705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:53.473 [2024-07-21 12:16:52.233825] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:53.473 [2024-07-21 12:16:52.233887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:53.473 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:53.473 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:53.473 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:53.473 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:53.473 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:53.473 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:53.473 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:53.473 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:53.473 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:53.474 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:53.474 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:53.474 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:53.733 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:53.733 "name": "Existed_Raid", 00:34:53.733 "uuid": "1cbeb14a-3cd7-4e9a-a04d-669966058d50", 00:34:53.733 "strip_size_kb": 0, 00:34:53.733 "state": "configuring", 00:34:53.733 "raid_level": "raid1", 00:34:53.733 "superblock": true, 00:34:53.733 "num_base_bdevs": 2, 00:34:53.733 "num_base_bdevs_discovered": 0, 00:34:53.733 "num_base_bdevs_operational": 2, 00:34:53.733 "base_bdevs_list": [ 00:34:53.733 { 00:34:53.733 "name": "BaseBdev1", 00:34:53.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.733 "is_configured": false, 00:34:53.733 "data_offset": 0, 00:34:53.733 "data_size": 0 00:34:53.733 }, 00:34:53.733 { 00:34:53.733 "name": "BaseBdev2", 00:34:53.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.733 "is_configured": false, 00:34:53.733 "data_offset": 0, 00:34:53.733 "data_size": 0 00:34:53.733 } 00:34:53.733 ] 00:34:53.733 }' 00:34:53.733 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:53.733 12:16:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:54.300 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:54.557 [2024-07-21 12:16:53.353457] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:54.557 [2024-07-21 12:16:53.353656] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:34:54.557 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:54.815 [2024-07-21 12:16:53.541498] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:54.815 [2024-07-21 12:16:53.541749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:54.815 [2024-07-21 12:16:53.541859] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:54.815 [2024-07-21 12:16:53.541930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:54.815 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:34:55.073 [2024-07-21 12:16:53.755541] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:55.073 BaseBdev1 00:34:55.073 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:34:55.073 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:34:55.073 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:34:55.073 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:34:55.073 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:34:55.073 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:34:55.073 12:16:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:55.331 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:55.589 [ 00:34:55.589 { 00:34:55.589 "name": "BaseBdev1", 00:34:55.589 "aliases": [ 00:34:55.589 "7b3cd543-52f9-46ee-8995-9ef7a1544c82" 00:34:55.589 ], 00:34:55.589 "product_name": "Malloc disk", 00:34:55.589 "block_size": 4096, 00:34:55.589 "num_blocks": 8192, 00:34:55.589 "uuid": "7b3cd543-52f9-46ee-8995-9ef7a1544c82", 00:34:55.589 "assigned_rate_limits": { 00:34:55.589 "rw_ios_per_sec": 0, 00:34:55.589 "rw_mbytes_per_sec": 0, 00:34:55.589 "r_mbytes_per_sec": 0, 00:34:55.589 "w_mbytes_per_sec": 0 00:34:55.589 }, 00:34:55.589 "claimed": true, 00:34:55.589 "claim_type": "exclusive_write", 00:34:55.589 "zoned": false, 00:34:55.589 "supported_io_types": { 00:34:55.589 "read": true, 00:34:55.589 "write": true, 00:34:55.589 "unmap": true, 00:34:55.589 "write_zeroes": true, 00:34:55.589 "flush": true, 00:34:55.589 "reset": true, 00:34:55.589 "compare": false, 00:34:55.589 "compare_and_write": false, 00:34:55.589 "abort": true, 00:34:55.589 "nvme_admin": false, 00:34:55.589 "nvme_io": false 00:34:55.589 }, 00:34:55.589 "memory_domains": [ 00:34:55.589 { 00:34:55.589 "dma_device_id": "system", 00:34:55.589 "dma_device_type": 1 00:34:55.589 }, 00:34:55.589 { 00:34:55.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:55.589 "dma_device_type": 2 00:34:55.589 } 00:34:55.589 ], 00:34:55.589 "driver_specific": {} 00:34:55.589 } 00:34:55.589 ] 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:55.589 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:34:55.847 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:55.847 "name": "Existed_Raid", 00:34:55.847 "uuid": "9d345801-5e06-415e-97b3-9374ec6366c1", 00:34:55.847 "strip_size_kb": 0, 00:34:55.847 "state": "configuring", 00:34:55.847 "raid_level": "raid1", 00:34:55.847 "superblock": true, 00:34:55.847 "num_base_bdevs": 2, 00:34:55.847 "num_base_bdevs_discovered": 1, 00:34:55.847 "num_base_bdevs_operational": 2, 00:34:55.847 "base_bdevs_list": [ 00:34:55.847 { 00:34:55.847 "name": "BaseBdev1", 00:34:55.847 "uuid": "7b3cd543-52f9-46ee-8995-9ef7a1544c82", 00:34:55.847 "is_configured": true, 00:34:55.847 "data_offset": 256, 00:34:55.847 "data_size": 7936 00:34:55.847 }, 00:34:55.847 { 00:34:55.847 "name": "BaseBdev2", 00:34:55.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:55.847 "is_configured": false, 00:34:55.847 "data_offset": 0, 00:34:55.847 "data_size": 0 00:34:55.847 } 00:34:55.847 ] 00:34:55.847 }' 00:34:55.847 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:55.847 12:16:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:56.413 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:34:56.670 [2024-07-21 12:16:55.299875] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:56.670 [2024-07-21 12:16:55.300164] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:34:56.670 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:34:56.927 [2024-07-21 12:16:55.551943] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:56.927 [2024-07-21 12:16:55.554128] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:56.927 [2024-07-21 12:16:55.554302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:56.927 
12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:56.927 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:56.927 "name": "Existed_Raid", 00:34:56.927 "uuid": "acef82da-1f90-43ad-9ee3-00dc675e2089", 00:34:56.927 "strip_size_kb": 0, 00:34:56.927 "state": "configuring", 00:34:56.927 "raid_level": "raid1", 00:34:56.927 "superblock": true, 00:34:56.927 "num_base_bdevs": 2, 00:34:56.927 "num_base_bdevs_discovered": 1, 00:34:56.927 "num_base_bdevs_operational": 2, 00:34:56.927 "base_bdevs_list": [ 00:34:56.927 { 00:34:56.927 "name": "BaseBdev1", 00:34:56.928 "uuid": "7b3cd543-52f9-46ee-8995-9ef7a1544c82", 00:34:56.928 "is_configured": true, 00:34:56.928 "data_offset": 256, 00:34:56.928 "data_size": 7936 00:34:56.928 }, 00:34:56.928 { 00:34:56.928 "name": "BaseBdev2", 00:34:56.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:56.928 "is_configured": false, 00:34:56.928 "data_offset": 0, 00:34:56.928 "data_size": 0 00:34:56.928 } 00:34:56.928 ] 00:34:56.928 }' 00:34:56.928 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:56.928 12:16:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:57.859 12:16:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:34:57.859 [2024-07-21 12:16:56.595240] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:57.859 [2024-07-21 12:16:56.595697] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:34:57.859 [2024-07-21 12:16:56.595821] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:34:57.859 [2024-07-21 12:16:56.596012] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:34:57.859 [2024-07-21 12:16:56.596579] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:34:57.859 [2024-07-21 12:16:56.596710] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:34:57.859 BaseBdev2 00:34:57.859 [2024-07-21 12:16:56.596964] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:57.859 12:16:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:34:57.859 12:16:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:34:57.859 12:16:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:34:57.859 12:16:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local i 00:34:57.859 12:16:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:34:57.859 12:16:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 
-- # bdev_timeout=2000 00:34:57.859 12:16:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:34:58.117 12:16:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:58.375 [ 00:34:58.375 { 00:34:58.375 "name": "BaseBdev2", 00:34:58.375 "aliases": [ 00:34:58.375 "af3c5165-92d9-4a2b-841f-f201319a77cf" 00:34:58.375 ], 00:34:58.375 "product_name": "Malloc disk", 00:34:58.375 "block_size": 4096, 00:34:58.375 "num_blocks": 8192, 00:34:58.375 "uuid": "af3c5165-92d9-4a2b-841f-f201319a77cf", 00:34:58.375 "assigned_rate_limits": { 00:34:58.375 "rw_ios_per_sec": 0, 00:34:58.375 "rw_mbytes_per_sec": 0, 00:34:58.375 "r_mbytes_per_sec": 0, 00:34:58.375 "w_mbytes_per_sec": 0 00:34:58.375 }, 00:34:58.375 "claimed": true, 00:34:58.375 "claim_type": "exclusive_write", 00:34:58.375 "zoned": false, 00:34:58.375 "supported_io_types": { 00:34:58.375 "read": true, 00:34:58.375 "write": true, 00:34:58.375 "unmap": true, 00:34:58.375 "write_zeroes": true, 00:34:58.375 "flush": true, 00:34:58.375 "reset": true, 00:34:58.375 "compare": false, 00:34:58.375 "compare_and_write": false, 00:34:58.375 "abort": true, 00:34:58.375 "nvme_admin": false, 00:34:58.375 "nvme_io": false 00:34:58.375 }, 00:34:58.375 "memory_domains": [ 00:34:58.375 { 00:34:58.375 "dma_device_id": "system", 00:34:58.375 "dma_device_type": 1 00:34:58.375 }, 00:34:58.375 { 00:34:58.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:58.375 "dma_device_type": 2 00:34:58.375 } 00:34:58.375 ], 00:34:58.375 "driver_specific": {} 00:34:58.375 } 00:34:58.375 ] 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # return 0 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.375 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:34:58.633 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:34:58.633 "name": "Existed_Raid", 00:34:58.633 "uuid": "acef82da-1f90-43ad-9ee3-00dc675e2089", 00:34:58.633 "strip_size_kb": 0, 00:34:58.633 "state": "online", 00:34:58.633 "raid_level": "raid1", 00:34:58.633 "superblock": true, 00:34:58.633 "num_base_bdevs": 2, 00:34:58.633 "num_base_bdevs_discovered": 2, 00:34:58.633 "num_base_bdevs_operational": 2, 00:34:58.633 "base_bdevs_list": [ 00:34:58.633 { 00:34:58.633 "name": "BaseBdev1", 00:34:58.633 "uuid": "7b3cd543-52f9-46ee-8995-9ef7a1544c82", 00:34:58.633 "is_configured": true, 00:34:58.633 "data_offset": 256, 00:34:58.633 "data_size": 7936 00:34:58.633 }, 00:34:58.633 { 00:34:58.633 "name": "BaseBdev2", 00:34:58.633 "uuid": "af3c5165-92d9-4a2b-841f-f201319a77cf", 00:34:58.633 "is_configured": true, 00:34:58.633 "data_offset": 256, 00:34:58.633 "data_size": 7936 00:34:58.633 } 00:34:58.633 ] 00:34:58.633 }' 00:34:58.633 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:34:58.633 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:34:59.199 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:34:59.199 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:34:59.199 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:34:59.199 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:34:59.199 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:34:59.199 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:34:59.199 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:34:59.199 12:16:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:34:59.458 [2024-07-21 12:16:58.095703] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:59.458 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:34:59.458 "name": "Existed_Raid", 00:34:59.458 "aliases": [ 00:34:59.458 "acef82da-1f90-43ad-9ee3-00dc675e2089" 00:34:59.458 ], 00:34:59.458 "product_name": "Raid Volume", 00:34:59.458 "block_size": 4096, 00:34:59.458 "num_blocks": 7936, 00:34:59.458 "uuid": "acef82da-1f90-43ad-9ee3-00dc675e2089", 00:34:59.458 "assigned_rate_limits": { 00:34:59.458 "rw_ios_per_sec": 0, 00:34:59.458 "rw_mbytes_per_sec": 0, 00:34:59.458 "r_mbytes_per_sec": 0, 00:34:59.458 "w_mbytes_per_sec": 0 00:34:59.458 }, 00:34:59.458 "claimed": false, 00:34:59.458 "zoned": false, 00:34:59.458 "supported_io_types": { 00:34:59.458 "read": true, 00:34:59.458 "write": true, 00:34:59.458 "unmap": false, 00:34:59.458 "write_zeroes": true, 00:34:59.458 "flush": false, 00:34:59.458 "reset": true, 00:34:59.458 "compare": false, 00:34:59.458 "compare_and_write": false, 00:34:59.458 "abort": false, 00:34:59.458 "nvme_admin": false, 00:34:59.458 "nvme_io": false 00:34:59.458 }, 00:34:59.458 "memory_domains": [ 00:34:59.458 { 00:34:59.458 
"dma_device_id": "system", 00:34:59.458 "dma_device_type": 1 00:34:59.458 }, 00:34:59.458 { 00:34:59.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.458 "dma_device_type": 2 00:34:59.458 }, 00:34:59.458 { 00:34:59.458 "dma_device_id": "system", 00:34:59.458 "dma_device_type": 1 00:34:59.458 }, 00:34:59.458 { 00:34:59.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.458 "dma_device_type": 2 00:34:59.458 } 00:34:59.458 ], 00:34:59.458 "driver_specific": { 00:34:59.458 "raid": { 00:34:59.458 "uuid": "acef82da-1f90-43ad-9ee3-00dc675e2089", 00:34:59.458 "strip_size_kb": 0, 00:34:59.458 "state": "online", 00:34:59.458 "raid_level": "raid1", 00:34:59.458 "superblock": true, 00:34:59.458 "num_base_bdevs": 2, 00:34:59.458 "num_base_bdevs_discovered": 2, 00:34:59.458 "num_base_bdevs_operational": 2, 00:34:59.458 "base_bdevs_list": [ 00:34:59.458 { 00:34:59.458 "name": "BaseBdev1", 00:34:59.458 "uuid": "7b3cd543-52f9-46ee-8995-9ef7a1544c82", 00:34:59.458 "is_configured": true, 00:34:59.458 "data_offset": 256, 00:34:59.458 "data_size": 7936 00:34:59.458 }, 00:34:59.458 { 00:34:59.458 "name": "BaseBdev2", 00:34:59.458 "uuid": "af3c5165-92d9-4a2b-841f-f201319a77cf", 00:34:59.458 "is_configured": true, 00:34:59.458 "data_offset": 256, 00:34:59.458 "data_size": 7936 00:34:59.458 } 00:34:59.458 ] 00:34:59.458 } 00:34:59.458 } 00:34:59.458 }' 00:34:59.458 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:59.458 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:34:59.458 BaseBdev2' 00:34:59.458 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:59.458 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:34:59.458 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:34:59.717 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:34:59.717 "name": "BaseBdev1", 00:34:59.717 "aliases": [ 00:34:59.717 "7b3cd543-52f9-46ee-8995-9ef7a1544c82" 00:34:59.717 ], 00:34:59.717 "product_name": "Malloc disk", 00:34:59.717 "block_size": 4096, 00:34:59.717 "num_blocks": 8192, 00:34:59.717 "uuid": "7b3cd543-52f9-46ee-8995-9ef7a1544c82", 00:34:59.717 "assigned_rate_limits": { 00:34:59.717 "rw_ios_per_sec": 0, 00:34:59.717 "rw_mbytes_per_sec": 0, 00:34:59.717 "r_mbytes_per_sec": 0, 00:34:59.717 "w_mbytes_per_sec": 0 00:34:59.717 }, 00:34:59.717 "claimed": true, 00:34:59.717 "claim_type": "exclusive_write", 00:34:59.717 "zoned": false, 00:34:59.717 "supported_io_types": { 00:34:59.717 "read": true, 00:34:59.717 "write": true, 00:34:59.717 "unmap": true, 00:34:59.717 "write_zeroes": true, 00:34:59.717 "flush": true, 00:34:59.717 "reset": true, 00:34:59.717 "compare": false, 00:34:59.717 "compare_and_write": false, 00:34:59.717 "abort": true, 00:34:59.717 "nvme_admin": false, 00:34:59.717 "nvme_io": false 00:34:59.717 }, 00:34:59.717 "memory_domains": [ 00:34:59.717 { 00:34:59.717 "dma_device_id": "system", 00:34:59.717 "dma_device_type": 1 00:34:59.717 }, 00:34:59.717 { 00:34:59.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.717 "dma_device_type": 2 00:34:59.717 } 00:34:59.717 ], 00:34:59.717 "driver_specific": {} 00:34:59.717 }' 00:34:59.717 12:16:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:59.717 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:34:59.717 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:34:59.717 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:59.717 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:34:59.717 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:34:59.717 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:59.975 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:34:59.975 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:34:59.975 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:59.975 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:34:59.975 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:34:59.975 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:34:59.975 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:34:59.975 12:16:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:00.235 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:00.235 "name": "BaseBdev2", 00:35:00.235 "aliases": [ 00:35:00.235 "af3c5165-92d9-4a2b-841f-f201319a77cf" 00:35:00.235 ], 00:35:00.235 "product_name": "Malloc disk", 00:35:00.235 "block_size": 4096, 00:35:00.235 "num_blocks": 8192, 00:35:00.235 "uuid": "af3c5165-92d9-4a2b-841f-f201319a77cf", 00:35:00.235 "assigned_rate_limits": { 00:35:00.235 "rw_ios_per_sec": 0, 00:35:00.235 "rw_mbytes_per_sec": 0, 00:35:00.235 "r_mbytes_per_sec": 0, 00:35:00.235 "w_mbytes_per_sec": 0 00:35:00.235 }, 00:35:00.235 "claimed": true, 00:35:00.235 "claim_type": "exclusive_write", 00:35:00.235 "zoned": false, 00:35:00.235 "supported_io_types": { 00:35:00.235 "read": true, 00:35:00.235 "write": true, 00:35:00.235 "unmap": true, 00:35:00.235 "write_zeroes": true, 00:35:00.235 "flush": true, 00:35:00.235 "reset": true, 00:35:00.235 "compare": false, 00:35:00.235 "compare_and_write": false, 00:35:00.235 "abort": true, 00:35:00.235 "nvme_admin": false, 00:35:00.235 "nvme_io": false 00:35:00.235 }, 00:35:00.235 "memory_domains": [ 00:35:00.235 { 00:35:00.235 "dma_device_id": "system", 00:35:00.235 "dma_device_type": 1 00:35:00.235 }, 00:35:00.235 { 00:35:00.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:00.235 "dma_device_type": 2 00:35:00.235 } 00:35:00.235 ], 00:35:00.235 "driver_specific": {} 00:35:00.235 }' 00:35:00.235 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:00.235 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:00.494 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:00.494 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:35:00.494 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:00.494 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:00.494 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:00.494 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:00.494 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:00.494 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:00.494 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:00.753 [2024-07-21 12:16:59.551878] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:00.753 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:01.011 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:01.011 "name": "Existed_Raid", 00:35:01.011 "uuid": "acef82da-1f90-43ad-9ee3-00dc675e2089", 00:35:01.011 "strip_size_kb": 0, 00:35:01.011 "state": "online", 00:35:01.011 
"raid_level": "raid1", 00:35:01.011 "superblock": true, 00:35:01.011 "num_base_bdevs": 2, 00:35:01.011 "num_base_bdevs_discovered": 1, 00:35:01.011 "num_base_bdevs_operational": 1, 00:35:01.011 "base_bdevs_list": [ 00:35:01.011 { 00:35:01.011 "name": null, 00:35:01.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.011 "is_configured": false, 00:35:01.011 "data_offset": 256, 00:35:01.011 "data_size": 7936 00:35:01.011 }, 00:35:01.011 { 00:35:01.011 "name": "BaseBdev2", 00:35:01.011 "uuid": "af3c5165-92d9-4a2b-841f-f201319a77cf", 00:35:01.011 "is_configured": true, 00:35:01.011 "data_offset": 256, 00:35:01.011 "data_size": 7936 00:35:01.011 } 00:35:01.011 ] 00:35:01.011 }' 00:35:01.011 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:01.011 12:16:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:01.578 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:35:01.578 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:01.578 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:01.578 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:35:02.144 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:35:02.144 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:02.144 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:02.144 [2024-07-21 12:17:00.957260] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:02.144 [2024-07-21 12:17:00.957582] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:02.144 [2024-07-21 12:17:00.970476] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:02.144 [2024-07-21 12:17:00.982390] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:02.144 [2024-07-21 12:17:00.982532] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:35:02.144 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:35:02.144 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:02.144 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:02.144 12:17:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 168923 00:35:02.404 12:17:01 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 168923 ']' 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 168923 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 168923 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 168923' 00:35:02.404 killing process with pid 168923 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@965 -- # kill 168923 00:35:02.404 [2024-07-21 12:17:01.198505] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:02.404 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # wait 168923 00:35:02.404 [2024-07-21 12:17:01.198754] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:02.662 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:35:02.662 00:35:02.662 real 0m10.482s 00:35:02.663 user 0m19.154s 00:35:02.663 sys 0m1.302s 00:35:02.663 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:02.663 12:17:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:02.663 ************************************ 00:35:02.663 END TEST raid_state_function_test_sb_4k 00:35:02.663 ************************************ 00:35:02.921 12:17:01 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:35:02.921 12:17:01 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:35:02.921 12:17:01 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:02.921 12:17:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:02.921 ************************************ 00:35:02.921 START TEST raid_superblock_test_4k 00:35:02.921 ************************************ 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:35:02.921 12:17:01 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=169277 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 169277 /var/tmp/spdk-raid.sock 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@827 -- # '[' -z 169277 ']' 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:02.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:02.921 12:17:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:02.921 [2024-07-21 12:17:01.639390] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:35:02.921 [2024-07-21 12:17:01.639755] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169277 ] 00:35:02.921 [2024-07-21 12:17:01.786256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:03.179 [2024-07-21 12:17:01.873406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:03.180 [2024-07-21 12:17:01.945337] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # return 0 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:03.745 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:35:04.003 malloc1 00:35:04.003 12:17:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:04.262 [2024-07-21 12:17:03.066414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:04.262 [2024-07-21 12:17:03.066730] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:04.262 [2024-07-21 12:17:03.066822] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:35:04.262 [2024-07-21 12:17:03.067118] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:04.262 [2024-07-21 12:17:03.069739] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:04.262 [2024-07-21 12:17:03.069917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:04.262 pt1 00:35:04.262 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:04.262 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:04.262 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:35:04.262 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:35:04.262 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:04.262 12:17:03 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:04.262 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:04.262 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:04.262 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:35:04.521 malloc2 00:35:04.521 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:04.779 [2024-07-21 12:17:03.476072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:04.779 [2024-07-21 12:17:03.476370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:04.779 [2024-07-21 12:17:03.476472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:35:04.779 [2024-07-21 12:17:03.476773] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:04.779 [2024-07-21 12:17:03.479246] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:04.779 [2024-07-21 12:17:03.479417] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:04.779 pt2 00:35:04.779 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:04.779 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:04.779 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:35:05.037 [2024-07-21 12:17:03.672226] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:05.037 [2024-07-21 12:17:03.674273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:05.037 [2024-07-21 12:17:03.674578] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:35:05.037 [2024-07-21 12:17:03.674686] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:05.037 [2024-07-21 12:17:03.674844] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:35:05.037 [2024-07-21 12:17:03.675336] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:35:05.037 [2024-07-21 12:17:03.675437] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:35:05.037 [2024-07-21 12:17:03.675643] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 
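Note: the volume under test here is assembled from two malloc bdevs wrapped in passthru bdevs, then combined into a raid1 bdev with an on-disk superblock (-s). A condensed sketch of the RPC sequence visible in the trace (the rpc wrapper is shorthand for the full rpc.py invocation used above; sizes, names, and UUIDs are the ones the test uses):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # two 32 MB malloc bdevs with 4096-byte blocks (8192 blocks each, per the dumps above)
  rpc bdev_malloc_create 32 4096 -b malloc1
  rpc bdev_malloc_create 32 4096 -b malloc2
  # passthru bdevs with fixed UUIDs sit between the malloc bdevs and the raid
  rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # raid1 volume with superblock (-s); the trace shows data_offset 256 / data_size 7936 per base bdev
  rpc bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
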
00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:05.037 "name": "raid_bdev1", 00:35:05.037 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:05.037 "strip_size_kb": 0, 00:35:05.037 "state": "online", 00:35:05.037 "raid_level": "raid1", 00:35:05.037 "superblock": true, 00:35:05.037 "num_base_bdevs": 2, 00:35:05.037 "num_base_bdevs_discovered": 2, 00:35:05.037 "num_base_bdevs_operational": 2, 00:35:05.037 "base_bdevs_list": [ 00:35:05.037 { 00:35:05.037 "name": "pt1", 00:35:05.037 "uuid": "c60800a4-796d-5327-8800-e2911ad9dbeb", 00:35:05.037 "is_configured": true, 00:35:05.037 "data_offset": 256, 00:35:05.037 "data_size": 7936 00:35:05.037 }, 00:35:05.037 { 00:35:05.037 "name": "pt2", 00:35:05.037 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:05.037 "is_configured": true, 00:35:05.037 "data_offset": 256, 00:35:05.037 "data_size": 7936 00:35:05.037 } 00:35:05.037 ] 00:35:05.037 }' 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:05.037 12:17:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:05.603 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:35:05.603 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:05.603 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:05.603 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:05.603 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:05.603 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:35:05.603 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:05.603 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:05.860 [2024-07-21 12:17:04.612517] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:05.861 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:05.861 "name": "raid_bdev1", 00:35:05.861 "aliases": [ 00:35:05.861 "0181027b-d1db-4272-8f84-d0e260439132" 00:35:05.861 ], 00:35:05.861 "product_name": "Raid Volume", 00:35:05.861 "block_size": 4096, 00:35:05.861 "num_blocks": 7936, 00:35:05.861 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:05.861 "assigned_rate_limits": { 00:35:05.861 
"rw_ios_per_sec": 0, 00:35:05.861 "rw_mbytes_per_sec": 0, 00:35:05.861 "r_mbytes_per_sec": 0, 00:35:05.861 "w_mbytes_per_sec": 0 00:35:05.861 }, 00:35:05.861 "claimed": false, 00:35:05.861 "zoned": false, 00:35:05.861 "supported_io_types": { 00:35:05.861 "read": true, 00:35:05.861 "write": true, 00:35:05.861 "unmap": false, 00:35:05.861 "write_zeroes": true, 00:35:05.861 "flush": false, 00:35:05.861 "reset": true, 00:35:05.861 "compare": false, 00:35:05.861 "compare_and_write": false, 00:35:05.861 "abort": false, 00:35:05.861 "nvme_admin": false, 00:35:05.861 "nvme_io": false 00:35:05.861 }, 00:35:05.861 "memory_domains": [ 00:35:05.861 { 00:35:05.861 "dma_device_id": "system", 00:35:05.861 "dma_device_type": 1 00:35:05.861 }, 00:35:05.861 { 00:35:05.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.861 "dma_device_type": 2 00:35:05.861 }, 00:35:05.861 { 00:35:05.861 "dma_device_id": "system", 00:35:05.861 "dma_device_type": 1 00:35:05.861 }, 00:35:05.861 { 00:35:05.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.861 "dma_device_type": 2 00:35:05.861 } 00:35:05.861 ], 00:35:05.861 "driver_specific": { 00:35:05.861 "raid": { 00:35:05.861 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:05.861 "strip_size_kb": 0, 00:35:05.861 "state": "online", 00:35:05.861 "raid_level": "raid1", 00:35:05.861 "superblock": true, 00:35:05.861 "num_base_bdevs": 2, 00:35:05.861 "num_base_bdevs_discovered": 2, 00:35:05.861 "num_base_bdevs_operational": 2, 00:35:05.861 "base_bdevs_list": [ 00:35:05.861 { 00:35:05.861 "name": "pt1", 00:35:05.861 "uuid": "c60800a4-796d-5327-8800-e2911ad9dbeb", 00:35:05.861 "is_configured": true, 00:35:05.861 "data_offset": 256, 00:35:05.861 "data_size": 7936 00:35:05.861 }, 00:35:05.861 { 00:35:05.861 "name": "pt2", 00:35:05.861 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:05.861 "is_configured": true, 00:35:05.861 "data_offset": 256, 00:35:05.861 "data_size": 7936 00:35:05.861 } 00:35:05.861 ] 00:35:05.861 } 00:35:05.861 } 00:35:05.861 }' 00:35:05.861 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:05.861 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:05.861 pt2' 00:35:05.861 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:05.861 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:05.861 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:06.119 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:06.119 "name": "pt1", 00:35:06.119 "aliases": [ 00:35:06.119 "c60800a4-796d-5327-8800-e2911ad9dbeb" 00:35:06.119 ], 00:35:06.119 "product_name": "passthru", 00:35:06.119 "block_size": 4096, 00:35:06.119 "num_blocks": 8192, 00:35:06.119 "uuid": "c60800a4-796d-5327-8800-e2911ad9dbeb", 00:35:06.119 "assigned_rate_limits": { 00:35:06.119 "rw_ios_per_sec": 0, 00:35:06.119 "rw_mbytes_per_sec": 0, 00:35:06.119 "r_mbytes_per_sec": 0, 00:35:06.119 "w_mbytes_per_sec": 0 00:35:06.119 }, 00:35:06.119 "claimed": true, 00:35:06.119 "claim_type": "exclusive_write", 00:35:06.119 "zoned": false, 00:35:06.119 "supported_io_types": { 00:35:06.119 "read": true, 00:35:06.119 "write": true, 00:35:06.119 "unmap": true, 00:35:06.119 "write_zeroes": true, 
00:35:06.119 "flush": true, 00:35:06.119 "reset": true, 00:35:06.119 "compare": false, 00:35:06.119 "compare_and_write": false, 00:35:06.119 "abort": true, 00:35:06.119 "nvme_admin": false, 00:35:06.119 "nvme_io": false 00:35:06.119 }, 00:35:06.119 "memory_domains": [ 00:35:06.119 { 00:35:06.119 "dma_device_id": "system", 00:35:06.119 "dma_device_type": 1 00:35:06.119 }, 00:35:06.119 { 00:35:06.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.119 "dma_device_type": 2 00:35:06.119 } 00:35:06.119 ], 00:35:06.119 "driver_specific": { 00:35:06.119 "passthru": { 00:35:06.119 "name": "pt1", 00:35:06.119 "base_bdev_name": "malloc1" 00:35:06.119 } 00:35:06.119 } 00:35:06.119 }' 00:35:06.119 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:06.119 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:06.119 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:06.119 12:17:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:06.376 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:06.633 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:06.633 "name": "pt2", 00:35:06.633 "aliases": [ 00:35:06.633 "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca" 00:35:06.633 ], 00:35:06.633 "product_name": "passthru", 00:35:06.633 "block_size": 4096, 00:35:06.633 "num_blocks": 8192, 00:35:06.633 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:06.633 "assigned_rate_limits": { 00:35:06.633 "rw_ios_per_sec": 0, 00:35:06.633 "rw_mbytes_per_sec": 0, 00:35:06.633 "r_mbytes_per_sec": 0, 00:35:06.633 "w_mbytes_per_sec": 0 00:35:06.633 }, 00:35:06.633 "claimed": true, 00:35:06.633 "claim_type": "exclusive_write", 00:35:06.633 "zoned": false, 00:35:06.634 "supported_io_types": { 00:35:06.634 "read": true, 00:35:06.634 "write": true, 00:35:06.634 "unmap": true, 00:35:06.634 "write_zeroes": true, 00:35:06.634 "flush": true, 00:35:06.634 "reset": true, 00:35:06.634 "compare": false, 00:35:06.634 "compare_and_write": false, 00:35:06.634 "abort": true, 00:35:06.634 "nvme_admin": false, 00:35:06.634 "nvme_io": false 00:35:06.634 }, 00:35:06.634 "memory_domains": [ 00:35:06.634 { 00:35:06.634 "dma_device_id": "system", 00:35:06.634 "dma_device_type": 1 00:35:06.634 }, 00:35:06.634 { 00:35:06.634 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:06.634 "dma_device_type": 2 00:35:06.634 } 00:35:06.634 ], 00:35:06.634 "driver_specific": { 00:35:06.634 "passthru": { 00:35:06.634 "name": "pt2", 00:35:06.634 "base_bdev_name": "malloc2" 00:35:06.634 } 00:35:06.634 } 00:35:06.634 }' 00:35:06.634 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:06.634 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:06.890 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:06.890 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:06.890 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:06.890 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:06.890 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:06.890 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:06.890 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:06.890 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:06.890 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:07.148 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:07.148 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:07.148 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:35:07.148 [2024-07-21 12:17:05.956701] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:07.148 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0181027b-d1db-4272-8f84-d0e260439132 00:35:07.148 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 0181027b-d1db-4272-8f84-d0e260439132 ']' 00:35:07.148 12:17:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:07.408 [2024-07-21 12:17:06.236600] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:07.408 [2024-07-21 12:17:06.236741] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:07.408 [2024-07-21 12:17:06.236954] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:07.408 [2024-07-21 12:17:06.237140] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:07.408 [2024-07-21 12:17:06.237285] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:35:07.408 12:17:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:07.408 12:17:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:35:07.679 12:17:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:35:07.679 12:17:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:35:07.679 12:17:06 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:07.679 12:17:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:07.952 12:17:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:35:07.952 12:17:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:08.210 12:17:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:35:08.210 12:17:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:08.468 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:35:08.726 [2024-07-21 12:17:07.340795] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:08.726 [2024-07-21 12:17:07.343126] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:08.726 [2024-07-21 12:17:07.343332] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:08.726 [2024-07-21 12:17:07.343534] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:08.726 [2024-07-21 12:17:07.343682] bdev_raid.c:2356:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:35:08.726 [2024-07-21 12:17:07.343828] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:35:08.726 request: 00:35:08.726 { 00:35:08.726 "name": "raid_bdev1", 00:35:08.726 "raid_level": "raid1", 00:35:08.726 "base_bdevs": [ 00:35:08.726 "malloc1", 00:35:08.726 "malloc2" 00:35:08.726 ], 00:35:08.726 "superblock": false, 00:35:08.726 "method": "bdev_raid_create", 00:35:08.726 "req_id": 1 00:35:08.726 } 00:35:08.726 Got JSON-RPC error response 00:35:08.726 response: 00:35:08.726 { 00:35:08.726 "code": -17, 00:35:08.726 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:08.726 } 00:35:08.726 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:35:08.726 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:08.726 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:08.726 12:17:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:08.726 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:08.726 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:35:08.726 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:35:08.726 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:35:08.726 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:08.984 [2024-07-21 12:17:07.792827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:08.984 [2024-07-21 12:17:07.793049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:08.984 [2024-07-21 12:17:07.793194] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:35:08.984 [2024-07-21 12:17:07.793357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:08.984 [2024-07-21 12:17:07.795796] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:08.984 [2024-07-21 12:17:07.795958] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:08.984 [2024-07-21 12:17:07.796139] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:08.984 [2024-07-21 12:17:07.796297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:08.984 pt1 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 
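Note: the failed bdev_raid_create above is the negative case the test drives deliberately: after the passthru bdevs are removed, malloc1 and malloc2 still carry the superblock written for raid_bdev1, so creating a new raid directly on them is rejected with JSON-RPC error -17 ("File exists"), matching the "Superblock of a different raid bdev found" messages in the trace. A sketch of that check and the start of the re-assembly that follows (the expected-failure negation is illustrative shorthand):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # must fail: the malloc bdevs hold a superblock belonging to raid_bdev1
  ! rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
  # re-creating a passthru bdev lets examine find the superblock again;
  # with only pt1 back, raid_bdev1 reappears in the "configuring" state
  rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
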
00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:08.984 12:17:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:09.242 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:09.242 "name": "raid_bdev1", 00:35:09.242 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:09.242 "strip_size_kb": 0, 00:35:09.242 "state": "configuring", 00:35:09.242 "raid_level": "raid1", 00:35:09.242 "superblock": true, 00:35:09.242 "num_base_bdevs": 2, 00:35:09.242 "num_base_bdevs_discovered": 1, 00:35:09.242 "num_base_bdevs_operational": 2, 00:35:09.242 "base_bdevs_list": [ 00:35:09.242 { 00:35:09.242 "name": "pt1", 00:35:09.242 "uuid": "c60800a4-796d-5327-8800-e2911ad9dbeb", 00:35:09.242 "is_configured": true, 00:35:09.242 "data_offset": 256, 00:35:09.242 "data_size": 7936 00:35:09.242 }, 00:35:09.242 { 00:35:09.242 "name": null, 00:35:09.242 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:09.242 "is_configured": false, 00:35:09.242 "data_offset": 256, 00:35:09.242 "data_size": 7936 00:35:09.242 } 00:35:09.242 ] 00:35:09.242 }' 00:35:09.242 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:09.242 12:17:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:09.808 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:35:09.808 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:35:09.808 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:09.808 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:10.067 [2024-07-21 12:17:08.841016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:10.067 [2024-07-21 12:17:08.841208] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:10.067 [2024-07-21 12:17:08.841479] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:35:10.067 [2024-07-21 12:17:08.841650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:10.067 [2024-07-21 12:17:08.842221] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:10.067 [2024-07-21 12:17:08.842429] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:10.067 [2024-07-21 12:17:08.842615] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:10.067 [2024-07-21 12:17:08.842674] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:10.067 [2024-07-21 12:17:08.842903] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 
00:35:10.067 [2024-07-21 12:17:08.843109] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:10.067 [2024-07-21 12:17:08.843217] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:35:10.067 [2024-07-21 12:17:08.843656] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:35:10.067 [2024-07-21 12:17:08.843791] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:35:10.067 [2024-07-21 12:17:08.843972] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:10.067 pt2 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:10.067 12:17:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:10.325 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:10.325 "name": "raid_bdev1", 00:35:10.325 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:10.325 "strip_size_kb": 0, 00:35:10.325 "state": "online", 00:35:10.325 "raid_level": "raid1", 00:35:10.325 "superblock": true, 00:35:10.325 "num_base_bdevs": 2, 00:35:10.325 "num_base_bdevs_discovered": 2, 00:35:10.325 "num_base_bdevs_operational": 2, 00:35:10.325 "base_bdevs_list": [ 00:35:10.325 { 00:35:10.325 "name": "pt1", 00:35:10.325 "uuid": "c60800a4-796d-5327-8800-e2911ad9dbeb", 00:35:10.325 "is_configured": true, 00:35:10.325 "data_offset": 256, 00:35:10.325 "data_size": 7936 00:35:10.325 }, 00:35:10.325 { 00:35:10.325 "name": "pt2", 00:35:10.325 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:10.325 "is_configured": true, 00:35:10.325 "data_offset": 256, 00:35:10.325 "data_size": 7936 00:35:10.325 } 00:35:10.326 ] 00:35:10.326 }' 00:35:10.326 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:10.326 12:17:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:10.892 12:17:09 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:35:10.892 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:35:10.892 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:10.892 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:10.892 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:10.892 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:35:10.892 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:10.892 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:11.150 [2024-07-21 12:17:09.909428] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:11.150 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:11.150 "name": "raid_bdev1", 00:35:11.150 "aliases": [ 00:35:11.150 "0181027b-d1db-4272-8f84-d0e260439132" 00:35:11.150 ], 00:35:11.150 "product_name": "Raid Volume", 00:35:11.150 "block_size": 4096, 00:35:11.150 "num_blocks": 7936, 00:35:11.150 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:11.150 "assigned_rate_limits": { 00:35:11.150 "rw_ios_per_sec": 0, 00:35:11.150 "rw_mbytes_per_sec": 0, 00:35:11.150 "r_mbytes_per_sec": 0, 00:35:11.150 "w_mbytes_per_sec": 0 00:35:11.150 }, 00:35:11.150 "claimed": false, 00:35:11.150 "zoned": false, 00:35:11.150 "supported_io_types": { 00:35:11.150 "read": true, 00:35:11.150 "write": true, 00:35:11.150 "unmap": false, 00:35:11.151 "write_zeroes": true, 00:35:11.151 "flush": false, 00:35:11.151 "reset": true, 00:35:11.151 "compare": false, 00:35:11.151 "compare_and_write": false, 00:35:11.151 "abort": false, 00:35:11.151 "nvme_admin": false, 00:35:11.151 "nvme_io": false 00:35:11.151 }, 00:35:11.151 "memory_domains": [ 00:35:11.151 { 00:35:11.151 "dma_device_id": "system", 00:35:11.151 "dma_device_type": 1 00:35:11.151 }, 00:35:11.151 { 00:35:11.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.151 "dma_device_type": 2 00:35:11.151 }, 00:35:11.151 { 00:35:11.151 "dma_device_id": "system", 00:35:11.151 "dma_device_type": 1 00:35:11.151 }, 00:35:11.151 { 00:35:11.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.151 "dma_device_type": 2 00:35:11.151 } 00:35:11.151 ], 00:35:11.151 "driver_specific": { 00:35:11.151 "raid": { 00:35:11.151 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:11.151 "strip_size_kb": 0, 00:35:11.151 "state": "online", 00:35:11.151 "raid_level": "raid1", 00:35:11.151 "superblock": true, 00:35:11.151 "num_base_bdevs": 2, 00:35:11.151 "num_base_bdevs_discovered": 2, 00:35:11.151 "num_base_bdevs_operational": 2, 00:35:11.151 "base_bdevs_list": [ 00:35:11.151 { 00:35:11.151 "name": "pt1", 00:35:11.151 "uuid": "c60800a4-796d-5327-8800-e2911ad9dbeb", 00:35:11.151 "is_configured": true, 00:35:11.151 "data_offset": 256, 00:35:11.151 "data_size": 7936 00:35:11.151 }, 00:35:11.151 { 00:35:11.151 "name": "pt2", 00:35:11.151 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:11.151 "is_configured": true, 00:35:11.151 "data_offset": 256, 00:35:11.151 "data_size": 7936 00:35:11.151 } 00:35:11.151 ] 00:35:11.151 } 00:35:11.151 } 00:35:11.151 }' 00:35:11.151 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # 
jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:11.151 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:35:11.151 pt2' 00:35:11.151 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:11.151 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:35:11.151 12:17:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:11.409 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:11.409 "name": "pt1", 00:35:11.409 "aliases": [ 00:35:11.409 "c60800a4-796d-5327-8800-e2911ad9dbeb" 00:35:11.409 ], 00:35:11.409 "product_name": "passthru", 00:35:11.409 "block_size": 4096, 00:35:11.409 "num_blocks": 8192, 00:35:11.409 "uuid": "c60800a4-796d-5327-8800-e2911ad9dbeb", 00:35:11.409 "assigned_rate_limits": { 00:35:11.409 "rw_ios_per_sec": 0, 00:35:11.409 "rw_mbytes_per_sec": 0, 00:35:11.409 "r_mbytes_per_sec": 0, 00:35:11.409 "w_mbytes_per_sec": 0 00:35:11.409 }, 00:35:11.409 "claimed": true, 00:35:11.409 "claim_type": "exclusive_write", 00:35:11.409 "zoned": false, 00:35:11.409 "supported_io_types": { 00:35:11.409 "read": true, 00:35:11.409 "write": true, 00:35:11.409 "unmap": true, 00:35:11.409 "write_zeroes": true, 00:35:11.409 "flush": true, 00:35:11.409 "reset": true, 00:35:11.409 "compare": false, 00:35:11.409 "compare_and_write": false, 00:35:11.410 "abort": true, 00:35:11.410 "nvme_admin": false, 00:35:11.410 "nvme_io": false 00:35:11.410 }, 00:35:11.410 "memory_domains": [ 00:35:11.410 { 00:35:11.410 "dma_device_id": "system", 00:35:11.410 "dma_device_type": 1 00:35:11.410 }, 00:35:11.410 { 00:35:11.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.410 "dma_device_type": 2 00:35:11.410 } 00:35:11.410 ], 00:35:11.410 "driver_specific": { 00:35:11.410 "passthru": { 00:35:11.410 "name": "pt1", 00:35:11.410 "base_bdev_name": "malloc1" 00:35:11.410 } 00:35:11.410 } 00:35:11.410 }' 00:35:11.410 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:11.410 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:11.668 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:11.668 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:11.668 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:11.668 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:11.668 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:11.668 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:11.668 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:11.668 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:11.927 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:11.927 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:11.927 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:11.927 12:17:10 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:35:11.927 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:12.186 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:12.186 "name": "pt2", 00:35:12.186 "aliases": [ 00:35:12.186 "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca" 00:35:12.186 ], 00:35:12.186 "product_name": "passthru", 00:35:12.186 "block_size": 4096, 00:35:12.186 "num_blocks": 8192, 00:35:12.186 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:12.186 "assigned_rate_limits": { 00:35:12.186 "rw_ios_per_sec": 0, 00:35:12.186 "rw_mbytes_per_sec": 0, 00:35:12.186 "r_mbytes_per_sec": 0, 00:35:12.186 "w_mbytes_per_sec": 0 00:35:12.186 }, 00:35:12.186 "claimed": true, 00:35:12.186 "claim_type": "exclusive_write", 00:35:12.186 "zoned": false, 00:35:12.186 "supported_io_types": { 00:35:12.186 "read": true, 00:35:12.186 "write": true, 00:35:12.186 "unmap": true, 00:35:12.186 "write_zeroes": true, 00:35:12.186 "flush": true, 00:35:12.186 "reset": true, 00:35:12.186 "compare": false, 00:35:12.186 "compare_and_write": false, 00:35:12.186 "abort": true, 00:35:12.186 "nvme_admin": false, 00:35:12.186 "nvme_io": false 00:35:12.186 }, 00:35:12.186 "memory_domains": [ 00:35:12.186 { 00:35:12.186 "dma_device_id": "system", 00:35:12.186 "dma_device_type": 1 00:35:12.186 }, 00:35:12.186 { 00:35:12.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:12.186 "dma_device_type": 2 00:35:12.186 } 00:35:12.186 ], 00:35:12.186 "driver_specific": { 00:35:12.186 "passthru": { 00:35:12.186 "name": "pt2", 00:35:12.186 "base_bdev_name": "malloc2" 00:35:12.186 } 00:35:12.186 } 00:35:12.186 }' 00:35:12.186 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:12.186 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:12.186 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:12.186 12:17:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:12.186 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:12.444 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:35:12.444 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:12.444 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:12.444 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:35:12.444 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:12.444 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:12.444 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:35:12.444 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:12.444 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:35:12.702 [2024-07-21 12:17:11.521731] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:12.702 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 0181027b-d1db-4272-8f84-d0e260439132 '!=' 
0181027b-d1db-4272-8f84-d0e260439132 ']' 00:35:12.702 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:35:12.702 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:12.702 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:35:12.702 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:35:12.960 [2024-07-21 12:17:11.781609] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:12.960 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:13.218 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:13.218 "name": "raid_bdev1", 00:35:13.218 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:13.218 "strip_size_kb": 0, 00:35:13.218 "state": "online", 00:35:13.218 "raid_level": "raid1", 00:35:13.218 "superblock": true, 00:35:13.218 "num_base_bdevs": 2, 00:35:13.218 "num_base_bdevs_discovered": 1, 00:35:13.218 "num_base_bdevs_operational": 1, 00:35:13.218 "base_bdevs_list": [ 00:35:13.218 { 00:35:13.218 "name": null, 00:35:13.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:13.218 "is_configured": false, 00:35:13.218 "data_offset": 256, 00:35:13.218 "data_size": 7936 00:35:13.218 }, 00:35:13.218 { 00:35:13.218 "name": "pt2", 00:35:13.218 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:13.218 "is_configured": true, 00:35:13.218 "data_offset": 256, 00:35:13.218 "data_size": 7936 00:35:13.218 } 00:35:13.218 ] 00:35:13.218 }' 00:35:13.218 12:17:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:13.218 12:17:11 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:13.783 12:17:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:14.042 [2024-07-21 12:17:12.817863] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:14.042 [2024-07-21 
12:17:12.818032] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:14.042 [2024-07-21 12:17:12.818232] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:14.042 [2024-07-21 12:17:12.818408] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:14.042 [2024-07-21 12:17:12.818516] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:35:14.042 12:17:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:14.042 12:17:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:35:14.310 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:35:14.310 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:35:14.310 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:35:14.310 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:14.310 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:35:14.567 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:35:14.567 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:35:14.567 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:35:14.567 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:35:14.567 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:35:14.567 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:14.825 [2024-07-21 12:17:13.513936] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:14.825 [2024-07-21 12:17:13.514161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:14.825 [2024-07-21 12:17:13.514240] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:35:14.825 [2024-07-21 12:17:13.514529] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:14.825 [2024-07-21 12:17:13.516735] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:14.825 [2024-07-21 12:17:13.516899] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:14.825 [2024-07-21 12:17:13.517108] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:14.825 [2024-07-21 12:17:13.517247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:14.825 [2024-07-21 12:17:13.517424] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:35:14.825 [2024-07-21 12:17:13.517581] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:14.825 [2024-07-21 12:17:13.517719] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:35:14.825 [2024-07-21 12:17:13.518176] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:35:14.825 [2024-07-21 12:17:13.518316] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:35:14.825 [2024-07-21 12:17:13.518539] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:14.825 pt2 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:14.825 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:15.082 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:15.082 "name": "raid_bdev1", 00:35:15.082 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:15.082 "strip_size_kb": 0, 00:35:15.082 "state": "online", 00:35:15.082 "raid_level": "raid1", 00:35:15.082 "superblock": true, 00:35:15.082 "num_base_bdevs": 2, 00:35:15.082 "num_base_bdevs_discovered": 1, 00:35:15.082 "num_base_bdevs_operational": 1, 00:35:15.082 "base_bdevs_list": [ 00:35:15.082 { 00:35:15.082 "name": null, 00:35:15.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:15.082 "is_configured": false, 00:35:15.082 "data_offset": 256, 00:35:15.082 "data_size": 7936 00:35:15.082 }, 00:35:15.082 { 00:35:15.082 "name": "pt2", 00:35:15.082 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:15.082 "is_configured": true, 00:35:15.082 "data_offset": 256, 00:35:15.082 "data_size": 7936 00:35:15.082 } 00:35:15.082 ] 00:35:15.082 }' 00:35:15.082 12:17:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:15.082 12:17:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:15.647 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:15.647 [2024-07-21 12:17:14.462663] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:15.647 [2024-07-21 12:17:14.462800] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:15.648 [2024-07-21 12:17:14.462976] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:15.648 [2024-07-21 
12:17:14.463128] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:15.648 [2024-07-21 12:17:14.463227] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:35:15.648 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:15.648 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:35:15.905 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:35:15.905 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:35:15.905 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:35:15.905 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:16.163 [2024-07-21 12:17:14.954735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:16.163 [2024-07-21 12:17:14.954948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:16.163 [2024-07-21 12:17:14.955094] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:35:16.163 [2024-07-21 12:17:14.955212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:16.163 [2024-07-21 12:17:14.957604] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:16.163 [2024-07-21 12:17:14.957771] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:16.163 [2024-07-21 12:17:14.957988] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:16.163 [2024-07-21 12:17:14.958124] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:16.163 [2024-07-21 12:17:14.958363] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:16.163 [2024-07-21 12:17:14.958487] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:16.163 [2024-07-21 12:17:14.958620] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:35:16.163 [2024-07-21 12:17:14.958767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:16.163 [2024-07-21 12:17:14.958968] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:35:16.163 [2024-07-21 12:17:14.959075] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:16.163 [2024-07-21 12:17:14.959237] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:35:16.163 [2024-07-21 12:17:14.959643] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:35:16.163 [2024-07-21 12:17:14.959762] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:35:16.163 [2024-07-21 12:17:14.959989] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:16.163 pt1 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:35:16.163 
12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:16.163 12:17:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:16.421 12:17:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:16.421 "name": "raid_bdev1", 00:35:16.421 "uuid": "0181027b-d1db-4272-8f84-d0e260439132", 00:35:16.421 "strip_size_kb": 0, 00:35:16.421 "state": "online", 00:35:16.421 "raid_level": "raid1", 00:35:16.421 "superblock": true, 00:35:16.421 "num_base_bdevs": 2, 00:35:16.421 "num_base_bdevs_discovered": 1, 00:35:16.421 "num_base_bdevs_operational": 1, 00:35:16.421 "base_bdevs_list": [ 00:35:16.421 { 00:35:16.421 "name": null, 00:35:16.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:16.421 "is_configured": false, 00:35:16.421 "data_offset": 256, 00:35:16.421 "data_size": 7936 00:35:16.421 }, 00:35:16.421 { 00:35:16.421 "name": "pt2", 00:35:16.421 "uuid": "4f3ab857-aeeb-506d-abc9-b2d0cb19eaca", 00:35:16.421 "is_configured": true, 00:35:16.421 "data_offset": 256, 00:35:16.421 "data_size": 7936 00:35:16.421 } 00:35:16.421 ] 00:35:16.421 }' 00:35:16.421 12:17:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:16.421 12:17:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:16.987 12:17:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:35:16.987 12:17:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:17.246 12:17:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:35:17.246 12:17:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:17.246 12:17:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:35:17.505 [2024-07-21 12:17:16.219503] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 
0181027b-d1db-4272-8f84-d0e260439132 '!=' 0181027b-d1db-4272-8f84-d0e260439132 ']' 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 169277 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@946 -- # '[' -z 169277 ']' 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # kill -0 169277 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # uname 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 169277 00:35:17.505 killing process with pid 169277 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 169277' 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@965 -- # kill 169277 00:35:17.505 [2024-07-21 12:17:16.260088] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:17.505 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # wait 169277 00:35:17.505 [2024-07-21 12:17:16.260166] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:17.505 [2024-07-21 12:17:16.260223] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:17.505 [2024-07-21 12:17:16.260233] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:35:17.505 [2024-07-21 12:17:16.285101] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:17.764 12:17:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:35:17.764 00:35:17.764 real 0m14.994s 00:35:17.764 user 0m28.112s 00:35:17.764 sys 0m1.838s 00:35:17.764 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:17.764 12:17:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:35:17.764 ************************************ 00:35:17.764 END TEST raid_superblock_test_4k 00:35:17.764 ************************************ 00:35:17.764 12:17:16 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' true = true ']' 00:35:17.764 12:17:16 bdev_raid -- bdev/bdev_raid.sh@901 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:35:17.764 12:17:16 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:35:17.764 12:17:16 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:17.764 12:17:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:18.023 ************************************ 00:35:18.023 START TEST raid_rebuild_test_sb_4k 00:35:18.023 ************************************ 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:35:18.023 12:17:16 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local verify=true 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:18.023 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local strip_size 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local create_arg 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local data_offset 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # raid_pid=169791 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # waitforlisten 169791 /var/tmp/spdk-raid.sock 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@827 -- # '[' -z 169791 ']' 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:18.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:18.024 12:17:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:18.024 [2024-07-21 12:17:16.716163] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:18.024 [2024-07-21 12:17:16.716607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid169791 ] 00:35:18.024 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:18.024 Zero copy mechanism will not be used. 00:35:18.024 [2024-07-21 12:17:16.883493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:18.283 [2024-07-21 12:17:16.954777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:18.283 [2024-07-21 12:17:17.024115] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:18.850 12:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:18.850 12:17:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # return 0 00:35:18.850 12:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:18.850 12:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:35:19.109 BaseBdev1_malloc 00:35:19.109 12:17:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:19.368 [2024-07-21 12:17:18.127710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:19.368 [2024-07-21 12:17:18.128052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.368 [2024-07-21 12:17:18.128221] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:35:19.368 [2024-07-21 12:17:18.128415] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.368 [2024-07-21 12:17:18.131258] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.368 [2024-07-21 12:17:18.131435] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:19.368 BaseBdev1 00:35:19.368 12:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:35:19.368 12:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:35:19.627 BaseBdev2_malloc 00:35:19.627 12:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:19.886 [2024-07-21 12:17:18.600808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:19.886 [2024-07-21 12:17:18.601046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:19.886 [2024-07-21 12:17:18.601145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 
00:35:19.886 [2024-07-21 12:17:18.601416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:19.886 [2024-07-21 12:17:18.603860] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:19.886 [2024-07-21 12:17:18.604021] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:19.886 BaseBdev2 00:35:19.886 12:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b spare_malloc 00:35:20.144 spare_malloc 00:35:20.144 12:17:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:20.401 spare_delay 00:35:20.402 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:20.402 [2024-07-21 12:17:19.244417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:20.402 [2024-07-21 12:17:19.244654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:20.402 [2024-07-21 12:17:19.244741] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:35:20.402 [2024-07-21 12:17:19.244914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:20.402 [2024-07-21 12:17:19.247431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:20.402 [2024-07-21 12:17:19.247599] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:20.402 spare 00:35:20.402 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:35:20.660 [2024-07-21 12:17:19.496546] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:20.660 [2024-07-21 12:17:19.498727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:20.660 [2024-07-21 12:17:19.499055] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:35:20.660 [2024-07-21 12:17:19.499175] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:20.660 [2024-07-21 12:17:19.499360] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:35:20.660 [2024-07-21 12:17:19.499909] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:35:20.660 [2024-07-21 12:17:19.500035] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:35:20.660 [2024-07-21 12:17:19.500259] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:20.660 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:20.918 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:20.918 "name": "raid_bdev1", 00:35:20.918 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:20.918 "strip_size_kb": 0, 00:35:20.918 "state": "online", 00:35:20.918 "raid_level": "raid1", 00:35:20.918 "superblock": true, 00:35:20.918 "num_base_bdevs": 2, 00:35:20.918 "num_base_bdevs_discovered": 2, 00:35:20.918 "num_base_bdevs_operational": 2, 00:35:20.918 "base_bdevs_list": [ 00:35:20.918 { 00:35:20.918 "name": "BaseBdev1", 00:35:20.918 "uuid": "848c4c62-461d-5a34-9dee-526812491f9b", 00:35:20.918 "is_configured": true, 00:35:20.918 "data_offset": 256, 00:35:20.918 "data_size": 7936 00:35:20.918 }, 00:35:20.918 { 00:35:20.918 "name": "BaseBdev2", 00:35:20.918 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:20.918 "is_configured": true, 00:35:20.918 "data_offset": 256, 00:35:20.918 "data_size": 7936 00:35:20.918 } 00:35:20.918 ] 00:35:20.918 }' 00:35:20.918 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:20.918 12:17:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:21.485 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:35:21.485 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:35:21.744 [2024-07-21 12:17:20.568881] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:21.744 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:35:21.744 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:21.744 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 
00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:22.003 12:17:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:22.262 [2024-07-21 12:17:21.016824] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:35:22.262 /dev/nbd0 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:22.262 1+0 records in 00:35:22.262 1+0 records out 00:35:22.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512625 s, 8.0 MB/s 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:35:22.262 12:17:21 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:35:22.262 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:35:23.198 7936+0 records in 00:35:23.198 7936+0 records out 00:35:23.198 32505856 bytes (33 MB, 31 MiB) copied, 0.680133 s, 47.8 MB/s 00:35:23.198 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:35:23.198 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:23.198 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:23.198 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:23.198 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:35:23.198 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:23.198 12:17:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:23.198 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:23.198 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:23.198 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:23.198 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:23.198 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:23.198 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:23.198 [2024-07-21 12:17:22.016615] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:23.198 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:35:23.198 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:35:23.199 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:35:23.457 [2024-07-21 12:17:22.268366] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@124 -- # local tmp 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:23.457 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.715 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:23.715 "name": "raid_bdev1", 00:35:23.715 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:23.715 "strip_size_kb": 0, 00:35:23.715 "state": "online", 00:35:23.715 "raid_level": "raid1", 00:35:23.715 "superblock": true, 00:35:23.715 "num_base_bdevs": 2, 00:35:23.715 "num_base_bdevs_discovered": 1, 00:35:23.715 "num_base_bdevs_operational": 1, 00:35:23.715 "base_bdevs_list": [ 00:35:23.715 { 00:35:23.715 "name": null, 00:35:23.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:23.715 "is_configured": false, 00:35:23.715 "data_offset": 256, 00:35:23.715 "data_size": 7936 00:35:23.715 }, 00:35:23.715 { 00:35:23.715 "name": "BaseBdev2", 00:35:23.715 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:23.715 "is_configured": true, 00:35:23.715 "data_offset": 256, 00:35:23.715 "data_size": 7936 00:35:23.715 } 00:35:23.715 ] 00:35:23.715 }' 00:35:23.715 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:23.715 12:17:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:24.289 12:17:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:24.547 [2024-07-21 12:17:23.356540] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:24.547 [2024-07-21 12:17:23.363365] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019fe30 00:35:24.547 [2024-07-21 12:17:23.365602] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:24.547 12:17:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # sleep 1 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:25.920 "name": "raid_bdev1", 00:35:25.920 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:25.920 "strip_size_kb": 0, 00:35:25.920 "state": "online", 00:35:25.920 "raid_level": "raid1", 00:35:25.920 "superblock": true, 00:35:25.920 "num_base_bdevs": 2, 00:35:25.920 "num_base_bdevs_discovered": 2, 00:35:25.920 "num_base_bdevs_operational": 2, 
00:35:25.920 "process": { 00:35:25.920 "type": "rebuild", 00:35:25.920 "target": "spare", 00:35:25.920 "progress": { 00:35:25.920 "blocks": 3072, 00:35:25.920 "percent": 38 00:35:25.920 } 00:35:25.920 }, 00:35:25.920 "base_bdevs_list": [ 00:35:25.920 { 00:35:25.920 "name": "spare", 00:35:25.920 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:25.920 "is_configured": true, 00:35:25.920 "data_offset": 256, 00:35:25.920 "data_size": 7936 00:35:25.920 }, 00:35:25.920 { 00:35:25.920 "name": "BaseBdev2", 00:35:25.920 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:25.920 "is_configured": true, 00:35:25.920 "data_offset": 256, 00:35:25.920 "data_size": 7936 00:35:25.920 } 00:35:25.920 ] 00:35:25.920 }' 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:25.920 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:26.179 [2024-07-21 12:17:24.920059] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:26.179 [2024-07-21 12:17:24.976712] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:26.179 [2024-07-21 12:17:24.976930] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:26.179 [2024-07-21 12:17:24.976986] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:26.179 [2024-07-21 12:17:24.977090] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:26.179 12:17:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:26.179 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:26.179 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:26.437 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:35:26.437 "name": "raid_bdev1", 00:35:26.437 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:26.437 "strip_size_kb": 0, 00:35:26.437 "state": "online", 00:35:26.437 "raid_level": "raid1", 00:35:26.437 "superblock": true, 00:35:26.437 "num_base_bdevs": 2, 00:35:26.437 "num_base_bdevs_discovered": 1, 00:35:26.437 "num_base_bdevs_operational": 1, 00:35:26.437 "base_bdevs_list": [ 00:35:26.437 { 00:35:26.437 "name": null, 00:35:26.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:26.437 "is_configured": false, 00:35:26.437 "data_offset": 256, 00:35:26.437 "data_size": 7936 00:35:26.437 }, 00:35:26.437 { 00:35:26.437 "name": "BaseBdev2", 00:35:26.437 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:26.437 "is_configured": true, 00:35:26.437 "data_offset": 256, 00:35:26.437 "data_size": 7936 00:35:26.437 } 00:35:26.437 ] 00:35:26.437 }' 00:35:26.437 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:26.437 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:27.002 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:27.002 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:27.002 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:27.002 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:27.002 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:27.002 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:27.002 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:27.260 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:27.260 "name": "raid_bdev1", 00:35:27.260 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:27.260 "strip_size_kb": 0, 00:35:27.260 "state": "online", 00:35:27.260 "raid_level": "raid1", 00:35:27.260 "superblock": true, 00:35:27.260 "num_base_bdevs": 2, 00:35:27.260 "num_base_bdevs_discovered": 1, 00:35:27.260 "num_base_bdevs_operational": 1, 00:35:27.260 "base_bdevs_list": [ 00:35:27.260 { 00:35:27.260 "name": null, 00:35:27.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:27.260 "is_configured": false, 00:35:27.260 "data_offset": 256, 00:35:27.260 "data_size": 7936 00:35:27.260 }, 00:35:27.260 { 00:35:27.260 "name": "BaseBdev2", 00:35:27.260 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:27.260 "is_configured": true, 00:35:27.260 "data_offset": 256, 00:35:27.260 "data_size": 7936 00:35:27.260 } 00:35:27.260 ] 00:35:27.260 }' 00:35:27.260 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:27.260 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:27.260 12:17:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:27.260 12:17:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:27.260 12:17:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev 
raid_bdev1 spare 00:35:27.517 [2024-07-21 12:17:26.269372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:27.517 [2024-07-21 12:17:26.272722] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ffd0 00:35:27.517 [2024-07-21 12:17:26.274897] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:27.517 12:17:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # sleep 1 00:35:28.452 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:28.452 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:28.452 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:28.452 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:28.452 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:28.452 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.452 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:28.709 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:28.709 "name": "raid_bdev1", 00:35:28.709 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:28.709 "strip_size_kb": 0, 00:35:28.709 "state": "online", 00:35:28.709 "raid_level": "raid1", 00:35:28.709 "superblock": true, 00:35:28.709 "num_base_bdevs": 2, 00:35:28.709 "num_base_bdevs_discovered": 2, 00:35:28.709 "num_base_bdevs_operational": 2, 00:35:28.709 "process": { 00:35:28.709 "type": "rebuild", 00:35:28.709 "target": "spare", 00:35:28.710 "progress": { 00:35:28.710 "blocks": 3072, 00:35:28.710 "percent": 38 00:35:28.710 } 00:35:28.710 }, 00:35:28.710 "base_bdevs_list": [ 00:35:28.710 { 00:35:28.710 "name": "spare", 00:35:28.710 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:28.710 "is_configured": true, 00:35:28.710 "data_offset": 256, 00:35:28.710 "data_size": 7936 00:35:28.710 }, 00:35:28.710 { 00:35:28.710 "name": "BaseBdev2", 00:35:28.710 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:28.710 "is_configured": true, 00:35:28.710 "data_offset": 256, 00:35:28.710 "data_size": 7936 00:35:28.710 } 00:35:28.710 ] 00:35:28.710 }' 00:35:28.710 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:28.966 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:35:28.967 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 
raid1 = raid1 ']' 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@705 -- # local timeout=1332 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:28.967 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:29.225 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:29.225 "name": "raid_bdev1", 00:35:29.225 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:29.225 "strip_size_kb": 0, 00:35:29.225 "state": "online", 00:35:29.225 "raid_level": "raid1", 00:35:29.225 "superblock": true, 00:35:29.225 "num_base_bdevs": 2, 00:35:29.225 "num_base_bdevs_discovered": 2, 00:35:29.225 "num_base_bdevs_operational": 2, 00:35:29.225 "process": { 00:35:29.225 "type": "rebuild", 00:35:29.225 "target": "spare", 00:35:29.225 "progress": { 00:35:29.225 "blocks": 4096, 00:35:29.225 "percent": 51 00:35:29.225 } 00:35:29.225 }, 00:35:29.225 "base_bdevs_list": [ 00:35:29.225 { 00:35:29.225 "name": "spare", 00:35:29.225 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:29.225 "is_configured": true, 00:35:29.225 "data_offset": 256, 00:35:29.225 "data_size": 7936 00:35:29.225 }, 00:35:29.225 { 00:35:29.225 "name": "BaseBdev2", 00:35:29.225 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:29.225 "is_configured": true, 00:35:29.225 "data_offset": 256, 00:35:29.225 "data_size": 7936 00:35:29.225 } 00:35:29.225 ] 00:35:29.225 }' 00:35:29.225 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:29.225 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:29.225 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:29.225 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:29.225 12:17:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:30.160 12:17:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:30.160 12:17:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:30.160 12:17:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:30.160 12:17:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:30.160 12:17:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 
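Note: the verify_raid_bdev_process helper being traced here fetches the raid bdev over the test RPC socket and checks the rebuild process fields with jq. A minimal sketch of that polling pattern, using the same socket path, bdev name and jq filters that appear in the xtrace output (bdev_raid.sh@182-@190), would be roughly:

  rpc_py() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # pick out the bdev under test from the full list
  raid_bdev_info=$(rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  # a rebuild is in progress when process.type is "rebuild" and process.target names the bdev being rebuilt
  [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == rebuild ]]
  [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == spare ]]

This is a sketch of the pattern only; the exact argument handling in the real helper may differ.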
00:35:30.160 12:17:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:30.160 12:17:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:30.160 12:17:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.418 12:17:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:30.418 "name": "raid_bdev1", 00:35:30.418 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:30.418 "strip_size_kb": 0, 00:35:30.418 "state": "online", 00:35:30.418 "raid_level": "raid1", 00:35:30.418 "superblock": true, 00:35:30.418 "num_base_bdevs": 2, 00:35:30.418 "num_base_bdevs_discovered": 2, 00:35:30.418 "num_base_bdevs_operational": 2, 00:35:30.418 "process": { 00:35:30.418 "type": "rebuild", 00:35:30.418 "target": "spare", 00:35:30.418 "progress": { 00:35:30.418 "blocks": 7424, 00:35:30.418 "percent": 93 00:35:30.418 } 00:35:30.418 }, 00:35:30.418 "base_bdevs_list": [ 00:35:30.418 { 00:35:30.418 "name": "spare", 00:35:30.419 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:30.419 "is_configured": true, 00:35:30.419 "data_offset": 256, 00:35:30.419 "data_size": 7936 00:35:30.419 }, 00:35:30.419 { 00:35:30.419 "name": "BaseBdev2", 00:35:30.419 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:30.419 "is_configured": true, 00:35:30.419 "data_offset": 256, 00:35:30.419 "data_size": 7936 00:35:30.419 } 00:35:30.419 ] 00:35:30.419 }' 00:35:30.419 12:17:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:30.677 12:17:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:30.677 12:17:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:30.677 12:17:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:30.677 12:17:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@710 -- # sleep 1 00:35:30.677 [2024-07-21 12:17:29.392763] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:30.677 [2024-07-21 12:17:29.393038] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:30.677 [2024-07-21 12:17:29.393332] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:31.615 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:35:31.615 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:31.615 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:31.615 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:31.615 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:31.615 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:31.615 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:31.615 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:31.881 
12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:31.881 "name": "raid_bdev1", 00:35:31.881 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:31.881 "strip_size_kb": 0, 00:35:31.881 "state": "online", 00:35:31.881 "raid_level": "raid1", 00:35:31.881 "superblock": true, 00:35:31.881 "num_base_bdevs": 2, 00:35:31.881 "num_base_bdevs_discovered": 2, 00:35:31.881 "num_base_bdevs_operational": 2, 00:35:31.881 "base_bdevs_list": [ 00:35:31.881 { 00:35:31.881 "name": "spare", 00:35:31.881 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:31.881 "is_configured": true, 00:35:31.881 "data_offset": 256, 00:35:31.881 "data_size": 7936 00:35:31.881 }, 00:35:31.881 { 00:35:31.881 "name": "BaseBdev2", 00:35:31.881 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:31.881 "is_configured": true, 00:35:31.881 "data_offset": 256, 00:35:31.881 "data_size": 7936 00:35:31.881 } 00:35:31.881 ] 00:35:31.881 }' 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # break 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:31.881 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.153 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:32.153 "name": "raid_bdev1", 00:35:32.153 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:32.153 "strip_size_kb": 0, 00:35:32.153 "state": "online", 00:35:32.153 "raid_level": "raid1", 00:35:32.153 "superblock": true, 00:35:32.153 "num_base_bdevs": 2, 00:35:32.153 "num_base_bdevs_discovered": 2, 00:35:32.153 "num_base_bdevs_operational": 2, 00:35:32.153 "base_bdevs_list": [ 00:35:32.153 { 00:35:32.153 "name": "spare", 00:35:32.153 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:32.153 "is_configured": true, 00:35:32.153 "data_offset": 256, 00:35:32.153 "data_size": 7936 00:35:32.153 }, 00:35:32.153 { 00:35:32.153 "name": "BaseBdev2", 00:35:32.153 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:32.153 "is_configured": true, 00:35:32.153 "data_offset": 256, 00:35:32.153 "data_size": 7936 00:35:32.153 } 00:35:32.153 ] 00:35:32.153 }' 00:35:32.153 12:17:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:32.410 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.667 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:32.668 "name": "raid_bdev1", 00:35:32.668 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:32.668 "strip_size_kb": 0, 00:35:32.668 "state": "online", 00:35:32.668 "raid_level": "raid1", 00:35:32.668 "superblock": true, 00:35:32.668 "num_base_bdevs": 2, 00:35:32.668 "num_base_bdevs_discovered": 2, 00:35:32.668 "num_base_bdevs_operational": 2, 00:35:32.668 "base_bdevs_list": [ 00:35:32.668 { 00:35:32.668 "name": "spare", 00:35:32.668 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:32.668 "is_configured": true, 00:35:32.668 "data_offset": 256, 00:35:32.668 "data_size": 7936 00:35:32.668 }, 00:35:32.668 { 00:35:32.668 "name": "BaseBdev2", 00:35:32.668 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:32.668 "is_configured": true, 00:35:32.668 "data_offset": 256, 00:35:32.668 "data_size": 7936 00:35:32.668 } 00:35:32.668 ] 00:35:32.668 }' 00:35:32.668 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:32.668 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:33.232 12:17:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:35:33.489 [2024-07-21 12:17:32.134371] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:33.489 [2024-07-21 12:17:32.134528] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:33.489 [2024-07-21 12:17:32.134783] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:33.489 [2024-07-21 12:17:32.134991] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:35:33.489 [2024-07-21 12:17:32.135093] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # jq length 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:33.489 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:33.745 /dev/nbd0 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:33.745 1+0 records in 00:35:33.745 1+0 records out 00:35:33.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446244 s, 9.2 MB/s 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:33.745 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:35:34.002 /dev/nbd1 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@865 -- # local i 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # break 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:34.002 1+0 records in 00:35:34.002 1+0 records out 00:35:34.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403165 s, 10.2 MB/s 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@882 -- # size=4096 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # return 0 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:34.002 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:35:34.260 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:35:34.260 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:35:34.260 12:17:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:34.260 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:34.260 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:35:34.260 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:34.260 12:17:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:35:34.518 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:34.775 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:35.033 [2024-07-21 12:17:33.819148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:35.033 [2024-07-21 12:17:33.819400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:35.033 [2024-07-21 12:17:33.819480] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:35:35.033 [2024-07-21 12:17:33.819780] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:35.033 [2024-07-21 12:17:33.822277] 
vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:35.033 [2024-07-21 12:17:33.822452] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:35.033 [2024-07-21 12:17:33.822694] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:35.033 [2024-07-21 12:17:33.822871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:35.033 [2024-07-21 12:17:33.823164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:35.033 spare 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:35.033 12:17:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:35.292 [2024-07-21 12:17:33.923461] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:35:35.292 [2024-07-21 12:17:33.923601] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:35.292 [2024-07-21 12:17:33.923754] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:35:35.292 [2024-07-21 12:17:33.924408] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:35:35.292 [2024-07-21 12:17:33.924532] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:35:35.292 [2024-07-21 12:17:33.924771] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:35.292 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:35.292 "name": "raid_bdev1", 00:35:35.292 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:35.292 "strip_size_kb": 0, 00:35:35.292 "state": "online", 00:35:35.292 "raid_level": "raid1", 00:35:35.292 "superblock": true, 00:35:35.292 "num_base_bdevs": 2, 00:35:35.292 "num_base_bdevs_discovered": 2, 00:35:35.292 "num_base_bdevs_operational": 2, 00:35:35.292 "base_bdevs_list": [ 00:35:35.292 { 00:35:35.292 "name": "spare", 00:35:35.292 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:35.292 "is_configured": true, 00:35:35.292 "data_offset": 256, 00:35:35.292 "data_size": 7936 00:35:35.292 }, 00:35:35.292 { 
00:35:35.292 "name": "BaseBdev2", 00:35:35.292 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:35.292 "is_configured": true, 00:35:35.292 "data_offset": 256, 00:35:35.292 "data_size": 7936 00:35:35.292 } 00:35:35.292 ] 00:35:35.292 }' 00:35:35.292 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:35.292 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:35.858 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:35.858 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:35.858 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:35.858 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:35.858 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:35.858 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:35.858 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.116 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:36.116 "name": "raid_bdev1", 00:35:36.116 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:36.116 "strip_size_kb": 0, 00:35:36.116 "state": "online", 00:35:36.116 "raid_level": "raid1", 00:35:36.116 "superblock": true, 00:35:36.116 "num_base_bdevs": 2, 00:35:36.116 "num_base_bdevs_discovered": 2, 00:35:36.116 "num_base_bdevs_operational": 2, 00:35:36.116 "base_bdevs_list": [ 00:35:36.116 { 00:35:36.116 "name": "spare", 00:35:36.116 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:36.116 "is_configured": true, 00:35:36.116 "data_offset": 256, 00:35:36.116 "data_size": 7936 00:35:36.116 }, 00:35:36.116 { 00:35:36.116 "name": "BaseBdev2", 00:35:36.116 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:36.116 "is_configured": true, 00:35:36.116 "data_offset": 256, 00:35:36.116 "data_size": 7936 00:35:36.116 } 00:35:36.116 ] 00:35:36.116 }' 00:35:36.116 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:36.116 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:36.116 12:17:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:36.374 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:36.374 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.374 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:35:36.374 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:35:36.374 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:35:36.632 [2024-07-21 12:17:35.423572] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@753 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.632 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:36.889 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:36.889 "name": "raid_bdev1", 00:35:36.889 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:36.889 "strip_size_kb": 0, 00:35:36.889 "state": "online", 00:35:36.889 "raid_level": "raid1", 00:35:36.889 "superblock": true, 00:35:36.889 "num_base_bdevs": 2, 00:35:36.889 "num_base_bdevs_discovered": 1, 00:35:36.889 "num_base_bdevs_operational": 1, 00:35:36.889 "base_bdevs_list": [ 00:35:36.889 { 00:35:36.889 "name": null, 00:35:36.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:36.889 "is_configured": false, 00:35:36.889 "data_offset": 256, 00:35:36.889 "data_size": 7936 00:35:36.889 }, 00:35:36.889 { 00:35:36.889 "name": "BaseBdev2", 00:35:36.889 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:36.889 "is_configured": true, 00:35:36.889 "data_offset": 256, 00:35:36.889 "data_size": 7936 00:35:36.889 } 00:35:36.889 ] 00:35:36.889 }' 00:35:36.889 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:36.889 12:17:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:37.831 12:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:35:37.831 [2024-07-21 12:17:36.527788] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:37.831 [2024-07-21 12:17:36.528063] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:37.831 [2024-07-21 12:17:36.528183] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
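The remove-and-re-add cycle exercised at bdev_raid.sh@752-@754 above first drops the array to a single operational base bdev, then hands the spare back to the raid module, which re-reads its superblock ("Re-adding bdev spare to raid bdev raid_bdev1") and restarts the rebuild. Stripped of the xtrace noise, the two RPCs involved are roughly (same socket and helper as sketched earlier, bdev names as used by this test):

  rpc_py bdev_raid_remove_base_bdev spare
  rpc_py bdev_raid_add_base_bdev raid_bdev1 spare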
00:35:37.831 [2024-07-21 12:17:36.528302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:37.831 [2024-07-21 12:17:36.534787] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:35:37.831 [2024-07-21 12:17:36.536881] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:37.831 12:17:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # sleep 1 00:35:38.770 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:38.770 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:38.770 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:38.770 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:38.770 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:38.770 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:38.770 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.027 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:39.027 "name": "raid_bdev1", 00:35:39.027 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:39.027 "strip_size_kb": 0, 00:35:39.027 "state": "online", 00:35:39.027 "raid_level": "raid1", 00:35:39.027 "superblock": true, 00:35:39.027 "num_base_bdevs": 2, 00:35:39.027 "num_base_bdevs_discovered": 2, 00:35:39.027 "num_base_bdevs_operational": 2, 00:35:39.027 "process": { 00:35:39.027 "type": "rebuild", 00:35:39.027 "target": "spare", 00:35:39.027 "progress": { 00:35:39.027 "blocks": 3072, 00:35:39.027 "percent": 38 00:35:39.027 } 00:35:39.027 }, 00:35:39.027 "base_bdevs_list": [ 00:35:39.027 { 00:35:39.027 "name": "spare", 00:35:39.027 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:39.027 "is_configured": true, 00:35:39.027 "data_offset": 256, 00:35:39.027 "data_size": 7936 00:35:39.027 }, 00:35:39.027 { 00:35:39.027 "name": "BaseBdev2", 00:35:39.027 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:39.027 "is_configured": true, 00:35:39.027 "data_offset": 256, 00:35:39.027 "data_size": 7936 00:35:39.027 } 00:35:39.028 ] 00:35:39.028 }' 00:35:39.028 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:39.028 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:39.028 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:39.028 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:39.028 12:17:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:39.285 [2024-07-21 12:17:38.059281] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:39.285 [2024-07-21 12:17:38.146328] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:39.285 [2024-07-21 12:17:38.146554] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:35:39.285 [2024-07-21 12:17:38.146628] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:39.285 [2024-07-21 12:17:38.146760] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:39.543 "name": "raid_bdev1", 00:35:39.543 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:39.543 "strip_size_kb": 0, 00:35:39.543 "state": "online", 00:35:39.543 "raid_level": "raid1", 00:35:39.543 "superblock": true, 00:35:39.543 "num_base_bdevs": 2, 00:35:39.543 "num_base_bdevs_discovered": 1, 00:35:39.543 "num_base_bdevs_operational": 1, 00:35:39.543 "base_bdevs_list": [ 00:35:39.543 { 00:35:39.543 "name": null, 00:35:39.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:39.543 "is_configured": false, 00:35:39.543 "data_offset": 256, 00:35:39.543 "data_size": 7936 00:35:39.543 }, 00:35:39.543 { 00:35:39.543 "name": "BaseBdev2", 00:35:39.543 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:39.543 "is_configured": true, 00:35:39.543 "data_offset": 256, 00:35:39.543 "data_size": 7936 00:35:39.543 } 00:35:39.543 ] 00:35:39.543 }' 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:39.543 12:17:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:40.478 12:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:35:40.478 [2024-07-21 12:17:39.239265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:40.478 [2024-07-21 12:17:39.239488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:40.478 [2024-07-21 12:17:39.239565] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:40.478 [2024-07-21 12:17:39.239721] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:40.478 [2024-07-21 12:17:39.240278] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:40.478 [2024-07-21 12:17:39.240439] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:40.478 [2024-07-21 12:17:39.240635] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:35:40.478 [2024-07-21 12:17:39.240742] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:35:40.478 [2024-07-21 12:17:39.240859] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:35:40.478 [2024-07-21 12:17:39.240966] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:40.478 [2024-07-21 12:17:39.244325] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1e90 00:35:40.478 spare 00:35:40.478 [2024-07-21 12:17:39.246571] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:40.478 12:17:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # sleep 1 00:35:41.413 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:41.413 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:41.413 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:35:41.413 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=spare 00:35:41.413 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:41.413 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:41.413 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.671 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:41.671 "name": "raid_bdev1", 00:35:41.671 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:41.671 "strip_size_kb": 0, 00:35:41.671 "state": "online", 00:35:41.671 "raid_level": "raid1", 00:35:41.671 "superblock": true, 00:35:41.671 "num_base_bdevs": 2, 00:35:41.671 "num_base_bdevs_discovered": 2, 00:35:41.671 "num_base_bdevs_operational": 2, 00:35:41.671 "process": { 00:35:41.671 "type": "rebuild", 00:35:41.671 "target": "spare", 00:35:41.671 "progress": { 00:35:41.671 "blocks": 3072, 00:35:41.671 "percent": 38 00:35:41.671 } 00:35:41.671 }, 00:35:41.671 "base_bdevs_list": [ 00:35:41.671 { 00:35:41.671 "name": "spare", 00:35:41.671 "uuid": "a0276a81-fed4-5e29-9729-ada74cc83484", 00:35:41.671 "is_configured": true, 00:35:41.671 "data_offset": 256, 00:35:41.671 "data_size": 7936 00:35:41.671 }, 00:35:41.671 { 00:35:41.671 "name": "BaseBdev2", 00:35:41.671 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:41.671 "is_configured": true, 00:35:41.671 "data_offset": 256, 00:35:41.671 "data_size": 7936 00:35:41.671 } 00:35:41.671 ] 00:35:41.671 }' 00:35:41.671 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:41.929 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
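Deleting the spare passthru bdev mid-rebuild (bdev_raid.sh@759 above) and re-creating it at @761 drives the examine path: the raid superblock is found on the re-created bdev and the rebuild starts over. Outside the harness, the pair of calls would look roughly like:

  rpc_py bdev_passthru_delete spare
  rpc_py bdev_passthru_create -b spare_delay -p spare

where spare_delay is the underlying bdev this test stacks the passthru on, as seen in the "Match on spare_delay" notices in the trace.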
00:35:41.929 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:41.929 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:35:41.929 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:35:41.929 [2024-07-21 12:17:40.768850] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:42.186 [2024-07-21 12:17:40.855969] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:42.186 [2024-07-21 12:17:40.856190] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:42.186 [2024-07-21 12:17:40.856247] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:42.186 [2024-07-21 12:17:40.856386] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:42.186 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:42.186 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:42.186 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:42.186 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:42.186 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:42.186 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:42.187 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:42.187 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:42.187 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:42.187 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:42.187 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:42.187 12:17:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:42.444 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:42.444 "name": "raid_bdev1", 00:35:42.444 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:42.444 "strip_size_kb": 0, 00:35:42.444 "state": "online", 00:35:42.444 "raid_level": "raid1", 00:35:42.444 "superblock": true, 00:35:42.444 "num_base_bdevs": 2, 00:35:42.444 "num_base_bdevs_discovered": 1, 00:35:42.444 "num_base_bdevs_operational": 1, 00:35:42.444 "base_bdevs_list": [ 00:35:42.444 { 00:35:42.444 "name": null, 00:35:42.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:42.444 "is_configured": false, 00:35:42.444 "data_offset": 256, 00:35:42.444 "data_size": 7936 00:35:42.444 }, 00:35:42.444 { 00:35:42.444 "name": "BaseBdev2", 00:35:42.444 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:42.444 "is_configured": true, 00:35:42.444 "data_offset": 256, 00:35:42.444 "data_size": 7936 00:35:42.444 } 00:35:42.444 ] 00:35:42.444 }' 00:35:42.444 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:35:42.444 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:43.009 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:43.009 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:43.009 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:43.009 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:43.009 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:43.010 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:43.010 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:43.268 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:43.268 "name": "raid_bdev1", 00:35:43.268 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:43.268 "strip_size_kb": 0, 00:35:43.268 "state": "online", 00:35:43.268 "raid_level": "raid1", 00:35:43.268 "superblock": true, 00:35:43.268 "num_base_bdevs": 2, 00:35:43.268 "num_base_bdevs_discovered": 1, 00:35:43.268 "num_base_bdevs_operational": 1, 00:35:43.268 "base_bdevs_list": [ 00:35:43.268 { 00:35:43.268 "name": null, 00:35:43.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:43.268 "is_configured": false, 00:35:43.268 "data_offset": 256, 00:35:43.268 "data_size": 7936 00:35:43.268 }, 00:35:43.268 { 00:35:43.268 "name": "BaseBdev2", 00:35:43.268 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:43.268 "is_configured": true, 00:35:43.268 "data_offset": 256, 00:35:43.268 "data_size": 7936 00:35:43.268 } 00:35:43.268 ] 00:35:43.268 }' 00:35:43.268 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:43.268 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:43.268 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:43.268 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:43.268 12:17:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:35:43.526 12:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:43.785 [2024-07-21 12:17:42.485428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:43.785 [2024-07-21 12:17:42.485706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:43.785 [2024-07-21 12:17:42.485883] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:43.785 [2024-07-21 12:17:42.486017] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:43.785 [2024-07-21 12:17:42.486532] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:43.785 [2024-07-21 12:17:42.486702] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev1 00:35:43.785 [2024-07-21 12:17:42.486878] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:35:43.785 [2024-07-21 12:17:42.486981] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:43.785 [2024-07-21 12:17:42.487072] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:43.785 BaseBdev1 00:35:43.785 12:17:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # sleep 1 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:44.717 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.975 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:44.975 "name": "raid_bdev1", 00:35:44.975 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:44.975 "strip_size_kb": 0, 00:35:44.975 "state": "online", 00:35:44.975 "raid_level": "raid1", 00:35:44.975 "superblock": true, 00:35:44.975 "num_base_bdevs": 2, 00:35:44.975 "num_base_bdevs_discovered": 1, 00:35:44.975 "num_base_bdevs_operational": 1, 00:35:44.975 "base_bdevs_list": [ 00:35:44.975 { 00:35:44.975 "name": null, 00:35:44.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:44.975 "is_configured": false, 00:35:44.975 "data_offset": 256, 00:35:44.975 "data_size": 7936 00:35:44.975 }, 00:35:44.975 { 00:35:44.975 "name": "BaseBdev2", 00:35:44.975 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:44.975 "is_configured": true, 00:35:44.975 "data_offset": 256, 00:35:44.975 "data_size": 7936 00:35:44.975 } 00:35:44.975 ] 00:35:44.975 }' 00:35:44.975 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:44.975 12:17:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:45.552 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:45.552 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:45.552 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:35:45.552 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:45.552 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:45.552 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:45.552 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:45.810 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:45.810 "name": "raid_bdev1", 00:35:45.810 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:45.810 "strip_size_kb": 0, 00:35:45.810 "state": "online", 00:35:45.810 "raid_level": "raid1", 00:35:45.810 "superblock": true, 00:35:45.810 "num_base_bdevs": 2, 00:35:45.810 "num_base_bdevs_discovered": 1, 00:35:45.810 "num_base_bdevs_operational": 1, 00:35:45.810 "base_bdevs_list": [ 00:35:45.810 { 00:35:45.810 "name": null, 00:35:45.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:45.810 "is_configured": false, 00:35:45.810 "data_offset": 256, 00:35:45.810 "data_size": 7936 00:35:45.810 }, 00:35:45.810 { 00:35:45.810 "name": "BaseBdev2", 00:35:45.810 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:45.810 "is_configured": true, 00:35:45.810 "data_offset": 256, 00:35:45.810 "data_size": 7936 00:35:45.810 } 00:35:45.810 ] 00:35:45.810 }' 00:35:45.810 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:45.810 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:45.810 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:45.810 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:45.810 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:45.810 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@648 -- # local es=0 00:35:45.810 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:45.810 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:46.069 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:46.069 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:46.069 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:46.069 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:46.069 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:35:46.069 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:35:46.069 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:35:46.069 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:35:46.069 [2024-07-21 12:17:44.930692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:46.069 [2024-07-21 12:17:44.931076] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:35:46.069 [2024-07-21 12:17:44.931204] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:35:46.069 request: 00:35:46.069 { 00:35:46.069 "raid_bdev": "raid_bdev1", 00:35:46.069 "base_bdev": "BaseBdev1", 00:35:46.069 "method": "bdev_raid_add_base_bdev", 00:35:46.069 "req_id": 1 00:35:46.069 } 00:35:46.069 Got JSON-RPC error response 00:35:46.069 response: 00:35:46.069 { 00:35:46.069 "code": -22, 00:35:46.069 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:35:46.069 } 00:35:46.327 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@651 -- # es=1 00:35:46.327 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:35:46.327 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:35:46.327 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:35:46.327 12:17:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # sleep 1 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:47.262 12:17:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:47.520 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:47.520 "name": "raid_bdev1", 00:35:47.520 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:47.520 "strip_size_kb": 0, 00:35:47.520 "state": "online", 00:35:47.520 "raid_level": "raid1", 00:35:47.520 "superblock": true, 00:35:47.520 "num_base_bdevs": 2, 00:35:47.520 "num_base_bdevs_discovered": 1, 00:35:47.520 "num_base_bdevs_operational": 1, 00:35:47.520 
"base_bdevs_list": [ 00:35:47.520 { 00:35:47.520 "name": null, 00:35:47.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:47.520 "is_configured": false, 00:35:47.520 "data_offset": 256, 00:35:47.520 "data_size": 7936 00:35:47.520 }, 00:35:47.520 { 00:35:47.520 "name": "BaseBdev2", 00:35:47.520 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:47.520 "is_configured": true, 00:35:47.520 "data_offset": 256, 00:35:47.520 "data_size": 7936 00:35:47.520 } 00:35:47.520 ] 00:35:47.520 }' 00:35:47.520 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:47.520 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:48.086 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:48.086 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:35:48.086 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:35:48.086 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local target=none 00:35:48.086 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:35:48.086 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:48.086 12:17:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:48.345 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:48.345 "name": "raid_bdev1", 00:35:48.345 "uuid": "ca8d7fbe-be45-4b27-b411-0357b15a4625", 00:35:48.345 "strip_size_kb": 0, 00:35:48.345 "state": "online", 00:35:48.345 "raid_level": "raid1", 00:35:48.345 "superblock": true, 00:35:48.345 "num_base_bdevs": 2, 00:35:48.345 "num_base_bdevs_discovered": 1, 00:35:48.345 "num_base_bdevs_operational": 1, 00:35:48.345 "base_bdevs_list": [ 00:35:48.345 { 00:35:48.345 "name": null, 00:35:48.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:48.345 "is_configured": false, 00:35:48.345 "data_offset": 256, 00:35:48.345 "data_size": 7936 00:35:48.345 }, 00:35:48.345 { 00:35:48.345 "name": "BaseBdev2", 00:35:48.345 "uuid": "681e0344-3591-5c1f-b8ec-613a589ee3f7", 00:35:48.345 "is_configured": true, 00:35:48.345 "data_offset": 256, 00:35:48.345 "data_size": 7936 00:35:48.345 } 00:35:48.345 ] 00:35:48.345 }' 00:35:48.345 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:35:48.345 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:35:48.345 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:35:48.345 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:35:48.345 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@782 -- # killprocess 169791 00:35:48.345 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@946 -- # '[' -z 169791 ']' 00:35:48.345 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # kill -0 169791 00:35:48.603 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # uname 00:35:48.603 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
00:35:48.603 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 169791 00:35:48.603 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:48.603 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:48.603 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # echo 'killing process with pid 169791' 00:35:48.603 killing process with pid 169791 00:35:48.603 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@965 -- # kill 169791 00:35:48.603 Received shutdown signal, test time was about 60.000000 seconds 00:35:48.603 00:35:48.603 Latency(us) 00:35:48.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.603 =================================================================================================================== 00:35:48.603 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:48.603 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # wait 169791 00:35:48.603 [2024-07-21 12:17:47.237455] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:48.604 [2024-07-21 12:17:47.237631] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:48.604 [2024-07-21 12:17:47.237692] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:48.604 [2024-07-21 12:17:47.237849] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:35:48.604 [2024-07-21 12:17:47.264942] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:48.862 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # return 0 00:35:48.862 00:35:48.862 real 0m30.848s 00:35:48.862 user 0m49.791s 00:35:48.862 sys 0m3.248s 00:35:48.862 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:48.862 12:17:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:35:48.862 ************************************ 00:35:48.862 END TEST raid_rebuild_test_sb_4k 00:35:48.862 ************************************ 00:35:48.862 12:17:47 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:35:48.862 12:17:47 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:35:48.862 12:17:47 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:35:48.862 12:17:47 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:48.862 12:17:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:48.862 ************************************ 00:35:48.862 START TEST raid_state_function_test_sb_md_separate 00:35:48.862 ************************************ 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:35:48.862 12:17:47 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=170653 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 170653' 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:35:48.862 Process raid pid: 170653 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 170653 /var/tmp/spdk-raid.sock 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 170653 ']' 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:48.862 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk-raid.sock...' 00:35:48.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:48.863 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:48.863 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:48.863 [2024-07-21 12:17:47.616151] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:48.863 [2024-07-21 12:17:47.616537] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:49.121 [2024-07-21 12:17:47.767574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.121 [2024-07-21 12:17:47.825842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:49.121 [2024-07-21 12:17:47.878297] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:49.121 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:49.121 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:35:49.121 12:17:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:49.379 [2024-07-21 12:17:48.134391] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:49.379 [2024-07-21 12:17:48.134609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:49.379 [2024-07-21 12:17:48.134720] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:49.379 [2024-07-21 12:17:48.134863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:35:49.379 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:49.638 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:49.638 "name": "Existed_Raid", 00:35:49.638 "uuid": "1b194f30-fb11-4bcc-9050-b6e2b9fe144c", 00:35:49.638 "strip_size_kb": 0, 00:35:49.638 "state": "configuring", 00:35:49.638 "raid_level": "raid1", 00:35:49.638 "superblock": true, 00:35:49.638 "num_base_bdevs": 2, 00:35:49.638 "num_base_bdevs_discovered": 0, 00:35:49.638 "num_base_bdevs_operational": 2, 00:35:49.638 "base_bdevs_list": [ 00:35:49.638 { 00:35:49.638 "name": "BaseBdev1", 00:35:49.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.638 "is_configured": false, 00:35:49.638 "data_offset": 0, 00:35:49.638 "data_size": 0 00:35:49.638 }, 00:35:49.638 { 00:35:49.638 "name": "BaseBdev2", 00:35:49.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:49.638 "is_configured": false, 00:35:49.638 "data_offset": 0, 00:35:49.638 "data_size": 0 00:35:49.638 } 00:35:49.638 ] 00:35:49.638 }' 00:35:49.638 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:49.638 12:17:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:50.205 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:50.463 [2024-07-21 12:17:49.182399] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:50.463 [2024-07-21 12:17:49.182565] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:35:50.463 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:50.722 [2024-07-21 12:17:49.450478] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:50.722 [2024-07-21 12:17:49.450732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:50.722 [2024-07-21 12:17:49.450894] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:50.722 [2024-07-21 12:17:49.450971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:50.722 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:35:50.981 [2024-07-21 12:17:49.657942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:50.981 BaseBdev1 00:35:50.981 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:35:50.981 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:35:50.981 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:35:50.981 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:35:50.981 
12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:35:50.981 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:35:50.981 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:51.239 12:17:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:51.497 [ 00:35:51.497 { 00:35:51.497 "name": "BaseBdev1", 00:35:51.497 "aliases": [ 00:35:51.497 "dc095f30-9be7-40ee-985e-2ea4555ce7a6" 00:35:51.497 ], 00:35:51.497 "product_name": "Malloc disk", 00:35:51.497 "block_size": 4096, 00:35:51.497 "num_blocks": 8192, 00:35:51.497 "uuid": "dc095f30-9be7-40ee-985e-2ea4555ce7a6", 00:35:51.497 "md_size": 32, 00:35:51.497 "md_interleave": false, 00:35:51.497 "dif_type": 0, 00:35:51.497 "assigned_rate_limits": { 00:35:51.497 "rw_ios_per_sec": 0, 00:35:51.497 "rw_mbytes_per_sec": 0, 00:35:51.497 "r_mbytes_per_sec": 0, 00:35:51.497 "w_mbytes_per_sec": 0 00:35:51.497 }, 00:35:51.497 "claimed": true, 00:35:51.497 "claim_type": "exclusive_write", 00:35:51.497 "zoned": false, 00:35:51.497 "supported_io_types": { 00:35:51.497 "read": true, 00:35:51.497 "write": true, 00:35:51.497 "unmap": true, 00:35:51.497 "write_zeroes": true, 00:35:51.497 "flush": true, 00:35:51.497 "reset": true, 00:35:51.497 "compare": false, 00:35:51.497 "compare_and_write": false, 00:35:51.497 "abort": true, 00:35:51.497 "nvme_admin": false, 00:35:51.497 "nvme_io": false 00:35:51.497 }, 00:35:51.497 "memory_domains": [ 00:35:51.497 { 00:35:51.497 "dma_device_id": "system", 00:35:51.497 "dma_device_type": 1 00:35:51.497 }, 00:35:51.497 { 00:35:51.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:51.497 "dma_device_type": 2 00:35:51.497 } 00:35:51.497 ], 00:35:51.497 "driver_specific": {} 00:35:51.497 } 00:35:51.497 ] 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:51.497 
12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:51.497 "name": "Existed_Raid", 00:35:51.497 "uuid": "0c4a5727-0635-4306-a4c0-564fd9c56ab3", 00:35:51.497 "strip_size_kb": 0, 00:35:51.497 "state": "configuring", 00:35:51.497 "raid_level": "raid1", 00:35:51.497 "superblock": true, 00:35:51.497 "num_base_bdevs": 2, 00:35:51.497 "num_base_bdevs_discovered": 1, 00:35:51.497 "num_base_bdevs_operational": 2, 00:35:51.497 "base_bdevs_list": [ 00:35:51.497 { 00:35:51.497 "name": "BaseBdev1", 00:35:51.497 "uuid": "dc095f30-9be7-40ee-985e-2ea4555ce7a6", 00:35:51.497 "is_configured": true, 00:35:51.497 "data_offset": 256, 00:35:51.497 "data_size": 7936 00:35:51.497 }, 00:35:51.497 { 00:35:51.497 "name": "BaseBdev2", 00:35:51.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.497 "is_configured": false, 00:35:51.497 "data_offset": 0, 00:35:51.497 "data_size": 0 00:35:51.497 } 00:35:51.497 ] 00:35:51.497 }' 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:51.497 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:52.063 12:17:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:35:52.320 [2024-07-21 12:17:51.106216] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:52.320 [2024-07-21 12:17:51.106386] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:35:52.320 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:35:52.577 [2024-07-21 12:17:51.378330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:52.577 [2024-07-21 12:17:51.380385] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:52.577 [2024-07-21 12:17:51.380574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:52.577 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:52.834 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:52.834 "name": "Existed_Raid", 00:35:52.834 "uuid": "9a38107c-3158-4ccf-a320-c249c82c09ce", 00:35:52.834 "strip_size_kb": 0, 00:35:52.834 "state": "configuring", 00:35:52.834 "raid_level": "raid1", 00:35:52.834 "superblock": true, 00:35:52.834 "num_base_bdevs": 2, 00:35:52.834 "num_base_bdevs_discovered": 1, 00:35:52.834 "num_base_bdevs_operational": 2, 00:35:52.834 "base_bdevs_list": [ 00:35:52.834 { 00:35:52.834 "name": "BaseBdev1", 00:35:52.834 "uuid": "dc095f30-9be7-40ee-985e-2ea4555ce7a6", 00:35:52.834 "is_configured": true, 00:35:52.834 "data_offset": 256, 00:35:52.834 "data_size": 7936 00:35:52.834 }, 00:35:52.834 { 00:35:52.834 "name": "BaseBdev2", 00:35:52.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:52.834 "is_configured": false, 00:35:52.834 "data_offset": 0, 00:35:52.834 "data_size": 0 00:35:52.834 } 00:35:52.834 ] 00:35:52.834 }' 00:35:52.834 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:52.834 12:17:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:53.398 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:35:53.656 [2024-07-21 12:17:52.425524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:53.656 [2024-07-21 12:17:52.426174] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:35:53.656 [2024-07-21 12:17:52.426438] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:35:53.656 BaseBdev2 00:35:53.656 [2024-07-21 12:17:52.426920] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:35:53.656 [2024-07-21 12:17:52.427404] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:35:53.656 [2024-07-21 12:17:52.427464] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:35:53.656 [2024-07-21 12:17:52.427729] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:53.656 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:35:53.656 12:17:52 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:35:53.656 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:35:53.656 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local i 00:35:53.656 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:35:53.656 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:35:53.656 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:35:53.914 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:54.171 [ 00:35:54.171 { 00:35:54.171 "name": "BaseBdev2", 00:35:54.171 "aliases": [ 00:35:54.171 "a610221c-099e-422d-b5c2-f283c9e7edee" 00:35:54.171 ], 00:35:54.171 "product_name": "Malloc disk", 00:35:54.171 "block_size": 4096, 00:35:54.171 "num_blocks": 8192, 00:35:54.171 "uuid": "a610221c-099e-422d-b5c2-f283c9e7edee", 00:35:54.171 "md_size": 32, 00:35:54.171 "md_interleave": false, 00:35:54.171 "dif_type": 0, 00:35:54.171 "assigned_rate_limits": { 00:35:54.171 "rw_ios_per_sec": 0, 00:35:54.171 "rw_mbytes_per_sec": 0, 00:35:54.171 "r_mbytes_per_sec": 0, 00:35:54.171 "w_mbytes_per_sec": 0 00:35:54.171 }, 00:35:54.171 "claimed": true, 00:35:54.171 "claim_type": "exclusive_write", 00:35:54.171 "zoned": false, 00:35:54.171 "supported_io_types": { 00:35:54.171 "read": true, 00:35:54.171 "write": true, 00:35:54.171 "unmap": true, 00:35:54.171 "write_zeroes": true, 00:35:54.171 "flush": true, 00:35:54.171 "reset": true, 00:35:54.171 "compare": false, 00:35:54.171 "compare_and_write": false, 00:35:54.171 "abort": true, 00:35:54.171 "nvme_admin": false, 00:35:54.171 "nvme_io": false 00:35:54.171 }, 00:35:54.171 "memory_domains": [ 00:35:54.171 { 00:35:54.171 "dma_device_id": "system", 00:35:54.171 "dma_device_type": 1 00:35:54.171 }, 00:35:54.171 { 00:35:54.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:54.171 "dma_device_type": 2 00:35:54.171 } 00:35:54.171 ], 00:35:54.171 "driver_specific": {} 00:35:54.171 } 00:35:54.171 ] 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # return 0 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:54.171 12:17:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:54.171 12:17:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:54.429 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:54.429 "name": "Existed_Raid", 00:35:54.429 "uuid": "9a38107c-3158-4ccf-a320-c249c82c09ce", 00:35:54.429 "strip_size_kb": 0, 00:35:54.429 "state": "online", 00:35:54.429 "raid_level": "raid1", 00:35:54.429 "superblock": true, 00:35:54.429 "num_base_bdevs": 2, 00:35:54.429 "num_base_bdevs_discovered": 2, 00:35:54.429 "num_base_bdevs_operational": 2, 00:35:54.429 "base_bdevs_list": [ 00:35:54.429 { 00:35:54.429 "name": "BaseBdev1", 00:35:54.429 "uuid": "dc095f30-9be7-40ee-985e-2ea4555ce7a6", 00:35:54.429 "is_configured": true, 00:35:54.429 "data_offset": 256, 00:35:54.429 "data_size": 7936 00:35:54.429 }, 00:35:54.429 { 00:35:54.429 "name": "BaseBdev2", 00:35:54.429 "uuid": "a610221c-099e-422d-b5c2-f283c9e7edee", 00:35:54.429 "is_configured": true, 00:35:54.429 "data_offset": 256, 00:35:54.429 "data_size": 7936 00:35:54.429 } 00:35:54.429 ] 00:35:54.429 }' 00:35:54.429 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:54.429 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:54.995 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:35:54.995 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:35:54.995 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:35:54.995 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:35:54.995 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:35:54.995 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:35:54.995 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:35:54.995 12:17:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:35:55.254 [2024-07-21 12:17:54.002073] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:55.254 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:35:55.254 "name": "Existed_Raid", 00:35:55.254 
"aliases": [ 00:35:55.254 "9a38107c-3158-4ccf-a320-c249c82c09ce" 00:35:55.254 ], 00:35:55.254 "product_name": "Raid Volume", 00:35:55.254 "block_size": 4096, 00:35:55.254 "num_blocks": 7936, 00:35:55.254 "uuid": "9a38107c-3158-4ccf-a320-c249c82c09ce", 00:35:55.254 "md_size": 32, 00:35:55.254 "md_interleave": false, 00:35:55.254 "dif_type": 0, 00:35:55.254 "assigned_rate_limits": { 00:35:55.254 "rw_ios_per_sec": 0, 00:35:55.254 "rw_mbytes_per_sec": 0, 00:35:55.254 "r_mbytes_per_sec": 0, 00:35:55.254 "w_mbytes_per_sec": 0 00:35:55.254 }, 00:35:55.254 "claimed": false, 00:35:55.254 "zoned": false, 00:35:55.254 "supported_io_types": { 00:35:55.254 "read": true, 00:35:55.254 "write": true, 00:35:55.254 "unmap": false, 00:35:55.254 "write_zeroes": true, 00:35:55.254 "flush": false, 00:35:55.254 "reset": true, 00:35:55.254 "compare": false, 00:35:55.254 "compare_and_write": false, 00:35:55.254 "abort": false, 00:35:55.254 "nvme_admin": false, 00:35:55.254 "nvme_io": false 00:35:55.254 }, 00:35:55.254 "memory_domains": [ 00:35:55.254 { 00:35:55.254 "dma_device_id": "system", 00:35:55.254 "dma_device_type": 1 00:35:55.254 }, 00:35:55.254 { 00:35:55.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:55.254 "dma_device_type": 2 00:35:55.254 }, 00:35:55.254 { 00:35:55.254 "dma_device_id": "system", 00:35:55.254 "dma_device_type": 1 00:35:55.254 }, 00:35:55.254 { 00:35:55.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:55.254 "dma_device_type": 2 00:35:55.254 } 00:35:55.254 ], 00:35:55.254 "driver_specific": { 00:35:55.254 "raid": { 00:35:55.254 "uuid": "9a38107c-3158-4ccf-a320-c249c82c09ce", 00:35:55.254 "strip_size_kb": 0, 00:35:55.254 "state": "online", 00:35:55.254 "raid_level": "raid1", 00:35:55.254 "superblock": true, 00:35:55.254 "num_base_bdevs": 2, 00:35:55.254 "num_base_bdevs_discovered": 2, 00:35:55.254 "num_base_bdevs_operational": 2, 00:35:55.254 "base_bdevs_list": [ 00:35:55.254 { 00:35:55.254 "name": "BaseBdev1", 00:35:55.254 "uuid": "dc095f30-9be7-40ee-985e-2ea4555ce7a6", 00:35:55.254 "is_configured": true, 00:35:55.254 "data_offset": 256, 00:35:55.254 "data_size": 7936 00:35:55.254 }, 00:35:55.254 { 00:35:55.254 "name": "BaseBdev2", 00:35:55.254 "uuid": "a610221c-099e-422d-b5c2-f283c9e7edee", 00:35:55.254 "is_configured": true, 00:35:55.254 "data_offset": 256, 00:35:55.254 "data_size": 7936 00:35:55.254 } 00:35:55.254 ] 00:35:55.254 } 00:35:55.254 } 00:35:55.254 }' 00:35:55.254 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:55.254 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:35:55.254 BaseBdev2' 00:35:55.254 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:55.254 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:35:55.254 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:55.512 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:55.512 "name": "BaseBdev1", 00:35:55.512 "aliases": [ 00:35:55.512 "dc095f30-9be7-40ee-985e-2ea4555ce7a6" 00:35:55.512 ], 00:35:55.512 "product_name": "Malloc disk", 00:35:55.512 "block_size": 4096, 00:35:55.512 "num_blocks": 
8192, 00:35:55.512 "uuid": "dc095f30-9be7-40ee-985e-2ea4555ce7a6", 00:35:55.512 "md_size": 32, 00:35:55.512 "md_interleave": false, 00:35:55.512 "dif_type": 0, 00:35:55.512 "assigned_rate_limits": { 00:35:55.512 "rw_ios_per_sec": 0, 00:35:55.512 "rw_mbytes_per_sec": 0, 00:35:55.512 "r_mbytes_per_sec": 0, 00:35:55.512 "w_mbytes_per_sec": 0 00:35:55.512 }, 00:35:55.512 "claimed": true, 00:35:55.512 "claim_type": "exclusive_write", 00:35:55.512 "zoned": false, 00:35:55.512 "supported_io_types": { 00:35:55.512 "read": true, 00:35:55.512 "write": true, 00:35:55.512 "unmap": true, 00:35:55.512 "write_zeroes": true, 00:35:55.512 "flush": true, 00:35:55.512 "reset": true, 00:35:55.512 "compare": false, 00:35:55.512 "compare_and_write": false, 00:35:55.512 "abort": true, 00:35:55.512 "nvme_admin": false, 00:35:55.512 "nvme_io": false 00:35:55.512 }, 00:35:55.512 "memory_domains": [ 00:35:55.512 { 00:35:55.512 "dma_device_id": "system", 00:35:55.512 "dma_device_type": 1 00:35:55.512 }, 00:35:55.513 { 00:35:55.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:55.513 "dma_device_type": 2 00:35:55.513 } 00:35:55.513 ], 00:35:55.513 "driver_specific": {} 00:35:55.513 }' 00:35:55.513 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:55.771 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:55.771 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:55.771 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:55.771 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:55.771 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:35:55.771 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:55.771 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:56.046 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:35:56.046 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:56.046 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:56.046 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:35:56.046 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:35:56.046 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:35:56.046 12:17:54 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:35:56.312 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:35:56.312 "name": "BaseBdev2", 00:35:56.312 "aliases": [ 00:35:56.312 "a610221c-099e-422d-b5c2-f283c9e7edee" 00:35:56.312 ], 00:35:56.312 "product_name": "Malloc disk", 00:35:56.312 "block_size": 4096, 00:35:56.312 "num_blocks": 8192, 00:35:56.312 "uuid": "a610221c-099e-422d-b5c2-f283c9e7edee", 00:35:56.312 "md_size": 32, 00:35:56.312 "md_interleave": false, 00:35:56.312 "dif_type": 0, 00:35:56.312 
"assigned_rate_limits": { 00:35:56.312 "rw_ios_per_sec": 0, 00:35:56.312 "rw_mbytes_per_sec": 0, 00:35:56.312 "r_mbytes_per_sec": 0, 00:35:56.312 "w_mbytes_per_sec": 0 00:35:56.312 }, 00:35:56.312 "claimed": true, 00:35:56.312 "claim_type": "exclusive_write", 00:35:56.312 "zoned": false, 00:35:56.312 "supported_io_types": { 00:35:56.312 "read": true, 00:35:56.312 "write": true, 00:35:56.312 "unmap": true, 00:35:56.312 "write_zeroes": true, 00:35:56.312 "flush": true, 00:35:56.312 "reset": true, 00:35:56.312 "compare": false, 00:35:56.312 "compare_and_write": false, 00:35:56.312 "abort": true, 00:35:56.312 "nvme_admin": false, 00:35:56.312 "nvme_io": false 00:35:56.312 }, 00:35:56.312 "memory_domains": [ 00:35:56.312 { 00:35:56.312 "dma_device_id": "system", 00:35:56.312 "dma_device_type": 1 00:35:56.312 }, 00:35:56.312 { 00:35:56.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:56.312 "dma_device_type": 2 00:35:56.312 } 00:35:56.312 ], 00:35:56.312 "driver_specific": {} 00:35:56.312 }' 00:35:56.312 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:56.312 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:35:56.312 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:35:56.312 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:56.569 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:35:56.569 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:35:56.569 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:56.569 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:35:56.569 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:35:56.569 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:56.569 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:35:56.826 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:35:56.826 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:35:57.084 [2024-07-21 12:17:55.738206] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.084 12:17:55 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:57.341 12:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:35:57.341 "name": "Existed_Raid", 00:35:57.342 "uuid": "9a38107c-3158-4ccf-a320-c249c82c09ce", 00:35:57.342 "strip_size_kb": 0, 00:35:57.342 "state": "online", 00:35:57.342 "raid_level": "raid1", 00:35:57.342 "superblock": true, 00:35:57.342 "num_base_bdevs": 2, 00:35:57.342 "num_base_bdevs_discovered": 1, 00:35:57.342 "num_base_bdevs_operational": 1, 00:35:57.342 "base_bdevs_list": [ 00:35:57.342 { 00:35:57.342 "name": null, 00:35:57.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:57.342 "is_configured": false, 00:35:57.342 "data_offset": 256, 00:35:57.342 "data_size": 7936 00:35:57.342 }, 00:35:57.342 { 00:35:57.342 "name": "BaseBdev2", 00:35:57.342 "uuid": "a610221c-099e-422d-b5c2-f283c9e7edee", 00:35:57.342 "is_configured": true, 00:35:57.342 "data_offset": 256, 00:35:57.342 "data_size": 7936 00:35:57.342 } 00:35:57.342 ] 00:35:57.342 }' 00:35:57.342 12:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:35:57.342 12:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:57.909 12:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:35:57.909 12:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:57.909 12:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:57.909 12:17:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:35:58.167 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:35:58.167 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:58.167 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:35:58.428 [2024-07-21 12:17:57.185349] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:58.428 [2024-07-21 12:17:57.185606] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:58.428 [2024-07-21 12:17:57.196608] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:58.428 [2024-07-21 12:17:57.196828] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:58.428 [2024-07-21 12:17:57.196936] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:35:58.428 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:35:58.428 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:35:58.428 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:35:58.428 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 170653 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 170653 ']' 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 170653 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 170653 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 170653' 00:35:58.687 killing process with pid 170653 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 170653 00:35:58.687 [2024-07-21 12:17:57.426739] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:58.687 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 170653 00:35:58.687 [2024-07-21 12:17:57.427057] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:58.945 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:35:58.945 00:35:58.945 real 0m10.089s 00:35:58.945 user 0m19.028s 
00:35:58.945 sys 0m1.236s 00:35:58.945 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:35:58.945 12:17:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:58.945 ************************************ 00:35:58.945 END TEST raid_state_function_test_sb_md_separate 00:35:58.945 ************************************ 00:35:58.945 12:17:57 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:35:58.945 12:17:57 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:35:58.945 12:17:57 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:35:58.945 12:17:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:58.945 ************************************ 00:35:58.945 START TEST raid_superblock_test_md_separate 00:35:58.945 ************************************ 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=171010 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 171010 /var/tmp/spdk-raid.sock 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@827 -- # '[' -z 171010 ']' 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:35:58.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:35:58.945 12:17:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:35:58.945 [2024-07-21 12:17:57.764655] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:35:58.945 [2024-07-21 12:17:57.765016] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171010 ] 00:35:59.202 [2024-07-21 12:17:57.919151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.202 [2024-07-21 12:17:57.971861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.202 [2024-07-21 12:17:58.024095] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # return 0 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:35:59.461 malloc1 00:35:59.461 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:59.719 [2024-07-21 12:17:58.451536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:59.719 [2024-07-21 12:17:58.451855] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:59.719 [2024-07-21 12:17:58.451938] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:35:59.719 [2024-07-21 12:17:58.452215] vbdev_passthru.c: 
691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:59.719 [2024-07-21 12:17:58.454495] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:59.719 [2024-07-21 12:17:58.454701] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:59.719 pt1 00:35:59.719 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:35:59.719 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:35:59.719 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:35:59.719 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:35:59.719 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:59.719 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:59.719 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:35:59.719 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:59.719 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:35:59.978 malloc2 00:35:59.978 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:00.235 [2024-07-21 12:17:58.862589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:00.235 [2024-07-21 12:17:58.862839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:00.235 [2024-07-21 12:17:58.862952] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:00.235 [2024-07-21 12:17:58.863088] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:00.235 [2024-07-21 12:17:58.865111] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:00.235 [2024-07-21 12:17:58.865312] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:00.235 pt2 00:36:00.235 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:00.235 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:00.235 12:17:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:36:00.235 [2024-07-21 12:17:59.078690] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:00.235 [2024-07-21 12:17:59.080853] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:00.235 [2024-07-21 12:17:59.081230] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 00:36:00.235 [2024-07-21 12:17:59.081366] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:00.235 [2024-07-21 12:17:59.081559] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005790 00:36:00.235 [2024-07-21 12:17:59.081801] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:36:00.235 [2024-07-21 12:17:59.081899] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:36:00.235 [2024-07-21 12:17:59.082093] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:00.235 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.493 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:00.493 "name": "raid_bdev1", 00:36:00.493 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:00.493 "strip_size_kb": 0, 00:36:00.493 "state": "online", 00:36:00.493 "raid_level": "raid1", 00:36:00.493 "superblock": true, 00:36:00.493 "num_base_bdevs": 2, 00:36:00.493 "num_base_bdevs_discovered": 2, 00:36:00.493 "num_base_bdevs_operational": 2, 00:36:00.493 "base_bdevs_list": [ 00:36:00.493 { 00:36:00.493 "name": "pt1", 00:36:00.494 "uuid": "8429b143-766a-5de1-87cc-754fc4b39b4f", 00:36:00.494 "is_configured": true, 00:36:00.494 "data_offset": 256, 00:36:00.494 "data_size": 7936 00:36:00.494 }, 00:36:00.494 { 00:36:00.494 "name": "pt2", 00:36:00.494 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:00.494 "is_configured": true, 00:36:00.494 "data_offset": 256, 00:36:00.494 "data_size": 7936 00:36:00.494 } 00:36:00.494 ] 00:36:00.494 }' 00:36:00.494 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:00.494 12:17:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:01.059 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:36:01.059 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:01.059 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 
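At this point the trace has built the whole md_separate RAID1 stack over JSON-RPC: two malloc bdevs carrying 32 bytes of separate metadata, a passthru bdev on each, and a superblock-backed raid1 volume that bdev_raid_get_bdevs reports as online with both base bdevs discovered. A minimal standalone sketch of that same RPC sequence, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock (as started by bdev_raid.sh@410) and using short rpc/sock shell variables only for readability, would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Two 32 MiB malloc bdevs with 4096-byte blocks and 32 bytes of separate metadata (bdev_raid.sh@424).
    $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -b malloc1
    $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -b malloc2

    # Passthru bdevs pt1/pt2 layered on top of the malloc bdevs (bdev_raid.sh@425).
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $rpc -s $sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

    # Assemble the raid1 volume with an on-disk superblock (-s), then read its state back
    # the same way verify_raid_bdev_state does (bdev_raid.sh@429 and @126).
    $rpc -s $sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'

The expected result is the JSON object shown above: state online, raid_level raid1, num_base_bdevs_discovered 2, md_size 32 with md_interleave false.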
00:36:01.059 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:01.059 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:01.059 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:36:01.059 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:01.059 12:17:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:01.318 [2024-07-21 12:18:00.071044] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:01.318 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:01.318 "name": "raid_bdev1", 00:36:01.318 "aliases": [ 00:36:01.318 "1ef4539f-14c0-4cbb-8805-4025f6e97994" 00:36:01.318 ], 00:36:01.318 "product_name": "Raid Volume", 00:36:01.318 "block_size": 4096, 00:36:01.318 "num_blocks": 7936, 00:36:01.318 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:01.318 "md_size": 32, 00:36:01.318 "md_interleave": false, 00:36:01.318 "dif_type": 0, 00:36:01.318 "assigned_rate_limits": { 00:36:01.318 "rw_ios_per_sec": 0, 00:36:01.318 "rw_mbytes_per_sec": 0, 00:36:01.318 "r_mbytes_per_sec": 0, 00:36:01.318 "w_mbytes_per_sec": 0 00:36:01.318 }, 00:36:01.318 "claimed": false, 00:36:01.318 "zoned": false, 00:36:01.318 "supported_io_types": { 00:36:01.318 "read": true, 00:36:01.318 "write": true, 00:36:01.318 "unmap": false, 00:36:01.318 "write_zeroes": true, 00:36:01.318 "flush": false, 00:36:01.318 "reset": true, 00:36:01.318 "compare": false, 00:36:01.318 "compare_and_write": false, 00:36:01.318 "abort": false, 00:36:01.318 "nvme_admin": false, 00:36:01.318 "nvme_io": false 00:36:01.318 }, 00:36:01.318 "memory_domains": [ 00:36:01.318 { 00:36:01.318 "dma_device_id": "system", 00:36:01.318 "dma_device_type": 1 00:36:01.318 }, 00:36:01.318 { 00:36:01.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:01.318 "dma_device_type": 2 00:36:01.318 }, 00:36:01.318 { 00:36:01.318 "dma_device_id": "system", 00:36:01.318 "dma_device_type": 1 00:36:01.318 }, 00:36:01.318 { 00:36:01.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:01.318 "dma_device_type": 2 00:36:01.318 } 00:36:01.318 ], 00:36:01.318 "driver_specific": { 00:36:01.318 "raid": { 00:36:01.318 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:01.318 "strip_size_kb": 0, 00:36:01.318 "state": "online", 00:36:01.318 "raid_level": "raid1", 00:36:01.318 "superblock": true, 00:36:01.318 "num_base_bdevs": 2, 00:36:01.318 "num_base_bdevs_discovered": 2, 00:36:01.318 "num_base_bdevs_operational": 2, 00:36:01.318 "base_bdevs_list": [ 00:36:01.318 { 00:36:01.318 "name": "pt1", 00:36:01.318 "uuid": "8429b143-766a-5de1-87cc-754fc4b39b4f", 00:36:01.318 "is_configured": true, 00:36:01.318 "data_offset": 256, 00:36:01.318 "data_size": 7936 00:36:01.318 }, 00:36:01.318 { 00:36:01.318 "name": "pt2", 00:36:01.318 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:01.318 "is_configured": true, 00:36:01.318 "data_offset": 256, 00:36:01.318 "data_size": 7936 00:36:01.318 } 00:36:01.318 ] 00:36:01.318 } 00:36:01.318 } 00:36:01.318 }' 00:36:01.318 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:01.318 12:18:00 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:01.318 pt2' 00:36:01.318 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:01.318 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:01.318 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:01.576 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:01.576 "name": "pt1", 00:36:01.576 "aliases": [ 00:36:01.576 "8429b143-766a-5de1-87cc-754fc4b39b4f" 00:36:01.576 ], 00:36:01.576 "product_name": "passthru", 00:36:01.576 "block_size": 4096, 00:36:01.576 "num_blocks": 8192, 00:36:01.576 "uuid": "8429b143-766a-5de1-87cc-754fc4b39b4f", 00:36:01.576 "md_size": 32, 00:36:01.576 "md_interleave": false, 00:36:01.576 "dif_type": 0, 00:36:01.576 "assigned_rate_limits": { 00:36:01.576 "rw_ios_per_sec": 0, 00:36:01.576 "rw_mbytes_per_sec": 0, 00:36:01.576 "r_mbytes_per_sec": 0, 00:36:01.576 "w_mbytes_per_sec": 0 00:36:01.576 }, 00:36:01.576 "claimed": true, 00:36:01.576 "claim_type": "exclusive_write", 00:36:01.576 "zoned": false, 00:36:01.577 "supported_io_types": { 00:36:01.577 "read": true, 00:36:01.577 "write": true, 00:36:01.577 "unmap": true, 00:36:01.577 "write_zeroes": true, 00:36:01.577 "flush": true, 00:36:01.577 "reset": true, 00:36:01.577 "compare": false, 00:36:01.577 "compare_and_write": false, 00:36:01.577 "abort": true, 00:36:01.577 "nvme_admin": false, 00:36:01.577 "nvme_io": false 00:36:01.577 }, 00:36:01.577 "memory_domains": [ 00:36:01.577 { 00:36:01.577 "dma_device_id": "system", 00:36:01.577 "dma_device_type": 1 00:36:01.577 }, 00:36:01.577 { 00:36:01.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:01.577 "dma_device_type": 2 00:36:01.577 } 00:36:01.577 ], 00:36:01.577 "driver_specific": { 00:36:01.577 "passthru": { 00:36:01.577 "name": "pt1", 00:36:01.577 "base_bdev_name": "malloc1" 00:36:01.577 } 00:36:01.577 } 00:36:01.577 }' 00:36:01.577 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:01.577 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:01.577 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:01.577 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:01.834 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:01.835 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:02.401 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:02.401 "name": "pt2", 00:36:02.401 "aliases": [ 00:36:02.401 "ca688fd9-ea26-595f-872e-c936ff9bf590" 00:36:02.401 ], 00:36:02.401 "product_name": "passthru", 00:36:02.401 "block_size": 4096, 00:36:02.401 "num_blocks": 8192, 00:36:02.401 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:02.401 "md_size": 32, 00:36:02.401 "md_interleave": false, 00:36:02.401 "dif_type": 0, 00:36:02.401 "assigned_rate_limits": { 00:36:02.401 "rw_ios_per_sec": 0, 00:36:02.401 "rw_mbytes_per_sec": 0, 00:36:02.401 "r_mbytes_per_sec": 0, 00:36:02.401 "w_mbytes_per_sec": 0 00:36:02.401 }, 00:36:02.401 "claimed": true, 00:36:02.401 "claim_type": "exclusive_write", 00:36:02.401 "zoned": false, 00:36:02.401 "supported_io_types": { 00:36:02.401 "read": true, 00:36:02.401 "write": true, 00:36:02.401 "unmap": true, 00:36:02.401 "write_zeroes": true, 00:36:02.401 "flush": true, 00:36:02.401 "reset": true, 00:36:02.401 "compare": false, 00:36:02.401 "compare_and_write": false, 00:36:02.401 "abort": true, 00:36:02.401 "nvme_admin": false, 00:36:02.401 "nvme_io": false 00:36:02.401 }, 00:36:02.401 "memory_domains": [ 00:36:02.401 { 00:36:02.401 "dma_device_id": "system", 00:36:02.401 "dma_device_type": 1 00:36:02.401 }, 00:36:02.401 { 00:36:02.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:02.401 "dma_device_type": 2 00:36:02.401 } 00:36:02.401 ], 00:36:02.401 "driver_specific": { 00:36:02.401 "passthru": { 00:36:02.401 "name": "pt2", 00:36:02.401 "base_bdev_name": "malloc2" 00:36:02.401 } 00:36:02.401 } 00:36:02.401 }' 00:36:02.401 12:18:00 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:02.401 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:02.660 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:02.660 12:18:01 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:36:02.660 [2024-07-21 12:18:01.443264] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:02.660 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=1ef4539f-14c0-4cbb-8805-4025f6e97994 00:36:02.660 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 1ef4539f-14c0-4cbb-8805-4025f6e97994 ']' 00:36:02.660 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:02.918 [2024-07-21 12:18:01.711180] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:02.918 [2024-07-21 12:18:01.711366] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:02.918 [2024-07-21 12:18:01.711581] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:02.918 [2024-07-21 12:18:01.711772] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:02.918 [2024-07-21 12:18:01.711893] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:36:02.918 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:36:02.918 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.176 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:36:03.176 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:36:03.176 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:03.176 12:18:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:03.435 12:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:03.435 12:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:03.693 12:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:03.693 12:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:03.693 12:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:36:03.693 12:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:03.693 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:36:03.693 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n 
raid_bdev1 00:36:03.693 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:03.694 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:03.694 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:03.694 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:03.694 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:03.694 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:03.694 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:03.694 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:03.694 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:36:03.951 [2024-07-21 12:18:02.783327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:03.951 [2024-07-21 12:18:02.785393] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:03.951 [2024-07-21 12:18:02.785659] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:03.951 [2024-07-21 12:18:02.786160] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:03.951 [2024-07-21 12:18:02.786510] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:03.951 [2024-07-21 12:18:02.786781] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:36:03.951 request: 00:36:03.951 { 00:36:03.951 "name": "raid_bdev1", 00:36:03.951 "raid_level": "raid1", 00:36:03.951 "base_bdevs": [ 00:36:03.951 "malloc1", 00:36:03.951 "malloc2" 00:36:03.951 ], 00:36:03.951 "superblock": false, 00:36:03.951 "method": "bdev_raid_create", 00:36:03.951 "req_id": 1 00:36:03.951 } 00:36:03.951 Got JSON-RPC error response 00:36:03.951 response: 00:36:03.951 { 00:36:03.951 "code": -17, 00:36:03.951 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:03.951 } 00:36:03.951 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:36:03.951 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:03.951 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:03.951 12:18:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:03.951 12:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:03.951 12:18:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:36:04.208 
12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:36:04.208 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:36:04.208 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:04.467 [2024-07-21 12:18:03.307442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:04.467 [2024-07-21 12:18:03.307752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:04.467 [2024-07-21 12:18:03.307953] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:36:04.467 [2024-07-21 12:18:03.308087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:04.467 [2024-07-21 12:18:03.310320] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:04.467 [2024-07-21 12:18:03.310530] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:04.467 [2024-07-21 12:18:03.310767] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:04.467 [2024-07-21 12:18:03.310978] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:04.467 pt1 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:04.467 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:04.724 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:04.724 "name": "raid_bdev1", 00:36:04.724 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:04.724 "strip_size_kb": 0, 00:36:04.724 "state": "configuring", 00:36:04.724 "raid_level": "raid1", 00:36:04.724 "superblock": true, 00:36:04.724 "num_base_bdevs": 2, 00:36:04.724 "num_base_bdevs_discovered": 1, 00:36:04.724 "num_base_bdevs_operational": 2, 00:36:04.724 "base_bdevs_list": [ 00:36:04.724 { 00:36:04.724 "name": "pt1", 
00:36:04.724 "uuid": "8429b143-766a-5de1-87cc-754fc4b39b4f", 00:36:04.724 "is_configured": true, 00:36:04.724 "data_offset": 256, 00:36:04.724 "data_size": 7936 00:36:04.724 }, 00:36:04.724 { 00:36:04.724 "name": null, 00:36:04.724 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:04.724 "is_configured": false, 00:36:04.724 "data_offset": 256, 00:36:04.724 "data_size": 7936 00:36:04.724 } 00:36:04.724 ] 00:36:04.724 }' 00:36:04.724 12:18:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:04.724 12:18:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:05.657 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:36:05.657 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:36:05.657 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:36:05.657 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:05.657 [2024-07-21 12:18:04.412848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:05.657 [2024-07-21 12:18:04.413128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:05.657 [2024-07-21 12:18:04.413204] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:36:05.657 [2024-07-21 12:18:04.413478] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:05.657 [2024-07-21 12:18:04.413746] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:05.657 [2024-07-21 12:18:04.413922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:05.657 [2024-07-21 12:18:04.414101] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:05.657 [2024-07-21 12:18:04.414220] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:05.657 [2024-07-21 12:18:04.414358] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:36:05.657 [2024-07-21 12:18:04.414482] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:05.658 [2024-07-21 12:18:04.414649] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:36:05.658 [2024-07-21 12:18:04.414871] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:36:05.658 [2024-07-21 12:18:04.414992] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:36:05.658 [2024-07-21 12:18:04.415159] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:05.658 pt2 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:05.658 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:05.916 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:05.916 "name": "raid_bdev1", 00:36:05.916 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:05.916 "strip_size_kb": 0, 00:36:05.916 "state": "online", 00:36:05.916 "raid_level": "raid1", 00:36:05.916 "superblock": true, 00:36:05.916 "num_base_bdevs": 2, 00:36:05.916 "num_base_bdevs_discovered": 2, 00:36:05.916 "num_base_bdevs_operational": 2, 00:36:05.916 "base_bdevs_list": [ 00:36:05.916 { 00:36:05.916 "name": "pt1", 00:36:05.916 "uuid": "8429b143-766a-5de1-87cc-754fc4b39b4f", 00:36:05.916 "is_configured": true, 00:36:05.916 "data_offset": 256, 00:36:05.916 "data_size": 7936 00:36:05.916 }, 00:36:05.916 { 00:36:05.916 "name": "pt2", 00:36:05.916 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:05.916 "is_configured": true, 00:36:05.916 "data_offset": 256, 00:36:05.916 "data_size": 7936 00:36:05.916 } 00:36:05.916 ] 00:36:05.916 }' 00:36:05.916 12:18:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:05.916 12:18:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:06.481 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:36:06.481 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:06.481 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:06.481 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:06.481 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:06.481 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:36:06.481 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:06.481 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:06.738 [2024-07-21 12:18:05.425224] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:06.738 
12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:06.738 "name": "raid_bdev1", 00:36:06.738 "aliases": [ 00:36:06.738 "1ef4539f-14c0-4cbb-8805-4025f6e97994" 00:36:06.738 ], 00:36:06.738 "product_name": "Raid Volume", 00:36:06.738 "block_size": 4096, 00:36:06.738 "num_blocks": 7936, 00:36:06.738 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:06.738 "md_size": 32, 00:36:06.738 "md_interleave": false, 00:36:06.738 "dif_type": 0, 00:36:06.738 "assigned_rate_limits": { 00:36:06.738 "rw_ios_per_sec": 0, 00:36:06.738 "rw_mbytes_per_sec": 0, 00:36:06.738 "r_mbytes_per_sec": 0, 00:36:06.738 "w_mbytes_per_sec": 0 00:36:06.738 }, 00:36:06.738 "claimed": false, 00:36:06.738 "zoned": false, 00:36:06.738 "supported_io_types": { 00:36:06.738 "read": true, 00:36:06.738 "write": true, 00:36:06.738 "unmap": false, 00:36:06.738 "write_zeroes": true, 00:36:06.738 "flush": false, 00:36:06.738 "reset": true, 00:36:06.738 "compare": false, 00:36:06.738 "compare_and_write": false, 00:36:06.738 "abort": false, 00:36:06.738 "nvme_admin": false, 00:36:06.738 "nvme_io": false 00:36:06.738 }, 00:36:06.738 "memory_domains": [ 00:36:06.738 { 00:36:06.738 "dma_device_id": "system", 00:36:06.738 "dma_device_type": 1 00:36:06.738 }, 00:36:06.738 { 00:36:06.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:06.738 "dma_device_type": 2 00:36:06.738 }, 00:36:06.738 { 00:36:06.738 "dma_device_id": "system", 00:36:06.738 "dma_device_type": 1 00:36:06.738 }, 00:36:06.738 { 00:36:06.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:06.738 "dma_device_type": 2 00:36:06.738 } 00:36:06.738 ], 00:36:06.738 "driver_specific": { 00:36:06.738 "raid": { 00:36:06.738 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:06.738 "strip_size_kb": 0, 00:36:06.738 "state": "online", 00:36:06.738 "raid_level": "raid1", 00:36:06.738 "superblock": true, 00:36:06.738 "num_base_bdevs": 2, 00:36:06.738 "num_base_bdevs_discovered": 2, 00:36:06.738 "num_base_bdevs_operational": 2, 00:36:06.738 "base_bdevs_list": [ 00:36:06.738 { 00:36:06.738 "name": "pt1", 00:36:06.738 "uuid": "8429b143-766a-5de1-87cc-754fc4b39b4f", 00:36:06.738 "is_configured": true, 00:36:06.738 "data_offset": 256, 00:36:06.738 "data_size": 7936 00:36:06.738 }, 00:36:06.738 { 00:36:06.738 "name": "pt2", 00:36:06.738 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:06.738 "is_configured": true, 00:36:06.739 "data_offset": 256, 00:36:06.739 "data_size": 7936 00:36:06.739 } 00:36:06.739 ] 00:36:06.739 } 00:36:06.739 } 00:36:06.739 }' 00:36:06.739 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:06.739 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:06.739 pt2' 00:36:06.739 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:06.739 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:06.739 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:06.996 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:06.996 "name": "pt1", 00:36:06.996 "aliases": [ 00:36:06.996 "8429b143-766a-5de1-87cc-754fc4b39b4f" 00:36:06.996 ], 00:36:06.996 "product_name": 
"passthru", 00:36:06.996 "block_size": 4096, 00:36:06.996 "num_blocks": 8192, 00:36:06.996 "uuid": "8429b143-766a-5de1-87cc-754fc4b39b4f", 00:36:06.996 "md_size": 32, 00:36:06.996 "md_interleave": false, 00:36:06.996 "dif_type": 0, 00:36:06.996 "assigned_rate_limits": { 00:36:06.996 "rw_ios_per_sec": 0, 00:36:06.996 "rw_mbytes_per_sec": 0, 00:36:06.996 "r_mbytes_per_sec": 0, 00:36:06.996 "w_mbytes_per_sec": 0 00:36:06.996 }, 00:36:06.996 "claimed": true, 00:36:06.996 "claim_type": "exclusive_write", 00:36:06.996 "zoned": false, 00:36:06.996 "supported_io_types": { 00:36:06.996 "read": true, 00:36:06.996 "write": true, 00:36:06.996 "unmap": true, 00:36:06.996 "write_zeroes": true, 00:36:06.996 "flush": true, 00:36:06.996 "reset": true, 00:36:06.996 "compare": false, 00:36:06.996 "compare_and_write": false, 00:36:06.996 "abort": true, 00:36:06.996 "nvme_admin": false, 00:36:06.996 "nvme_io": false 00:36:06.996 }, 00:36:06.996 "memory_domains": [ 00:36:06.996 { 00:36:06.996 "dma_device_id": "system", 00:36:06.996 "dma_device_type": 1 00:36:06.996 }, 00:36:06.996 { 00:36:06.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:06.996 "dma_device_type": 2 00:36:06.996 } 00:36:06.996 ], 00:36:06.996 "driver_specific": { 00:36:06.996 "passthru": { 00:36:06.996 "name": "pt1", 00:36:06.996 "base_bdev_name": "malloc1" 00:36:06.996 } 00:36:06.996 } 00:36:06.996 }' 00:36:06.996 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:06.996 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:06.996 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:06.996 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:06.996 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:07.254 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:07.254 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:07.254 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:07.254 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:07.254 12:18:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:07.254 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:07.254 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:07.254 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:07.254 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:07.254 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:07.510 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:07.510 "name": "pt2", 00:36:07.510 "aliases": [ 00:36:07.510 "ca688fd9-ea26-595f-872e-c936ff9bf590" 00:36:07.510 ], 00:36:07.510 "product_name": "passthru", 00:36:07.510 "block_size": 4096, 00:36:07.510 "num_blocks": 8192, 00:36:07.510 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:07.510 "md_size": 32, 
00:36:07.510 "md_interleave": false, 00:36:07.510 "dif_type": 0, 00:36:07.510 "assigned_rate_limits": { 00:36:07.510 "rw_ios_per_sec": 0, 00:36:07.510 "rw_mbytes_per_sec": 0, 00:36:07.510 "r_mbytes_per_sec": 0, 00:36:07.510 "w_mbytes_per_sec": 0 00:36:07.510 }, 00:36:07.510 "claimed": true, 00:36:07.510 "claim_type": "exclusive_write", 00:36:07.510 "zoned": false, 00:36:07.510 "supported_io_types": { 00:36:07.510 "read": true, 00:36:07.510 "write": true, 00:36:07.510 "unmap": true, 00:36:07.510 "write_zeroes": true, 00:36:07.510 "flush": true, 00:36:07.510 "reset": true, 00:36:07.510 "compare": false, 00:36:07.510 "compare_and_write": false, 00:36:07.510 "abort": true, 00:36:07.510 "nvme_admin": false, 00:36:07.510 "nvme_io": false 00:36:07.510 }, 00:36:07.510 "memory_domains": [ 00:36:07.510 { 00:36:07.510 "dma_device_id": "system", 00:36:07.510 "dma_device_type": 1 00:36:07.510 }, 00:36:07.510 { 00:36:07.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:07.510 "dma_device_type": 2 00:36:07.510 } 00:36:07.510 ], 00:36:07.510 "driver_specific": { 00:36:07.510 "passthru": { 00:36:07.510 "name": "pt2", 00:36:07.510 "base_bdev_name": "malloc2" 00:36:07.510 } 00:36:07.510 } 00:36:07.510 }' 00:36:07.510 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:07.768 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:07.768 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:36:07.768 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:07.768 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:07.768 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:07.768 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:07.768 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:07.768 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:36:08.026 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:08.026 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:08.026 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:08.026 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:08.026 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:36:08.284 [2024-07-21 12:18:06.913524] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:08.284 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 1ef4539f-14c0-4cbb-8805-4025f6e97994 '!=' 1ef4539f-14c0-4cbb-8805-4025f6e97994 ']' 00:36:08.284 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:36:08.284 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:08.284 12:18:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:36:08.284 12:18:06 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:08.543 [2024-07-21 12:18:07.177438] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:08.543 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:08.801 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:08.801 "name": "raid_bdev1", 00:36:08.801 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:08.801 "strip_size_kb": 0, 00:36:08.801 "state": "online", 00:36:08.801 "raid_level": "raid1", 00:36:08.801 "superblock": true, 00:36:08.801 "num_base_bdevs": 2, 00:36:08.801 "num_base_bdevs_discovered": 1, 00:36:08.801 "num_base_bdevs_operational": 1, 00:36:08.801 "base_bdevs_list": [ 00:36:08.801 { 00:36:08.801 "name": null, 00:36:08.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:08.801 "is_configured": false, 00:36:08.801 "data_offset": 256, 00:36:08.801 "data_size": 7936 00:36:08.801 }, 00:36:08.801 { 00:36:08.801 "name": "pt2", 00:36:08.801 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:08.801 "is_configured": true, 00:36:08.801 "data_offset": 256, 00:36:08.801 "data_size": 7936 00:36:08.801 } 00:36:08.801 ] 00:36:08.801 }' 00:36:08.801 12:18:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:08.801 12:18:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:09.366 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:09.624 [2024-07-21 12:18:08.273604] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:09.624 [2024-07-21 12:18:08.273763] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:09.624 [2024-07-21 12:18:08.273926] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:09.624 [2024-07-21 
12:18:08.274078] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:09.624 [2024-07-21 12:18:08.274178] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:36:09.624 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:09.624 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:36:09.624 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:36:09.624 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:36:09.624 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:36:09.624 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:36:09.624 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:09.883 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:36:09.883 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:36:09.883 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:36:09.883 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:36:09.883 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:36:09.883 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:10.141 [2024-07-21 12:18:08.913727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:10.141 [2024-07-21 12:18:08.913939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:10.141 [2024-07-21 12:18:08.914087] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:10.141 [2024-07-21 12:18:08.914254] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:10.141 [2024-07-21 12:18:08.916334] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:10.141 [2024-07-21 12:18:08.916511] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:10.141 [2024-07-21 12:18:08.916683] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:10.141 [2024-07-21 12:18:08.916810] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:10.141 [2024-07-21 12:18:08.916919] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:36:10.141 [2024-07-21 12:18:08.917041] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:10.141 [2024-07-21 12:18:08.917161] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:36:10.141 [2024-07-21 12:18:08.917378] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:36:10.141 [2024-07-21 12:18:08.917487] 
bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:36:10.141 [2024-07-21 12:18:08.917698] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:10.141 pt2 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:10.141 12:18:08 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:10.399 12:18:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:10.399 "name": "raid_bdev1", 00:36:10.399 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:10.399 "strip_size_kb": 0, 00:36:10.399 "state": "online", 00:36:10.399 "raid_level": "raid1", 00:36:10.399 "superblock": true, 00:36:10.399 "num_base_bdevs": 2, 00:36:10.399 "num_base_bdevs_discovered": 1, 00:36:10.399 "num_base_bdevs_operational": 1, 00:36:10.399 "base_bdevs_list": [ 00:36:10.399 { 00:36:10.399 "name": null, 00:36:10.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:10.399 "is_configured": false, 00:36:10.399 "data_offset": 256, 00:36:10.399 "data_size": 7936 00:36:10.399 }, 00:36:10.399 { 00:36:10.399 "name": "pt2", 00:36:10.399 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:10.399 "is_configured": true, 00:36:10.399 "data_offset": 256, 00:36:10.399 "data_size": 7936 00:36:10.399 } 00:36:10.399 ] 00:36:10.399 }' 00:36:10.399 12:18:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:10.399 12:18:09 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:10.963 12:18:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:11.221 [2024-07-21 12:18:09.975143] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:11.221 [2024-07-21 12:18:09.975297] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:11.221 [2024-07-21 12:18:09.975495] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
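The verify_raid_bdev_state calls traced above (bdev_raid.sh@116-128) reduce to a single bdev_raid_get_bdevs RPC plus a jq filter over the returned JSON. A minimal stand-alone sketch of that check, assuming the same RPC socket and repo layout used in this run (it is not the helper itself), is:

# Hedged sketch: the state check behind verify_raid_bdev_state, using the socket from this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
# At the @495/@522 checks above the expectation was "online" with one discovered base bdev.
[ "$state" = online ] && [ "$discovered" -eq 1 ] || echo "unexpected raid_bdev1 state: $info"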
00:36:11.221 [2024-07-21 12:18:09.975669] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:11.221 [2024-07-21 12:18:09.975772] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:36:11.221 12:18:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.221 12:18:09 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:36:11.479 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:36:11.479 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:36:11.479 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:36:11.479 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:11.736 [2024-07-21 12:18:10.427265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:11.736 [2024-07-21 12:18:10.427516] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:11.736 [2024-07-21 12:18:10.427607] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:36:11.736 [2024-07-21 12:18:10.427873] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:11.736 [2024-07-21 12:18:10.430159] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:11.736 [2024-07-21 12:18:10.430323] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:11.736 [2024-07-21 12:18:10.430543] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:11.736 [2024-07-21 12:18:10.430712] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:11.736 [2024-07-21 12:18:10.430895] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:11.736 [2024-07-21 12:18:10.431048] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:11.736 [2024-07-21 12:18:10.431111] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:36:11.736 [2024-07-21 12:18:10.431364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:11.736 [2024-07-21 12:18:10.431594] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:36:11.736 [2024-07-21 12:18:10.431705] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:11.736 [2024-07-21 12:18:10.431823] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:11.736 [2024-07-21 12:18:10.432027] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:36:11.736 [2024-07-21 12:18:10.432152] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:36:11.736 [2024-07-21 12:18:10.432360] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:11.736 pt1 00:36:11.736 12:18:10 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:11.736 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:11.994 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:11.994 "name": "raid_bdev1", 00:36:11.994 "uuid": "1ef4539f-14c0-4cbb-8805-4025f6e97994", 00:36:11.994 "strip_size_kb": 0, 00:36:11.994 "state": "online", 00:36:11.994 "raid_level": "raid1", 00:36:11.994 "superblock": true, 00:36:11.994 "num_base_bdevs": 2, 00:36:11.994 "num_base_bdevs_discovered": 1, 00:36:11.994 "num_base_bdevs_operational": 1, 00:36:11.994 "base_bdevs_list": [ 00:36:11.994 { 00:36:11.994 "name": null, 00:36:11.994 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.994 "is_configured": false, 00:36:11.994 "data_offset": 256, 00:36:11.994 "data_size": 7936 00:36:11.994 }, 00:36:11.994 { 00:36:11.994 "name": "pt2", 00:36:11.994 "uuid": "ca688fd9-ea26-595f-872e-c936ff9bf590", 00:36:11.994 "is_configured": true, 00:36:11.994 "data_offset": 256, 00:36:11.994 "data_size": 7936 00:36:11.994 } 00:36:11.994 ] 00:36:11.994 }' 00:36:11.994 12:18:10 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:11.994 12:18:10 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:12.560 12:18:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:36:12.560 12:18:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:12.818 12:18:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:36:12.818 12:18:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:12.818 12:18:11 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:36:13.075 [2024-07-21 12:18:11.732144] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 1ef4539f-14c0-4cbb-8805-4025f6e97994 '!=' 1ef4539f-14c0-4cbb-8805-4025f6e97994 ']' 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 171010 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@946 -- # '[' -z 171010 ']' 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # kill -0 171010 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # uname 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 171010 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 171010' 00:36:13.075 killing process with pid 171010 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@965 -- # kill 171010 00:36:13.075 [2024-07-21 12:18:11.775889] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:13.075 12:18:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # wait 171010 00:36:13.075 [2024-07-21 12:18:11.776135] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:13.075 [2024-07-21 12:18:11.776298] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:13.075 [2024-07-21 12:18:11.776387] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:36:13.075 [2024-07-21 12:18:11.804278] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:13.333 12:18:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:36:13.333 00:36:13.333 real 0m14.375s 00:36:13.333 user 0m27.331s 00:36:13.333 sys 0m1.816s 00:36:13.333 12:18:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:13.333 12:18:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:13.333 ************************************ 00:36:13.333 END TEST raid_superblock_test_md_separate 00:36:13.333 ************************************ 00:36:13.333 12:18:12 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' true = true ']' 00:36:13.333 12:18:12 bdev_raid -- bdev/bdev_raid.sh@908 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:36:13.333 12:18:12 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:36:13.333 12:18:12 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:13.333 12:18:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:13.333 ************************************ 00:36:13.333 START TEST raid_rebuild_test_sb_md_separate 00:36:13.333 
************************************ 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false true 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local verify=true 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local strip_size 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local create_arg 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local data_offset 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # raid_pid=171515 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # waitforlisten 171515 /var/tmp/spdk-raid.sock 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@827 -- # '[' -z 
171515 ']' 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:13.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:13.333 12:18:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:13.591 [2024-07-21 12:18:12.209848] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:13.591 [2024-07-21 12:18:12.210212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid171515 ] 00:36:13.591 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:13.591 Zero copy mechanism will not be used. 00:36:13.591 [2024-07-21 12:18:12.364746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.591 [2024-07-21 12:18:12.441035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.848 [2024-07-21 12:18:12.503493] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:14.412 12:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:14.412 12:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # return 0 00:36:14.412 12:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:14.412 12:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:36:14.669 BaseBdev1_malloc 00:36:14.669 12:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:14.926 [2024-07-21 12:18:13.628894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:14.926 [2024-07-21 12:18:13.629182] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:14.926 [2024-07-21 12:18:13.629399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:36:14.926 [2024-07-21 12:18:13.629552] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:14.926 [2024-07-21 12:18:13.631720] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:14.926 [2024-07-21 12:18:13.631911] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:14.926 BaseBdev1 00:36:14.926 12:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:36:14.926 12:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:36:15.183 BaseBdev2_malloc 00:36:15.183 12:18:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:15.440 [2024-07-21 12:18:14.096489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:15.440 [2024-07-21 12:18:14.096747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:15.440 [2024-07-21 12:18:14.096846] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:15.440 [2024-07-21 12:18:14.097105] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:15.440 [2024-07-21 12:18:14.099430] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:15.440 [2024-07-21 12:18:14.099608] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:15.440 BaseBdev2 00:36:15.440 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:36:15.440 spare_malloc 00:36:15.698 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:15.698 spare_delay 00:36:15.698 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:15.955 [2024-07-21 12:18:14.691604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:15.955 [2024-07-21 12:18:14.691827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:15.955 [2024-07-21 12:18:14.691900] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:36:15.955 [2024-07-21 12:18:14.692091] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:15.955 [2024-07-21 12:18:14.694172] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:15.955 [2024-07-21 12:18:14.694350] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:15.955 spare 00:36:15.955 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:36:16.213 [2024-07-21 12:18:14.883727] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:16.213 [2024-07-21 12:18:14.885749] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:16.213 [2024-07-21 12:18:14.886073] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:36:16.213 [2024-07-21 12:18:14.886191] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:16.213 [2024-07-21 12:18:14.886360] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:36:16.213 [2024-07-21 12:18:14.886632] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:36:16.213 [2024-07-21 
12:18:14.886741] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:36:16.213 [2024-07-21 12:18:14.886919] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:16.213 12:18:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:16.470 12:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:16.470 "name": "raid_bdev1", 00:36:16.470 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:16.470 "strip_size_kb": 0, 00:36:16.470 "state": "online", 00:36:16.470 "raid_level": "raid1", 00:36:16.470 "superblock": true, 00:36:16.470 "num_base_bdevs": 2, 00:36:16.470 "num_base_bdevs_discovered": 2, 00:36:16.470 "num_base_bdevs_operational": 2, 00:36:16.470 "base_bdevs_list": [ 00:36:16.470 { 00:36:16.470 "name": "BaseBdev1", 00:36:16.470 "uuid": "06d46bc0-b99e-55ed-bf20-5fc6b01c8ad6", 00:36:16.470 "is_configured": true, 00:36:16.470 "data_offset": 256, 00:36:16.470 "data_size": 7936 00:36:16.470 }, 00:36:16.470 { 00:36:16.470 "name": "BaseBdev2", 00:36:16.470 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:16.470 "is_configured": true, 00:36:16.470 "data_offset": 256, 00:36:16.470 "data_size": 7936 00:36:16.470 } 00:36:16.470 ] 00:36:16.470 }' 00:36:16.470 12:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:16.470 12:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:17.036 12:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:17.036 12:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:36:17.293 [2024-07-21 12:18:15.940069] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:17.293 12:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 
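The bdev_raid.sh@615 step above reads the raid bdev size back with one RPC and a jq filter (the @618 step just below reads the base bdev data offset the same way). A minimal stand-alone equivalent, assuming the socket and script paths used in this run, is:

# Hedged sketch of the @615 size query; it returned 7936 blocks (of 4096 bytes) in this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
raid_bdev_size=$("$rpc" -s "$sock" bdev_get_bdevs -b raid_bdev1 | jq -r '.[].num_blocks')
echo "raid_bdev1: ${raid_bdev_size} blocks"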
00:36:17.293 12:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:17.293 12:18:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@623 -- # '[' true = true ']' 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # local write_unit_size 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@627 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:17.551 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:17.551 [2024-07-21 12:18:16.387962] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:36:17.551 /dev/nbd0 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:36:17.808 1+0 records in 00:36:17.808 1+0 records out 00:36:17.808 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465261 s, 8.8 MB/s 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # '[' raid1 = raid5f ']' 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@632 -- # write_unit_size=1 00:36:17.808 12:18:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@634 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:36:18.373 7936+0 records in 00:36:18.373 7936+0 records out 00:36:18.373 32505856 bytes (33 MB, 31 MiB) copied, 0.684848 s, 47.5 MB/s 00:36:18.373 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:36:18.373 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:18.373 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:18.373 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:18.373 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:36:18.373 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:18.373 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:18.631 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:18.631 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:18.631 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:18.631 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:18.631 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:18.631 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:18.631 [2024-07-21 12:18:17.396458] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:18.631 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:18.631 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:18.631 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:36:18.888 [2024-07-21 12:18:17.580229] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:18.888 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:18.888 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:18.888 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:18.889 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:18.889 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:18.889 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:18.889 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:18.889 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:18.889 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:18.889 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:18.889 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:18.889 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:19.147 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:19.147 "name": "raid_bdev1", 00:36:19.147 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:19.147 "strip_size_kb": 0, 00:36:19.147 "state": "online", 00:36:19.147 "raid_level": "raid1", 00:36:19.147 "superblock": true, 00:36:19.147 "num_base_bdevs": 2, 00:36:19.147 "num_base_bdevs_discovered": 1, 00:36:19.147 "num_base_bdevs_operational": 1, 00:36:19.147 "base_bdevs_list": [ 00:36:19.147 { 00:36:19.147 "name": null, 00:36:19.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:19.147 "is_configured": false, 00:36:19.147 "data_offset": 256, 00:36:19.147 "data_size": 7936 00:36:19.147 }, 00:36:19.147 { 00:36:19.147 "name": "BaseBdev2", 00:36:19.147 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:19.147 "is_configured": true, 00:36:19.147 "data_offset": 256, 00:36:19.147 "data_size": 7936 00:36:19.147 } 00:36:19.147 ] 00:36:19.147 }' 00:36:19.147 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:19.147 12:18:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:19.761 12:18:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:19.761 [2024-07-21 12:18:18.572400] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:19.761 [2024-07-21 12:18:18.574738] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019fe30 00:36:19.761 [2024-07-21 12:18:18.576769] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:36:19.761 12:18:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # sleep 1 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:21.146 "name": "raid_bdev1", 00:36:21.146 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:21.146 "strip_size_kb": 0, 00:36:21.146 "state": "online", 00:36:21.146 "raid_level": "raid1", 00:36:21.146 "superblock": true, 00:36:21.146 "num_base_bdevs": 2, 00:36:21.146 "num_base_bdevs_discovered": 2, 00:36:21.146 "num_base_bdevs_operational": 2, 00:36:21.146 "process": { 00:36:21.146 "type": "rebuild", 00:36:21.146 "target": "spare", 00:36:21.146 "progress": { 00:36:21.146 "blocks": 3072, 00:36:21.146 "percent": 38 00:36:21.146 } 00:36:21.146 }, 00:36:21.146 "base_bdevs_list": [ 00:36:21.146 { 00:36:21.146 "name": "spare", 00:36:21.146 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:21.146 "is_configured": true, 00:36:21.146 "data_offset": 256, 00:36:21.146 "data_size": 7936 00:36:21.146 }, 00:36:21.146 { 00:36:21.146 "name": "BaseBdev2", 00:36:21.146 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:21.146 "is_configured": true, 00:36:21.146 "data_offset": 256, 00:36:21.146 "data_size": 7936 00:36:21.146 } 00:36:21.146 ] 00:36:21.146 }' 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:21.146 12:18:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:21.405 [2024-07-21 12:18:20.198037] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:21.664 [2024-07-21 12:18:20.286524] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:21.664 [2024-07-21 12:18:20.286800] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:21.664 [2024-07-21 12:18:20.286860] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:21.664 [2024-07-21 12:18:20.287015] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: 
*ERROR*: Failed to remove target bdev: No such device 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:21.664 "name": "raid_bdev1", 00:36:21.664 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:21.664 "strip_size_kb": 0, 00:36:21.664 "state": "online", 00:36:21.664 "raid_level": "raid1", 00:36:21.664 "superblock": true, 00:36:21.664 "num_base_bdevs": 2, 00:36:21.664 "num_base_bdevs_discovered": 1, 00:36:21.664 "num_base_bdevs_operational": 1, 00:36:21.664 "base_bdevs_list": [ 00:36:21.664 { 00:36:21.664 "name": null, 00:36:21.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:21.664 "is_configured": false, 00:36:21.664 "data_offset": 256, 00:36:21.664 "data_size": 7936 00:36:21.664 }, 00:36:21.664 { 00:36:21.664 "name": "BaseBdev2", 00:36:21.664 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:21.664 "is_configured": true, 00:36:21.664 "data_offset": 256, 00:36:21.664 "data_size": 7936 00:36:21.664 } 00:36:21.664 ] 00:36:21.664 }' 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:21.664 12:18:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:22.599 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:22.599 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:22.599 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:22.599 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:22.599 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:22.599 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:22.599 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:22.599 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:22.599 "name": "raid_bdev1", 00:36:22.599 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:22.599 "strip_size_kb": 0, 00:36:22.599 "state": "online", 00:36:22.599 "raid_level": "raid1", 00:36:22.599 "superblock": true, 00:36:22.599 "num_base_bdevs": 2, 00:36:22.599 "num_base_bdevs_discovered": 1, 00:36:22.599 "num_base_bdevs_operational": 1, 00:36:22.600 "base_bdevs_list": [ 00:36:22.600 { 00:36:22.600 "name": null, 00:36:22.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:22.600 "is_configured": false, 00:36:22.600 "data_offset": 256, 00:36:22.600 "data_size": 7936 00:36:22.600 }, 00:36:22.600 { 00:36:22.600 "name": "BaseBdev2", 00:36:22.600 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:22.600 "is_configured": true, 00:36:22.600 "data_offset": 256, 00:36:22.600 "data_size": 7936 00:36:22.600 } 00:36:22.600 ] 00:36:22.600 }' 00:36:22.600 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:22.600 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:22.600 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:22.858 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:22.858 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:23.116 [2024-07-21 12:18:21.750844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:23.116 [2024-07-21 12:18:21.753087] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ffd0 00:36:23.116 [2024-07-21 12:18:21.755115] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:23.116 12:18:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # sleep 1 00:36:24.049 12:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:24.049 12:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:24.049 12:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:24.049 12:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:24.049 12:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:24.049 12:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.049 12:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:24.307 12:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:24.307 "name": "raid_bdev1", 00:36:24.307 "uuid": 
"75067cb0-f292-4669-b5a2-0d1835337682", 00:36:24.307 "strip_size_kb": 0, 00:36:24.307 "state": "online", 00:36:24.307 "raid_level": "raid1", 00:36:24.307 "superblock": true, 00:36:24.307 "num_base_bdevs": 2, 00:36:24.307 "num_base_bdevs_discovered": 2, 00:36:24.307 "num_base_bdevs_operational": 2, 00:36:24.307 "process": { 00:36:24.307 "type": "rebuild", 00:36:24.307 "target": "spare", 00:36:24.307 "progress": { 00:36:24.307 "blocks": 2816, 00:36:24.307 "percent": 35 00:36:24.307 } 00:36:24.307 }, 00:36:24.307 "base_bdevs_list": [ 00:36:24.307 { 00:36:24.307 "name": "spare", 00:36:24.307 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:24.307 "is_configured": true, 00:36:24.307 "data_offset": 256, 00:36:24.307 "data_size": 7936 00:36:24.307 }, 00:36:24.307 { 00:36:24.307 "name": "BaseBdev2", 00:36:24.307 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:24.307 "is_configured": true, 00:36:24.307 "data_offset": 256, 00:36:24.307 "data_size": 7936 00:36:24.307 } 00:36:24.307 ] 00:36:24.307 }' 00:36:24.307 12:18:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:36:24.307 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@705 -- # local timeout=1388 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:24.307 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:24.308 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:24.308 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:24.308 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:24.565 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:24.565 "name": "raid_bdev1", 
00:36:24.565 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:24.565 "strip_size_kb": 0, 00:36:24.565 "state": "online", 00:36:24.565 "raid_level": "raid1", 00:36:24.565 "superblock": true, 00:36:24.565 "num_base_bdevs": 2, 00:36:24.566 "num_base_bdevs_discovered": 2, 00:36:24.566 "num_base_bdevs_operational": 2, 00:36:24.566 "process": { 00:36:24.566 "type": "rebuild", 00:36:24.566 "target": "spare", 00:36:24.566 "progress": { 00:36:24.566 "blocks": 3584, 00:36:24.566 "percent": 45 00:36:24.566 } 00:36:24.566 }, 00:36:24.566 "base_bdevs_list": [ 00:36:24.566 { 00:36:24.566 "name": "spare", 00:36:24.566 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:24.566 "is_configured": true, 00:36:24.566 "data_offset": 256, 00:36:24.566 "data_size": 7936 00:36:24.566 }, 00:36:24.566 { 00:36:24.566 "name": "BaseBdev2", 00:36:24.566 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:24.566 "is_configured": true, 00:36:24.566 "data_offset": 256, 00:36:24.566 "data_size": 7936 00:36:24.566 } 00:36:24.566 ] 00:36:24.566 }' 00:36:24.566 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:24.566 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:24.566 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:24.566 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:24.566 12:18:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:25.937 "name": "raid_bdev1", 00:36:25.937 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:25.937 "strip_size_kb": 0, 00:36:25.937 "state": "online", 00:36:25.937 "raid_level": "raid1", 00:36:25.937 "superblock": true, 00:36:25.937 "num_base_bdevs": 2, 00:36:25.937 "num_base_bdevs_discovered": 2, 00:36:25.937 "num_base_bdevs_operational": 2, 00:36:25.937 "process": { 00:36:25.937 "type": "rebuild", 00:36:25.937 "target": "spare", 00:36:25.937 "progress": { 00:36:25.937 "blocks": 7168, 00:36:25.937 "percent": 90 00:36:25.937 } 00:36:25.937 }, 00:36:25.937 "base_bdevs_list": [ 00:36:25.937 { 00:36:25.937 "name": "spare", 00:36:25.937 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:25.937 
"is_configured": true, 00:36:25.937 "data_offset": 256, 00:36:25.937 "data_size": 7936 00:36:25.937 }, 00:36:25.937 { 00:36:25.937 "name": "BaseBdev2", 00:36:25.937 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:25.937 "is_configured": true, 00:36:25.937 "data_offset": 256, 00:36:25.937 "data_size": 7936 00:36:25.937 } 00:36:25.937 ] 00:36:25.937 }' 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:25.937 12:18:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@710 -- # sleep 1 00:36:26.195 [2024-07-21 12:18:24.871210] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:26.195 [2024-07-21 12:18:24.871413] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:26.195 [2024-07-21 12:18:24.871695] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:27.125 "name": "raid_bdev1", 00:36:27.125 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:27.125 "strip_size_kb": 0, 00:36:27.125 "state": "online", 00:36:27.125 "raid_level": "raid1", 00:36:27.125 "superblock": true, 00:36:27.125 "num_base_bdevs": 2, 00:36:27.125 "num_base_bdevs_discovered": 2, 00:36:27.125 "num_base_bdevs_operational": 2, 00:36:27.125 "base_bdevs_list": [ 00:36:27.125 { 00:36:27.125 "name": "spare", 00:36:27.125 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:27.125 "is_configured": true, 00:36:27.125 "data_offset": 256, 00:36:27.125 "data_size": 7936 00:36:27.125 }, 00:36:27.125 { 00:36:27.125 "name": "BaseBdev2", 00:36:27.125 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:27.125 "is_configured": true, 00:36:27.125 "data_offset": 256, 00:36:27.125 "data_size": 7936 00:36:27.125 } 00:36:27.125 ] 00:36:27.125 }' 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:27.125 12:18:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:27.125 12:18:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:27.382 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:36:27.382 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # break 00:36:27.382 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:27.382 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:27.382 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:27.382 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:27.382 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:27.382 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.382 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:27.639 "name": "raid_bdev1", 00:36:27.639 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:27.639 "strip_size_kb": 0, 00:36:27.639 "state": "online", 00:36:27.639 "raid_level": "raid1", 00:36:27.639 "superblock": true, 00:36:27.639 "num_base_bdevs": 2, 00:36:27.639 "num_base_bdevs_discovered": 2, 00:36:27.639 "num_base_bdevs_operational": 2, 00:36:27.639 "base_bdevs_list": [ 00:36:27.639 { 00:36:27.639 "name": "spare", 00:36:27.639 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:27.639 "is_configured": true, 00:36:27.639 "data_offset": 256, 00:36:27.639 "data_size": 7936 00:36:27.639 }, 00:36:27.639 { 00:36:27.639 "name": "BaseBdev2", 00:36:27.639 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:27.639 "is_configured": true, 00:36:27.639 "data_offset": 256, 00:36:27.639 "data_size": 7936 00:36:27.639 } 00:36:27.639 ] 00:36:27.639 }' 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:27.639 12:18:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:27.639 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:27.896 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:27.896 "name": "raid_bdev1", 00:36:27.896 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:27.896 "strip_size_kb": 0, 00:36:27.896 "state": "online", 00:36:27.896 "raid_level": "raid1", 00:36:27.896 "superblock": true, 00:36:27.896 "num_base_bdevs": 2, 00:36:27.896 "num_base_bdevs_discovered": 2, 00:36:27.896 "num_base_bdevs_operational": 2, 00:36:27.896 "base_bdevs_list": [ 00:36:27.896 { 00:36:27.896 "name": "spare", 00:36:27.896 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:27.896 "is_configured": true, 00:36:27.896 "data_offset": 256, 00:36:27.896 "data_size": 7936 00:36:27.896 }, 00:36:27.896 { 00:36:27.896 "name": "BaseBdev2", 00:36:27.896 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:27.896 "is_configured": true, 00:36:27.896 "data_offset": 256, 00:36:27.896 "data_size": 7936 00:36:27.896 } 00:36:27.896 ] 00:36:27.896 }' 00:36:27.896 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:27.896 12:18:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:28.460 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:28.717 [2024-07-21 12:18:27.451288] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:28.717 [2024-07-21 12:18:27.451443] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:28.717 [2024-07-21 12:18:27.451635] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:28.717 [2024-07-21 12:18:27.451879] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:28.717 [2024-07-21 12:18:27.451998] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:36:28.717 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # jq length 00:36:28.717 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@721 -- # '[' true = true ']' 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@736 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:28.979 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:29.239 /dev/nbd0 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:29.239 1+0 records in 00:36:29.239 1+0 records out 00:36:29.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483454 s, 8.5 MB/s 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:36:29.239 
12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:29.239 12:18:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:36:29.498 /dev/nbd1 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@865 -- # local i 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # break 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:29.498 1+0 records in 00:36:29.498 1+0 records out 00:36:29.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612063 s, 6.7 MB/s 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@882 -- # size=4096 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # return 0 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # 
local i 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:29.498 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:29.755 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:36:30.013 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:30.271 12:18:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:30.530 [2024-07-21 12:18:29.231156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:30.530 [2024-07-21 12:18:29.231387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:30.530 [2024-07-21 12:18:29.231466] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:36:30.530 [2024-07-21 12:18:29.231723] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:30.530 [2024-07-21 12:18:29.233927] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:30.530 
[2024-07-21 12:18:29.234128] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:30.530 [2024-07-21 12:18:29.234334] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:30.530 [2024-07-21 12:18:29.234532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:30.530 [2024-07-21 12:18:29.234862] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:30.530 spare 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:30.530 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:30.530 [2024-07-21 12:18:29.335116] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a280 00:36:30.530 [2024-07-21 12:18:29.335282] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:30.530 [2024-07-21 12:18:29.335469] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c19b0 00:36:30.530 [2024-07-21 12:18:29.335740] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a280 00:36:30.530 [2024-07-21 12:18:29.335850] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a280 00:36:30.530 [2024-07-21 12:18:29.336031] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:30.788 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:30.788 "name": "raid_bdev1", 00:36:30.788 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:30.788 "strip_size_kb": 0, 00:36:30.788 "state": "online", 00:36:30.788 "raid_level": "raid1", 00:36:30.788 "superblock": true, 00:36:30.788 "num_base_bdevs": 2, 00:36:30.788 "num_base_bdevs_discovered": 2, 00:36:30.788 "num_base_bdevs_operational": 2, 00:36:30.788 "base_bdevs_list": [ 00:36:30.788 { 00:36:30.788 "name": "spare", 00:36:30.788 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:30.788 "is_configured": true, 00:36:30.788 "data_offset": 256, 00:36:30.788 "data_size": 7936 00:36:30.788 
}, 00:36:30.788 { 00:36:30.788 "name": "BaseBdev2", 00:36:30.788 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:30.788 "is_configured": true, 00:36:30.788 "data_offset": 256, 00:36:30.788 "data_size": 7936 00:36:30.788 } 00:36:30.788 ] 00:36:30.788 }' 00:36:30.788 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:30.788 12:18:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:31.355 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:31.355 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:31.355 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:31.355 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:31.355 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:31.355 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.355 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.613 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:31.613 "name": "raid_bdev1", 00:36:31.613 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:31.613 "strip_size_kb": 0, 00:36:31.613 "state": "online", 00:36:31.613 "raid_level": "raid1", 00:36:31.613 "superblock": true, 00:36:31.613 "num_base_bdevs": 2, 00:36:31.613 "num_base_bdevs_discovered": 2, 00:36:31.613 "num_base_bdevs_operational": 2, 00:36:31.613 "base_bdevs_list": [ 00:36:31.613 { 00:36:31.613 "name": "spare", 00:36:31.613 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:31.613 "is_configured": true, 00:36:31.613 "data_offset": 256, 00:36:31.613 "data_size": 7936 00:36:31.613 }, 00:36:31.613 { 00:36:31.613 "name": "BaseBdev2", 00:36:31.613 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:31.613 "is_configured": true, 00:36:31.613 "data_offset": 256, 00:36:31.613 "data_size": 7936 00:36:31.613 } 00:36:31.613 ] 00:36:31.613 }' 00:36:31.613 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:31.613 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:31.613 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:31.613 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:31.613 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:31.613 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:31.870 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:36:31.870 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:36:32.128 
[2024-07-21 12:18:30.970680] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:32.128 12:18:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:32.385 12:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:32.385 "name": "raid_bdev1", 00:36:32.385 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:32.385 "strip_size_kb": 0, 00:36:32.385 "state": "online", 00:36:32.385 "raid_level": "raid1", 00:36:32.385 "superblock": true, 00:36:32.385 "num_base_bdevs": 2, 00:36:32.385 "num_base_bdevs_discovered": 1, 00:36:32.385 "num_base_bdevs_operational": 1, 00:36:32.385 "base_bdevs_list": [ 00:36:32.385 { 00:36:32.385 "name": null, 00:36:32.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:32.385 "is_configured": false, 00:36:32.385 "data_offset": 256, 00:36:32.385 "data_size": 7936 00:36:32.385 }, 00:36:32.385 { 00:36:32.385 "name": "BaseBdev2", 00:36:32.385 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:32.385 "is_configured": true, 00:36:32.385 "data_offset": 256, 00:36:32.385 "data_size": 7936 00:36:32.385 } 00:36:32.385 ] 00:36:32.385 }' 00:36:32.385 12:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:32.385 12:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:33.319 12:18:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:36:33.319 [2024-07-21 12:18:32.030901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:33.319 [2024-07-21 12:18:32.031263] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:33.319 [2024-07-21 12:18:32.031393] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
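The xtrace that follows repeats the verify_raid_bdev_process helper already seen above: it polls bdev_raid_get_bdevs over the raid-test RPC socket and filters the output with jq until the rebuild triggered by re-adding "spare" reports the expected process type and target. A minimal, stand-alone sketch of that polling loop, limited to the same rpc.py and jq invocations that appear in the trace (the 30-second timeout is assumed for illustration; the harness manages its own timeout variable), looks like:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  timeout=30
  while (( SECONDS < timeout )); do
      # Pull the current view of raid_bdev1 and inspect the background process fields.
      info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
      type=$(jq -r '.process.type // "none"' <<< "$info")
      target=$(jq -r '.process.target // "none"' <<< "$info")
      [[ $type == rebuild && $target == spare ]] && break
      sleep 1
  done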
00:36:33.319 [2024-07-21 12:18:32.031489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:33.319 [2024-07-21 12:18:32.033579] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:36:33.319 [2024-07-21 12:18:32.035593] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:33.319 12:18:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # sleep 1 00:36:34.253 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:34.253 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:34.253 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:34.253 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:34.254 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:34.254 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:34.254 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.511 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:34.511 "name": "raid_bdev1", 00:36:34.511 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:34.511 "strip_size_kb": 0, 00:36:34.511 "state": "online", 00:36:34.511 "raid_level": "raid1", 00:36:34.511 "superblock": true, 00:36:34.511 "num_base_bdevs": 2, 00:36:34.511 "num_base_bdevs_discovered": 2, 00:36:34.511 "num_base_bdevs_operational": 2, 00:36:34.511 "process": { 00:36:34.511 "type": "rebuild", 00:36:34.511 "target": "spare", 00:36:34.511 "progress": { 00:36:34.512 "blocks": 3072, 00:36:34.512 "percent": 38 00:36:34.512 } 00:36:34.512 }, 00:36:34.512 "base_bdevs_list": [ 00:36:34.512 { 00:36:34.512 "name": "spare", 00:36:34.512 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:34.512 "is_configured": true, 00:36:34.512 "data_offset": 256, 00:36:34.512 "data_size": 7936 00:36:34.512 }, 00:36:34.512 { 00:36:34.512 "name": "BaseBdev2", 00:36:34.512 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:34.512 "is_configured": true, 00:36:34.512 "data_offset": 256, 00:36:34.512 "data_size": 7936 00:36:34.512 } 00:36:34.512 ] 00:36:34.512 }' 00:36:34.512 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:34.512 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:34.512 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:34.512 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:34.512 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:34.769 [2024-07-21 12:18:33.600697] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:35.027 [2024-07-21 12:18:33.643716] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid 
bdev raid_bdev1: No such device 00:36:35.027 [2024-07-21 12:18:33.643914] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:35.027 [2024-07-21 12:18:33.643969] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:35.027 [2024-07-21 12:18:33.644094] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:35.027 "name": "raid_bdev1", 00:36:35.027 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:35.027 "strip_size_kb": 0, 00:36:35.027 "state": "online", 00:36:35.027 "raid_level": "raid1", 00:36:35.027 "superblock": true, 00:36:35.027 "num_base_bdevs": 2, 00:36:35.027 "num_base_bdevs_discovered": 1, 00:36:35.027 "num_base_bdevs_operational": 1, 00:36:35.027 "base_bdevs_list": [ 00:36:35.027 { 00:36:35.027 "name": null, 00:36:35.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:35.027 "is_configured": false, 00:36:35.027 "data_offset": 256, 00:36:35.027 "data_size": 7936 00:36:35.027 }, 00:36:35.027 { 00:36:35.027 "name": "BaseBdev2", 00:36:35.027 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:35.027 "is_configured": true, 00:36:35.027 "data_offset": 256, 00:36:35.027 "data_size": 7936 00:36:35.027 } 00:36:35.027 ] 00:36:35.027 }' 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:35.027 12:18:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:35.957 12:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:36:35.958 [2024-07-21 12:18:34.726853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:35.958 [2024-07-21 12:18:34.727096] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:35.958 [2024-07-21 12:18:34.727168] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:36:35.958 [2024-07-21 12:18:34.727424] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:35.958 [2024-07-21 12:18:34.727703] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:35.958 [2024-07-21 12:18:34.727851] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:35.958 [2024-07-21 12:18:34.728044] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:35.958 [2024-07-21 12:18:34.728166] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:35.958 [2024-07-21 12:18:34.728271] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:35.958 [2024-07-21 12:18:34.728359] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:35.958 [2024-07-21 12:18:34.730047] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1e90 00:36:35.958 [2024-07-21 12:18:34.732122] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:35.958 spare 00:36:35.958 12:18:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # sleep 1 00:36:36.890 12:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:36.890 12:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:36.890 12:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:36:36.890 12:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=spare 00:36:36.890 12:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:36.890 12:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:36.890 12:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.148 12:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:37.148 "name": "raid_bdev1", 00:36:37.148 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:37.148 "strip_size_kb": 0, 00:36:37.148 "state": "online", 00:36:37.148 "raid_level": "raid1", 00:36:37.148 "superblock": true, 00:36:37.148 "num_base_bdevs": 2, 00:36:37.148 "num_base_bdevs_discovered": 2, 00:36:37.148 "num_base_bdevs_operational": 2, 00:36:37.148 "process": { 00:36:37.148 "type": "rebuild", 00:36:37.148 "target": "spare", 00:36:37.148 "progress": { 00:36:37.148 "blocks": 3072, 00:36:37.148 "percent": 38 00:36:37.148 } 00:36:37.148 }, 00:36:37.148 "base_bdevs_list": [ 00:36:37.148 { 00:36:37.148 "name": "spare", 00:36:37.148 "uuid": "f11b19c9-237d-5154-b8a1-0ec0185c59aa", 00:36:37.148 "is_configured": true, 00:36:37.148 "data_offset": 256, 00:36:37.148 "data_size": 7936 00:36:37.148 }, 00:36:37.148 { 00:36:37.148 "name": "BaseBdev2", 00:36:37.148 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:37.148 "is_configured": true, 00:36:37.148 
"data_offset": 256, 00:36:37.148 "data_size": 7936 00:36:37.148 } 00:36:37.148 ] 00:36:37.148 }' 00:36:37.148 12:18:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:37.407 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:37.407 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:37.407 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:36:37.407 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:36:37.665 [2024-07-21 12:18:36.345178] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:37.665 [2024-07-21 12:18:36.440734] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:37.665 [2024-07-21 12:18:36.440931] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:37.665 [2024-07-21 12:18:36.441057] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:37.665 [2024-07-21 12:18:36.441100] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:37.665 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:37.923 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:37.923 "name": "raid_bdev1", 00:36:37.923 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:37.923 "strip_size_kb": 0, 00:36:37.923 "state": "online", 00:36:37.923 "raid_level": "raid1", 00:36:37.923 "superblock": true, 00:36:37.923 "num_base_bdevs": 2, 00:36:37.923 "num_base_bdevs_discovered": 1, 00:36:37.923 "num_base_bdevs_operational": 1, 00:36:37.923 "base_bdevs_list": [ 00:36:37.923 { 00:36:37.923 "name": null, 00:36:37.923 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:36:37.923 "is_configured": false, 00:36:37.923 "data_offset": 256, 00:36:37.923 "data_size": 7936 00:36:37.923 }, 00:36:37.923 { 00:36:37.923 "name": "BaseBdev2", 00:36:37.923 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:37.923 "is_configured": true, 00:36:37.923 "data_offset": 256, 00:36:37.923 "data_size": 7936 00:36:37.923 } 00:36:37.923 ] 00:36:37.923 }' 00:36:37.923 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:37.923 12:18:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:38.491 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:38.491 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:38.491 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:38.491 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:38.491 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:38.491 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:38.491 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:38.749 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:38.749 "name": "raid_bdev1", 00:36:38.749 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:38.749 "strip_size_kb": 0, 00:36:38.749 "state": "online", 00:36:38.749 "raid_level": "raid1", 00:36:38.749 "superblock": true, 00:36:38.749 "num_base_bdevs": 2, 00:36:38.749 "num_base_bdevs_discovered": 1, 00:36:38.749 "num_base_bdevs_operational": 1, 00:36:38.749 "base_bdevs_list": [ 00:36:38.749 { 00:36:38.749 "name": null, 00:36:38.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:38.749 "is_configured": false, 00:36:38.749 "data_offset": 256, 00:36:38.749 "data_size": 7936 00:36:38.749 }, 00:36:38.749 { 00:36:38.749 "name": "BaseBdev2", 00:36:38.749 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:38.749 "is_configured": true, 00:36:38.749 "data_offset": 256, 00:36:38.749 "data_size": 7936 00:36:38.749 } 00:36:38.749 ] 00:36:38.749 }' 00:36:38.749 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:38.750 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:38.750 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:39.008 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:39.008 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:36:39.266 12:18:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:39.527 [2024-07-21 12:18:38.150554] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev1_malloc 00:36:39.527 [2024-07-21 12:18:38.150821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:39.527 [2024-07-21 12:18:38.150991] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:36:39.527 [2024-07-21 12:18:38.151148] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:39.527 [2024-07-21 12:18:38.151480] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:39.527 [2024-07-21 12:18:38.151648] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:39.527 [2024-07-21 12:18:38.151822] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:39.527 [2024-07-21 12:18:38.151938] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:39.527 [2024-07-21 12:18:38.152030] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:39.527 BaseBdev1 00:36:39.527 12:18:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # sleep 1 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:40.461 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.718 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:40.718 "name": "raid_bdev1", 00:36:40.718 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:40.718 "strip_size_kb": 0, 00:36:40.718 "state": "online", 00:36:40.718 "raid_level": "raid1", 00:36:40.718 "superblock": true, 00:36:40.718 "num_base_bdevs": 2, 00:36:40.718 "num_base_bdevs_discovered": 1, 00:36:40.718 "num_base_bdevs_operational": 1, 00:36:40.718 "base_bdevs_list": [ 00:36:40.718 { 00:36:40.718 "name": null, 00:36:40.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:40.718 "is_configured": false, 00:36:40.718 "data_offset": 256, 00:36:40.718 "data_size": 7936 00:36:40.718 }, 00:36:40.718 { 00:36:40.718 "name": 
"BaseBdev2", 00:36:40.718 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:40.718 "is_configured": true, 00:36:40.718 "data_offset": 256, 00:36:40.718 "data_size": 7936 00:36:40.718 } 00:36:40.718 ] 00:36:40.718 }' 00:36:40.718 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:40.718 12:18:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:41.282 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:41.282 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:41.283 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:41.283 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:41.283 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:41.283 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:41.283 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:41.540 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:41.540 "name": "raid_bdev1", 00:36:41.540 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:41.540 "strip_size_kb": 0, 00:36:41.540 "state": "online", 00:36:41.540 "raid_level": "raid1", 00:36:41.540 "superblock": true, 00:36:41.540 "num_base_bdevs": 2, 00:36:41.540 "num_base_bdevs_discovered": 1, 00:36:41.540 "num_base_bdevs_operational": 1, 00:36:41.540 "base_bdevs_list": [ 00:36:41.540 { 00:36:41.540 "name": null, 00:36:41.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:41.540 "is_configured": false, 00:36:41.540 "data_offset": 256, 00:36:41.540 "data_size": 7936 00:36:41.540 }, 00:36:41.540 { 00:36:41.540 "name": "BaseBdev2", 00:36:41.540 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:41.540 "is_configured": true, 00:36:41.540 "data_offset": 256, 00:36:41.540 "data_size": 7936 00:36:41.540 } 00:36:41.540 ] 00:36:41.540 }' 00:36:41.541 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:41.541 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:41.541 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@636 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:36:41.799 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:42.057 [2024-07-21 12:18:40.670891] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:42.057 [2024-07-21 12:18:40.671200] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:42.057 [2024-07-21 12:18:40.671324] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:42.057 request: 00:36:42.057 { 00:36:42.057 "raid_bdev": "raid_bdev1", 00:36:42.057 "base_bdev": "BaseBdev1", 00:36:42.057 "method": "bdev_raid_add_base_bdev", 00:36:42.057 "req_id": 1 00:36:42.057 } 00:36:42.057 Got JSON-RPC error response 00:36:42.057 response: 00:36:42.057 { 00:36:42.057 "code": -22, 00:36:42.057 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:42.057 } 00:36:42.057 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@651 -- # es=1 00:36:42.057 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:42.057 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:42.057 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:42.057 12:18:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # sleep 1 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:42.992 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:43.250 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:43.250 "name": "raid_bdev1", 00:36:43.250 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:43.250 "strip_size_kb": 0, 00:36:43.250 "state": "online", 00:36:43.250 "raid_level": "raid1", 00:36:43.250 "superblock": true, 00:36:43.250 "num_base_bdevs": 2, 00:36:43.250 "num_base_bdevs_discovered": 1, 00:36:43.250 "num_base_bdevs_operational": 1, 00:36:43.250 "base_bdevs_list": [ 00:36:43.250 { 00:36:43.250 "name": null, 00:36:43.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:43.250 "is_configured": false, 00:36:43.250 "data_offset": 256, 00:36:43.250 "data_size": 7936 00:36:43.250 }, 00:36:43.250 { 00:36:43.250 "name": "BaseBdev2", 00:36:43.250 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:43.250 "is_configured": true, 00:36:43.250 "data_offset": 256, 00:36:43.250 "data_size": 7936 00:36:43.250 } 00:36:43.250 ] 00:36:43.250 }' 00:36:43.250 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:43.250 12:18:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:43.818 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:43.818 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:36:43.818 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:36:43.818 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local target=none 00:36:43.818 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:36:43.818 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:43.818 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:44.077 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:44.077 "name": "raid_bdev1", 00:36:44.077 "uuid": "75067cb0-f292-4669-b5a2-0d1835337682", 00:36:44.077 "strip_size_kb": 0, 00:36:44.077 "state": "online", 00:36:44.077 "raid_level": "raid1", 00:36:44.077 "superblock": true, 00:36:44.077 "num_base_bdevs": 2, 00:36:44.077 "num_base_bdevs_discovered": 1, 00:36:44.077 "num_base_bdevs_operational": 1, 00:36:44.077 "base_bdevs_list": [ 00:36:44.077 { 00:36:44.077 "name": null, 00:36:44.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.077 "is_configured": false, 00:36:44.077 "data_offset": 256, 00:36:44.077 "data_size": 7936 
00:36:44.077 }, 00:36:44.077 { 00:36:44.077 "name": "BaseBdev2", 00:36:44.077 "uuid": "6275cbbc-7bf1-5479-8fd7-283549e60e23", 00:36:44.077 "is_configured": true, 00:36:44.077 "data_offset": 256, 00:36:44.077 "data_size": 7936 00:36:44.077 } 00:36:44.077 ] 00:36:44.077 }' 00:36:44.078 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:36:44.078 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:36:44.078 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:36:44.334 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:36:44.334 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@782 -- # killprocess 171515 00:36:44.334 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@946 -- # '[' -z 171515 ']' 00:36:44.334 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # kill -0 171515 00:36:44.334 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # uname 00:36:44.334 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:44.334 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 171515 00:36:44.334 killing process with pid 171515 00:36:44.334 Received shutdown signal, test time was about 60.000000 seconds 00:36:44.335 00:36:44.335 Latency(us) 00:36:44.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.335 =================================================================================================================== 00:36:44.335 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:44.335 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:44.335 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:44.335 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # echo 'killing process with pid 171515' 00:36:44.335 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@965 -- # kill 171515 00:36:44.335 12:18:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # wait 171515 00:36:44.335 [2024-07-21 12:18:42.969688] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:44.335 [2024-07-21 12:18:42.969816] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:44.335 [2024-07-21 12:18:42.969915] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:44.335 [2024-07-21 12:18:42.969927] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state offline 00:36:44.335 [2024-07-21 12:18:42.998641] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:44.593 12:18:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # return 0 00:36:44.593 00:36:44.593 real 0m31.073s 00:36:44.593 user 0m50.178s 00:36:44.593 ************************************ 00:36:44.593 END TEST raid_rebuild_test_sb_md_separate 00:36:44.593 ************************************ 00:36:44.593 sys 0m3.339s 
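The failure exercised just above is the interesting part of this rebuild test: re-adding a base bdev whose superblock sequence number is older than the array's must be rejected with -22, and the array must stay online in degraded mode. The lines below are a condensed sketch of that check, not the literal test code; the $rpc shorthand and the bare [ ... ] assertions stand in for the verify_raid_bdev_state helper, and a bdev_svc target is assumed to be listening on /var/tmp/spdk-raid.sock with raid_bdev1 configured as in the log.

  # Sketch: a stale base bdev re-add must fail; the array stays online and degraded.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # BaseBdev1 carries superblock seq_number 1, smaller than raid_bdev1's 5,
  # so the RPC is expected to come back with -22 (Invalid argument).
  if $rpc bdev_raid_add_base_bdev raid_bdev1 BaseBdev1; then
      echo "stale base bdev was accepted unexpectedly" >&2
      exit 1
  fi

  info=$($rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
  [ "$(echo "$info" | jq -r .state)" = online ]
  [ "$(echo "$info" | jq -r .num_base_bdevs_discovered)" -eq 1 ]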
00:36:44.593 12:18:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:44.593 12:18:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:44.593 12:18:43 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:36:44.593 12:18:43 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:36:44.593 12:18:43 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:36:44.593 12:18:43 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:44.593 12:18:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:44.593 ************************************ 00:36:44.593 START TEST raid_state_function_test_sb_md_interleaved 00:36:44.593 ************************************ 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_state_function_test raid1 2 true 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev1 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # echo BaseBdev2 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # 
strip_size=0 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=172383 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 172383' 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:36:44.593 Process raid pid: 172383 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 172383 /var/tmp/spdk-raid.sock 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 172383 ']' 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:44.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:44.593 12:18:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:44.593 [2024-07-21 12:18:43.363644] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
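The state-function test starting here talks to a dedicated bdev_svc application over its own RPC socket. Condensed, the bring-up and the first assertion look roughly like the sketch below; waitforlisten is the helper from autotest_common.sh that the test already uses, the paths are the ones from the log, and error handling is omitted.

  # Sketch of the test bring-up: start the RPC target, then create a raid1 bdev
  # whose base bdevs do not exist yet, which leaves it in the "configuring" state.
  rootdir=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk-raid.sock

  "$rootdir/test/app/bdev_svc/bdev_svc" -r "$sock" -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" "$sock"

  "$rootdir/scripts/rpc.py" -s "$sock" bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  "$rootdir/scripts/rpc.py" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # prints "configuring"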
00:36:44.593 [2024-07-21 12:18:43.364173] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:44.862 [2024-07-21 12:18:43.531926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.862 [2024-07-21 12:18:43.586104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:44.862 [2024-07-21 12:18:43.637438] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:45.485 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:45.485 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:36:45.485 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:45.743 [2024-07-21 12:18:44.412933] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:45.743 [2024-07-21 12:18:44.413162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:45.743 [2024-07-21 12:18:44.413283] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:45.743 [2024-07-21 12:18:44.413422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:45.743 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:46.001 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:46.002 "name": "Existed_Raid", 00:36:46.002 "uuid": "e611dbe5-bc61-4460-81ae-14ccfb9ad56b", 00:36:46.002 "strip_size_kb": 0, 
00:36:46.002 "state": "configuring", 00:36:46.002 "raid_level": "raid1", 00:36:46.002 "superblock": true, 00:36:46.002 "num_base_bdevs": 2, 00:36:46.002 "num_base_bdevs_discovered": 0, 00:36:46.002 "num_base_bdevs_operational": 2, 00:36:46.002 "base_bdevs_list": [ 00:36:46.002 { 00:36:46.002 "name": "BaseBdev1", 00:36:46.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.002 "is_configured": false, 00:36:46.002 "data_offset": 0, 00:36:46.002 "data_size": 0 00:36:46.002 }, 00:36:46.002 { 00:36:46.002 "name": "BaseBdev2", 00:36:46.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.002 "is_configured": false, 00:36:46.002 "data_offset": 0, 00:36:46.002 "data_size": 0 00:36:46.002 } 00:36:46.002 ] 00:36:46.002 }' 00:36:46.002 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:46.002 12:18:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:46.568 12:18:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:46.826 [2024-07-21 12:18:45.584954] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:46.826 [2024-07-21 12:18:45.585139] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006380 name Existed_Raid, state configuring 00:36:46.826 12:18:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:47.084 [2024-07-21 12:18:45.853020] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:47.084 [2024-07-21 12:18:45.853243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:47.084 [2024-07-21 12:18:45.853439] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:47.084 [2024-07-21 12:18:45.853520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:47.084 12:18:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:36:47.342 [2024-07-21 12:18:46.051640] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:47.342 BaseBdev1 00:36:47.342 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:36:47.342 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev1 00:36:47.342 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:36:47.342 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:36:47.342 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:36:47.342 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:36:47.342 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:47.600 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:47.600 [ 00:36:47.600 { 00:36:47.600 "name": "BaseBdev1", 00:36:47.600 "aliases": [ 00:36:47.600 "5a56c195-dc4e-43f9-a5ff-303079f48372" 00:36:47.600 ], 00:36:47.600 "product_name": "Malloc disk", 00:36:47.600 "block_size": 4128, 00:36:47.600 "num_blocks": 8192, 00:36:47.600 "uuid": "5a56c195-dc4e-43f9-a5ff-303079f48372", 00:36:47.600 "md_size": 32, 00:36:47.600 "md_interleave": true, 00:36:47.600 "dif_type": 0, 00:36:47.600 "assigned_rate_limits": { 00:36:47.600 "rw_ios_per_sec": 0, 00:36:47.600 "rw_mbytes_per_sec": 0, 00:36:47.600 "r_mbytes_per_sec": 0, 00:36:47.600 "w_mbytes_per_sec": 0 00:36:47.600 }, 00:36:47.600 "claimed": true, 00:36:47.600 "claim_type": "exclusive_write", 00:36:47.600 "zoned": false, 00:36:47.600 "supported_io_types": { 00:36:47.600 "read": true, 00:36:47.600 "write": true, 00:36:47.600 "unmap": true, 00:36:47.600 "write_zeroes": true, 00:36:47.600 "flush": true, 00:36:47.600 "reset": true, 00:36:47.600 "compare": false, 00:36:47.600 "compare_and_write": false, 00:36:47.600 "abort": true, 00:36:47.600 "nvme_admin": false, 00:36:47.600 "nvme_io": false 00:36:47.600 }, 00:36:47.600 "memory_domains": [ 00:36:47.600 { 00:36:47.600 "dma_device_id": "system", 00:36:47.600 "dma_device_type": 1 00:36:47.600 }, 00:36:47.600 { 00:36:47.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:47.600 "dma_device_type": 2 00:36:47.600 } 00:36:47.601 ], 00:36:47.601 "driver_specific": {} 00:36:47.601 } 00:36:47.601 ] 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:47.601 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:47.859 12:18:46 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:47.859 "name": "Existed_Raid", 00:36:47.859 "uuid": "319982e4-2e91-4543-b44f-ed0d08af469a", 00:36:47.859 "strip_size_kb": 0, 00:36:47.859 "state": "configuring", 00:36:47.859 "raid_level": "raid1", 00:36:47.859 "superblock": true, 00:36:47.859 "num_base_bdevs": 2, 00:36:47.859 "num_base_bdevs_discovered": 1, 00:36:47.859 "num_base_bdevs_operational": 2, 00:36:47.859 "base_bdevs_list": [ 00:36:47.859 { 00:36:47.859 "name": "BaseBdev1", 00:36:47.859 "uuid": "5a56c195-dc4e-43f9-a5ff-303079f48372", 00:36:47.859 "is_configured": true, 00:36:47.859 "data_offset": 256, 00:36:47.859 "data_size": 7936 00:36:47.859 }, 00:36:47.859 { 00:36:47.859 "name": "BaseBdev2", 00:36:47.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.859 "is_configured": false, 00:36:47.859 "data_offset": 0, 00:36:47.859 "data_size": 0 00:36:47.859 } 00:36:47.859 ] 00:36:47.859 }' 00:36:47.859 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:47.859 12:18:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:48.426 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:36:48.683 [2024-07-21 12:18:47.459921] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:48.683 [2024-07-21 12:18:47.460079] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:36:48.683 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:36:48.940 [2024-07-21 12:18:47.643995] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:48.940 [2024-07-21 12:18:47.646008] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:48.940 [2024-07-21 12:18:47.646175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:48.940 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:36:48.940 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:48.941 12:18:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:48.941 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:49.199 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:49.199 "name": "Existed_Raid", 00:36:49.199 "uuid": "1459d7d9-35d9-4f4a-afe0-caf60b500a31", 00:36:49.199 "strip_size_kb": 0, 00:36:49.199 "state": "configuring", 00:36:49.199 "raid_level": "raid1", 00:36:49.199 "superblock": true, 00:36:49.199 "num_base_bdevs": 2, 00:36:49.199 "num_base_bdevs_discovered": 1, 00:36:49.199 "num_base_bdevs_operational": 2, 00:36:49.199 "base_bdevs_list": [ 00:36:49.199 { 00:36:49.199 "name": "BaseBdev1", 00:36:49.199 "uuid": "5a56c195-dc4e-43f9-a5ff-303079f48372", 00:36:49.199 "is_configured": true, 00:36:49.199 "data_offset": 256, 00:36:49.199 "data_size": 7936 00:36:49.199 }, 00:36:49.199 { 00:36:49.199 "name": "BaseBdev2", 00:36:49.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.199 "is_configured": false, 00:36:49.199 "data_offset": 0, 00:36:49.199 "data_size": 0 00:36:49.199 } 00:36:49.199 ] 00:36:49.199 }' 00:36:49.199 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:49.199 12:18:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:49.765 12:18:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:36:50.022 [2024-07-21 12:18:48.770825] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:50.022 [2024-07-21 12:18:48.771186] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000006f80 00:36:50.022 [2024-07-21 12:18:48.771316] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:36:50.022 [2024-07-21 12:18:48.771501] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:36:50.022 [2024-07-21 12:18:48.771747] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000006f80 00:36:50.022 [2024-07-21 12:18:48.771877] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000006f80 00:36:50.022 [2024-07-21 12:18:48.772051] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:50.022 BaseBdev2 00:36:50.022 12:18:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:36:50.022 12:18:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@895 -- # local bdev_name=BaseBdev2 00:36:50.022 12:18:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:36:50.022 12:18:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local i 00:36:50.022 12:18:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:36:50.022 12:18:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:36:50.022 12:18:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:36:50.279 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:50.537 [ 00:36:50.537 { 00:36:50.537 "name": "BaseBdev2", 00:36:50.537 "aliases": [ 00:36:50.537 "44bac832-9001-4c9e-ba25-1d48f8a77a44" 00:36:50.537 ], 00:36:50.537 "product_name": "Malloc disk", 00:36:50.537 "block_size": 4128, 00:36:50.537 "num_blocks": 8192, 00:36:50.537 "uuid": "44bac832-9001-4c9e-ba25-1d48f8a77a44", 00:36:50.537 "md_size": 32, 00:36:50.537 "md_interleave": true, 00:36:50.537 "dif_type": 0, 00:36:50.537 "assigned_rate_limits": { 00:36:50.537 "rw_ios_per_sec": 0, 00:36:50.537 "rw_mbytes_per_sec": 0, 00:36:50.537 "r_mbytes_per_sec": 0, 00:36:50.537 "w_mbytes_per_sec": 0 00:36:50.537 }, 00:36:50.537 "claimed": true, 00:36:50.537 "claim_type": "exclusive_write", 00:36:50.537 "zoned": false, 00:36:50.537 "supported_io_types": { 00:36:50.537 "read": true, 00:36:50.537 "write": true, 00:36:50.537 "unmap": true, 00:36:50.537 "write_zeroes": true, 00:36:50.537 "flush": true, 00:36:50.537 "reset": true, 00:36:50.537 "compare": false, 00:36:50.537 "compare_and_write": false, 00:36:50.538 "abort": true, 00:36:50.538 "nvme_admin": false, 00:36:50.538 "nvme_io": false 00:36:50.538 }, 00:36:50.538 "memory_domains": [ 00:36:50.538 { 00:36:50.538 "dma_device_id": "system", 00:36:50.538 "dma_device_type": 1 00:36:50.538 }, 00:36:50.538 { 00:36:50.538 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:50.538 "dma_device_type": 2 00:36:50.538 } 00:36:50.538 ], 00:36:50.538 "driver_specific": {} 00:36:50.538 } 00:36:50.538 ] 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # return 0 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
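Both base bdevs in this test are malloc bdevs created with 32 bytes of interleaved per-block metadata (-m 32 -i), which is exactly what the verify_raid_bdev_properties checks further down assert: block_size 4128, md_size 32, md_interleave true, dif_type 0. A minimal sketch of that property check follows, reusing the $rpc shorthand introduced in the earlier sketch; it is illustrative only, since the test has already created BaseBdev2 at this point.

  # Sketch: a 4096-byte data block plus 32 bytes of interleaved metadata is
  # reported as a 4128-byte block, which the test asserts on every bdev it checks.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2      # same arguments the test uses
  bdev=$($rpc bdev_get_bdevs -b BaseBdev2 | jq '.[]')

  [ "$(echo "$bdev" | jq .block_size)" -eq 4128 ]
  [ "$(echo "$bdev" | jq .md_size)" -eq 32 ]
  [ "$(echo "$bdev" | jq .md_interleave)" = true ]
  [ "$(echo "$bdev" | jq .dif_type)" -eq 0 ]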
00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:50.538 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:50.795 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:50.795 "name": "Existed_Raid", 00:36:50.795 "uuid": "1459d7d9-35d9-4f4a-afe0-caf60b500a31", 00:36:50.795 "strip_size_kb": 0, 00:36:50.795 "state": "online", 00:36:50.795 "raid_level": "raid1", 00:36:50.795 "superblock": true, 00:36:50.795 "num_base_bdevs": 2, 00:36:50.795 "num_base_bdevs_discovered": 2, 00:36:50.795 "num_base_bdevs_operational": 2, 00:36:50.795 "base_bdevs_list": [ 00:36:50.795 { 00:36:50.795 "name": "BaseBdev1", 00:36:50.795 "uuid": "5a56c195-dc4e-43f9-a5ff-303079f48372", 00:36:50.795 "is_configured": true, 00:36:50.795 "data_offset": 256, 00:36:50.795 "data_size": 7936 00:36:50.795 }, 00:36:50.795 { 00:36:50.795 "name": "BaseBdev2", 00:36:50.795 "uuid": "44bac832-9001-4c9e-ba25-1d48f8a77a44", 00:36:50.795 "is_configured": true, 00:36:50.795 "data_offset": 256, 00:36:50.795 "data_size": 7936 00:36:50.795 } 00:36:50.795 ] 00:36:50.795 }' 00:36:50.795 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:50.796 12:18:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:51.360 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:36:51.360 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:36:51.360 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:51.360 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:51.360 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:51.360 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:36:51.360 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:36:51.360 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:51.618 [2024-07-21 12:18:50.259399] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:51.618 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:51.618 "name": "Existed_Raid", 00:36:51.618 "aliases": [ 00:36:51.618 "1459d7d9-35d9-4f4a-afe0-caf60b500a31" 00:36:51.618 ], 00:36:51.618 "product_name": "Raid Volume", 00:36:51.618 "block_size": 4128, 00:36:51.618 "num_blocks": 7936, 
00:36:51.618 "uuid": "1459d7d9-35d9-4f4a-afe0-caf60b500a31", 00:36:51.618 "md_size": 32, 00:36:51.618 "md_interleave": true, 00:36:51.618 "dif_type": 0, 00:36:51.618 "assigned_rate_limits": { 00:36:51.618 "rw_ios_per_sec": 0, 00:36:51.618 "rw_mbytes_per_sec": 0, 00:36:51.618 "r_mbytes_per_sec": 0, 00:36:51.618 "w_mbytes_per_sec": 0 00:36:51.618 }, 00:36:51.618 "claimed": false, 00:36:51.618 "zoned": false, 00:36:51.618 "supported_io_types": { 00:36:51.618 "read": true, 00:36:51.618 "write": true, 00:36:51.618 "unmap": false, 00:36:51.618 "write_zeroes": true, 00:36:51.618 "flush": false, 00:36:51.618 "reset": true, 00:36:51.618 "compare": false, 00:36:51.618 "compare_and_write": false, 00:36:51.618 "abort": false, 00:36:51.618 "nvme_admin": false, 00:36:51.618 "nvme_io": false 00:36:51.618 }, 00:36:51.618 "memory_domains": [ 00:36:51.618 { 00:36:51.618 "dma_device_id": "system", 00:36:51.618 "dma_device_type": 1 00:36:51.618 }, 00:36:51.618 { 00:36:51.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:51.618 "dma_device_type": 2 00:36:51.618 }, 00:36:51.618 { 00:36:51.618 "dma_device_id": "system", 00:36:51.618 "dma_device_type": 1 00:36:51.618 }, 00:36:51.618 { 00:36:51.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:51.618 "dma_device_type": 2 00:36:51.618 } 00:36:51.618 ], 00:36:51.618 "driver_specific": { 00:36:51.618 "raid": { 00:36:51.618 "uuid": "1459d7d9-35d9-4f4a-afe0-caf60b500a31", 00:36:51.618 "strip_size_kb": 0, 00:36:51.618 "state": "online", 00:36:51.618 "raid_level": "raid1", 00:36:51.618 "superblock": true, 00:36:51.618 "num_base_bdevs": 2, 00:36:51.618 "num_base_bdevs_discovered": 2, 00:36:51.618 "num_base_bdevs_operational": 2, 00:36:51.618 "base_bdevs_list": [ 00:36:51.618 { 00:36:51.618 "name": "BaseBdev1", 00:36:51.618 "uuid": "5a56c195-dc4e-43f9-a5ff-303079f48372", 00:36:51.618 "is_configured": true, 00:36:51.618 "data_offset": 256, 00:36:51.618 "data_size": 7936 00:36:51.618 }, 00:36:51.618 { 00:36:51.618 "name": "BaseBdev2", 00:36:51.618 "uuid": "44bac832-9001-4c9e-ba25-1d48f8a77a44", 00:36:51.618 "is_configured": true, 00:36:51.618 "data_offset": 256, 00:36:51.618 "data_size": 7936 00:36:51.618 } 00:36:51.618 ] 00:36:51.618 } 00:36:51.618 } 00:36:51.618 }' 00:36:51.618 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:51.618 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:36:51.618 BaseBdev2' 00:36:51.618 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:51.618 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:36:51.618 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:51.875 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:51.875 "name": "BaseBdev1", 00:36:51.875 "aliases": [ 00:36:51.875 "5a56c195-dc4e-43f9-a5ff-303079f48372" 00:36:51.875 ], 00:36:51.875 "product_name": "Malloc disk", 00:36:51.875 "block_size": 4128, 00:36:51.875 "num_blocks": 8192, 00:36:51.875 "uuid": "5a56c195-dc4e-43f9-a5ff-303079f48372", 00:36:51.875 "md_size": 32, 00:36:51.875 "md_interleave": true, 00:36:51.875 "dif_type": 0, 00:36:51.875 
"assigned_rate_limits": { 00:36:51.875 "rw_ios_per_sec": 0, 00:36:51.875 "rw_mbytes_per_sec": 0, 00:36:51.875 "r_mbytes_per_sec": 0, 00:36:51.875 "w_mbytes_per_sec": 0 00:36:51.875 }, 00:36:51.875 "claimed": true, 00:36:51.875 "claim_type": "exclusive_write", 00:36:51.875 "zoned": false, 00:36:51.876 "supported_io_types": { 00:36:51.876 "read": true, 00:36:51.876 "write": true, 00:36:51.876 "unmap": true, 00:36:51.876 "write_zeroes": true, 00:36:51.876 "flush": true, 00:36:51.876 "reset": true, 00:36:51.876 "compare": false, 00:36:51.876 "compare_and_write": false, 00:36:51.876 "abort": true, 00:36:51.876 "nvme_admin": false, 00:36:51.876 "nvme_io": false 00:36:51.876 }, 00:36:51.876 "memory_domains": [ 00:36:51.876 { 00:36:51.876 "dma_device_id": "system", 00:36:51.876 "dma_device_type": 1 00:36:51.876 }, 00:36:51.876 { 00:36:51.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:51.876 "dma_device_type": 2 00:36:51.876 } 00:36:51.876 ], 00:36:51.876 "driver_specific": {} 00:36:51.876 }' 00:36:51.876 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:51.876 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:51.876 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:51.876 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:51.876 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:51.876 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:51.876 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:52.133 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:52.133 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:52.133 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:52.133 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:52.133 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:52.133 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:52.133 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:36:52.133 12:18:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:52.391 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:52.391 "name": "BaseBdev2", 00:36:52.391 "aliases": [ 00:36:52.391 "44bac832-9001-4c9e-ba25-1d48f8a77a44" 00:36:52.391 ], 00:36:52.391 "product_name": "Malloc disk", 00:36:52.391 "block_size": 4128, 00:36:52.391 "num_blocks": 8192, 00:36:52.391 "uuid": "44bac832-9001-4c9e-ba25-1d48f8a77a44", 00:36:52.391 "md_size": 32, 00:36:52.391 "md_interleave": true, 00:36:52.391 "dif_type": 0, 00:36:52.391 "assigned_rate_limits": { 00:36:52.391 "rw_ios_per_sec": 0, 00:36:52.391 "rw_mbytes_per_sec": 0, 00:36:52.391 "r_mbytes_per_sec": 
0, 00:36:52.391 "w_mbytes_per_sec": 0 00:36:52.391 }, 00:36:52.391 "claimed": true, 00:36:52.391 "claim_type": "exclusive_write", 00:36:52.391 "zoned": false, 00:36:52.391 "supported_io_types": { 00:36:52.391 "read": true, 00:36:52.391 "write": true, 00:36:52.391 "unmap": true, 00:36:52.391 "write_zeroes": true, 00:36:52.391 "flush": true, 00:36:52.391 "reset": true, 00:36:52.391 "compare": false, 00:36:52.391 "compare_and_write": false, 00:36:52.391 "abort": true, 00:36:52.391 "nvme_admin": false, 00:36:52.391 "nvme_io": false 00:36:52.391 }, 00:36:52.391 "memory_domains": [ 00:36:52.391 { 00:36:52.391 "dma_device_id": "system", 00:36:52.391 "dma_device_type": 1 00:36:52.391 }, 00:36:52.391 { 00:36:52.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.391 "dma_device_type": 2 00:36:52.391 } 00:36:52.391 ], 00:36:52.391 "driver_specific": {} 00:36:52.391 }' 00:36:52.391 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:52.391 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:52.649 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:52.649 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:52.649 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:52.649 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:52.649 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:52.649 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:52.649 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:52.649 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:52.906 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:52.906 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:52.906 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:36:52.906 [2024-07-21 12:18:51.755508] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:53.163 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:36:53.163 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:36:53.163 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:36:53.163 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:36:53.163 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:36:53.164 
12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.164 12:18:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:53.422 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:53.422 "name": "Existed_Raid", 00:36:53.422 "uuid": "1459d7d9-35d9-4f4a-afe0-caf60b500a31", 00:36:53.422 "strip_size_kb": 0, 00:36:53.422 "state": "online", 00:36:53.422 "raid_level": "raid1", 00:36:53.422 "superblock": true, 00:36:53.422 "num_base_bdevs": 2, 00:36:53.422 "num_base_bdevs_discovered": 1, 00:36:53.422 "num_base_bdevs_operational": 1, 00:36:53.422 "base_bdevs_list": [ 00:36:53.422 { 00:36:53.422 "name": null, 00:36:53.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:53.422 "is_configured": false, 00:36:53.422 "data_offset": 256, 00:36:53.422 "data_size": 7936 00:36:53.422 }, 00:36:53.422 { 00:36:53.422 "name": "BaseBdev2", 00:36:53.422 "uuid": "44bac832-9001-4c9e-ba25-1d48f8a77a44", 00:36:53.422 "is_configured": true, 00:36:53.422 "data_offset": 256, 00:36:53.422 "data_size": 7936 00:36:53.422 } 00:36:53.422 ] 00:36:53.422 }' 00:36:53.422 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:53.422 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:53.989 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:36:53.989 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:53.989 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:53.989 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:36:54.248 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:36:54.248 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:54.248 12:18:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:36:54.506 [2024-07-21 12:18:53.130246] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:54.506 [2024-07-21 12:18:53.130498] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:54.506 [2024-07-21 12:18:53.140863] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:54.506 [2024-07-21 12:18:53.141060] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:54.506 [2024-07-21 12:18:53.141163] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006f80 name Existed_Raid, state offline 00:36:54.506 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:36:54.506 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:36:54.506 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:54.506 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 172383 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 172383 ']' 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 172383 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 172383 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 172383' 00:36:54.764 killing process with pid 172383 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 172383 00:36:54.764 [2024-07-21 12:18:53.443103] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:54.764 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 172383 00:36:54.764 [2024-07-21 12:18:53.443331] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:55.021 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:36:55.021 
00:36:55.021 real 0m10.373s 00:36:55.021 user 0m19.104s 00:36:55.021 sys 0m1.319s 00:36:55.021 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:36:55.021 12:18:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:55.021 ************************************ 00:36:55.022 END TEST raid_state_function_test_sb_md_interleaved 00:36:55.022 ************************************ 00:36:55.022 12:18:53 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:36:55.022 12:18:53 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:36:55.022 12:18:53 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:36:55.022 12:18:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:55.022 ************************************ 00:36:55.022 START TEST raid_superblock_test_md_interleaved 00:36:55.022 ************************************ 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1121 -- # raid_superblock_test raid1 2 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=172742 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 172742 /var/tmp/spdk-raid.sock 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 172742 ']' 
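Before following the superblock test that has just started, note how the state-function test above finished: because raid1 has redundancy, deleting BaseBdev1 left Existed_Raid online in degraded mode rather than taking it offline. Reduced to its essentials, that check looks like the sketch below (again not the literal test code, using the same $rpc shorthand as before).

  # Sketch: removing one mirror leg of a raid1 array with a superblock keeps the
  # array online with a single discovered base bdev.
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_malloc_delete BaseBdev1
  info=$($rpc bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
  [ "$(echo "$info" | jq -r .state)" = online ]
  [ "$(echo "$info" | jq -r .num_base_bdevs_discovered)" -eq 1 ]
  [ "$(echo "$info" | jq -r .num_base_bdevs_operational)" -eq 1 ]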
00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:36:55.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:36:55.022 12:18:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:55.022 [2024-07-21 12:18:53.777784] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:36:55.022 [2024-07-21 12:18:53.778201] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid172742 ] 00:36:55.279 [2024-07-21 12:18:53.927642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:55.279 [2024-07-21 12:18:53.992276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:36:55.279 [2024-07-21 12:18:54.043920] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:55.279 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:36:55.536 malloc1 00:36:55.536 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:55.794 [2024-07-21 12:18:54.621799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:55.794 [2024-07-21 12:18:54.622164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:55.794 [2024-07-21 
12:18:54.622253] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:36:55.794 [2024-07-21 12:18:54.622539] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:55.794 [2024-07-21 12:18:54.624693] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:55.794 [2024-07-21 12:18:54.624869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:55.794 pt1 00:36:55.794 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:55.794 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:55.794 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:36:55.794 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:36:55.794 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:55.794 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:55.794 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:36:55.794 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:55.794 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:36:56.052 malloc2 00:36:56.052 12:18:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:56.310 [2024-07-21 12:18:55.069285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:56.310 [2024-07-21 12:18:55.069523] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:56.310 [2024-07-21 12:18:55.069737] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:36:56.310 [2024-07-21 12:18:55.069921] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:56.310 [2024-07-21 12:18:55.072016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:56.310 [2024-07-21 12:18:55.072195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:56.310 pt2 00:36:56.310 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:36:56.310 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:36:56.310 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:36:56.567 [2024-07-21 12:18:55.317303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:56.567 [2024-07-21 12:18:55.319360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:56.567 [2024-07-21 12:18:55.319695] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007b80 
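The setup the xtrace is stepping through reduces to the rpc.py calls already shown in the trace; a condensed sketch (same socket, bdev names and UUIDs as above; the $RPC shorthand is introduced here only for brevity):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # two malloc base bdevs with 32-byte interleaved metadata (-m 32 -i), each wrapped in a passthru bdev
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # assemble a raid1 volume named raid_bdev1 from pt1 and pt2, with an on-disk superblock (-s)
    $RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s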
00:36:56.567 [2024-07-21 12:18:55.319822] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:36:56.567 [2024-07-21 12:18:55.320048] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:36:56.567 [2024-07-21 12:18:55.320259] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007b80 00:36:56.567 [2024-07-21 12:18:55.320373] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007b80 00:36:56.567 [2024-07-21 12:18:55.320540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:56.567 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:56.567 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:36:56.567 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:36:56.567 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:36:56.568 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:36:56.568 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:36:56.568 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:36:56.568 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:36:56.568 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:36:56.568 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:36:56.568 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:56.568 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:56.826 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:36:56.826 "name": "raid_bdev1", 00:36:56.826 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:36:56.826 "strip_size_kb": 0, 00:36:56.826 "state": "online", 00:36:56.826 "raid_level": "raid1", 00:36:56.826 "superblock": true, 00:36:56.826 "num_base_bdevs": 2, 00:36:56.826 "num_base_bdevs_discovered": 2, 00:36:56.826 "num_base_bdevs_operational": 2, 00:36:56.826 "base_bdevs_list": [ 00:36:56.826 { 00:36:56.826 "name": "pt1", 00:36:56.826 "uuid": "64a767ab-552f-5611-ad99-228640826996", 00:36:56.826 "is_configured": true, 00:36:56.826 "data_offset": 256, 00:36:56.826 "data_size": 7936 00:36:56.826 }, 00:36:56.826 { 00:36:56.826 "name": "pt2", 00:36:56.826 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:36:56.826 "is_configured": true, 00:36:56.826 "data_offset": 256, 00:36:56.826 "data_size": 7936 00:36:56.826 } 00:36:56.826 ] 00:36:56.826 }' 00:36:56.826 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:36:56.826 12:18:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:36:57.392 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:36:57.392 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:36:57.392 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:36:57.392 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:36:57.392 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:36:57.392 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:36:57.393 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:57.393 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:36:57.651 [2024-07-21 12:18:56.309604] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:57.651 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:36:57.651 "name": "raid_bdev1", 00:36:57.651 "aliases": [ 00:36:57.651 "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7" 00:36:57.651 ], 00:36:57.651 "product_name": "Raid Volume", 00:36:57.651 "block_size": 4128, 00:36:57.651 "num_blocks": 7936, 00:36:57.651 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:36:57.651 "md_size": 32, 00:36:57.651 "md_interleave": true, 00:36:57.651 "dif_type": 0, 00:36:57.651 "assigned_rate_limits": { 00:36:57.651 "rw_ios_per_sec": 0, 00:36:57.651 "rw_mbytes_per_sec": 0, 00:36:57.651 "r_mbytes_per_sec": 0, 00:36:57.651 "w_mbytes_per_sec": 0 00:36:57.651 }, 00:36:57.651 "claimed": false, 00:36:57.651 "zoned": false, 00:36:57.651 "supported_io_types": { 00:36:57.651 "read": true, 00:36:57.651 "write": true, 00:36:57.651 "unmap": false, 00:36:57.651 "write_zeroes": true, 00:36:57.651 "flush": false, 00:36:57.651 "reset": true, 00:36:57.651 "compare": false, 00:36:57.651 "compare_and_write": false, 00:36:57.651 "abort": false, 00:36:57.651 "nvme_admin": false, 00:36:57.651 "nvme_io": false 00:36:57.651 }, 00:36:57.651 "memory_domains": [ 00:36:57.651 { 00:36:57.651 "dma_device_id": "system", 00:36:57.651 "dma_device_type": 1 00:36:57.651 }, 00:36:57.651 { 00:36:57.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.651 "dma_device_type": 2 00:36:57.651 }, 00:36:57.651 { 00:36:57.651 "dma_device_id": "system", 00:36:57.651 "dma_device_type": 1 00:36:57.651 }, 00:36:57.651 { 00:36:57.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.651 "dma_device_type": 2 00:36:57.651 } 00:36:57.651 ], 00:36:57.651 "driver_specific": { 00:36:57.651 "raid": { 00:36:57.651 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:36:57.651 "strip_size_kb": 0, 00:36:57.651 "state": "online", 00:36:57.651 "raid_level": "raid1", 00:36:57.651 "superblock": true, 00:36:57.651 "num_base_bdevs": 2, 00:36:57.651 "num_base_bdevs_discovered": 2, 00:36:57.651 "num_base_bdevs_operational": 2, 00:36:57.651 "base_bdevs_list": [ 00:36:57.651 { 00:36:57.651 "name": "pt1", 00:36:57.651 "uuid": "64a767ab-552f-5611-ad99-228640826996", 00:36:57.651 "is_configured": true, 00:36:57.651 "data_offset": 256, 00:36:57.651 "data_size": 7936 00:36:57.651 }, 00:36:57.651 { 00:36:57.651 "name": "pt2", 00:36:57.651 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:36:57.651 "is_configured": true, 00:36:57.651 "data_offset": 256, 00:36:57.651 "data_size": 7936 
00:36:57.651 } 00:36:57.651 ] 00:36:57.651 } 00:36:57.651 } 00:36:57.651 }' 00:36:57.651 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:57.651 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:36:57.651 pt2' 00:36:57.651 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:57.651 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:36:57.651 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:57.909 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:57.909 "name": "pt1", 00:36:57.909 "aliases": [ 00:36:57.909 "64a767ab-552f-5611-ad99-228640826996" 00:36:57.909 ], 00:36:57.909 "product_name": "passthru", 00:36:57.909 "block_size": 4128, 00:36:57.909 "num_blocks": 8192, 00:36:57.909 "uuid": "64a767ab-552f-5611-ad99-228640826996", 00:36:57.909 "md_size": 32, 00:36:57.909 "md_interleave": true, 00:36:57.909 "dif_type": 0, 00:36:57.910 "assigned_rate_limits": { 00:36:57.910 "rw_ios_per_sec": 0, 00:36:57.910 "rw_mbytes_per_sec": 0, 00:36:57.910 "r_mbytes_per_sec": 0, 00:36:57.910 "w_mbytes_per_sec": 0 00:36:57.910 }, 00:36:57.910 "claimed": true, 00:36:57.910 "claim_type": "exclusive_write", 00:36:57.910 "zoned": false, 00:36:57.910 "supported_io_types": { 00:36:57.910 "read": true, 00:36:57.910 "write": true, 00:36:57.910 "unmap": true, 00:36:57.910 "write_zeroes": true, 00:36:57.910 "flush": true, 00:36:57.910 "reset": true, 00:36:57.910 "compare": false, 00:36:57.910 "compare_and_write": false, 00:36:57.910 "abort": true, 00:36:57.910 "nvme_admin": false, 00:36:57.910 "nvme_io": false 00:36:57.910 }, 00:36:57.910 "memory_domains": [ 00:36:57.910 { 00:36:57.910 "dma_device_id": "system", 00:36:57.910 "dma_device_type": 1 00:36:57.910 }, 00:36:57.910 { 00:36:57.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:57.910 "dma_device_type": 2 00:36:57.910 } 00:36:57.910 ], 00:36:57.910 "driver_specific": { 00:36:57.910 "passthru": { 00:36:57.910 "name": "pt1", 00:36:57.910 "base_bdev_name": "malloc1" 00:36:57.910 } 00:36:57.910 } 00:36:57.910 }' 00:36:57.910 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:57.910 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:57.910 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:57.910 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:57.910 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:57.910 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:57.910 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:58.168 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:58.168 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:58.168 12:18:56 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:58.168 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:58.168 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:58.168 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:36:58.168 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:36:58.168 12:18:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:36:58.426 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:36:58.426 "name": "pt2", 00:36:58.426 "aliases": [ 00:36:58.426 "f8b2fcb8-3a7c-5720-9a27-919c540672f9" 00:36:58.426 ], 00:36:58.426 "product_name": "passthru", 00:36:58.426 "block_size": 4128, 00:36:58.426 "num_blocks": 8192, 00:36:58.426 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:36:58.426 "md_size": 32, 00:36:58.426 "md_interleave": true, 00:36:58.426 "dif_type": 0, 00:36:58.426 "assigned_rate_limits": { 00:36:58.426 "rw_ios_per_sec": 0, 00:36:58.426 "rw_mbytes_per_sec": 0, 00:36:58.426 "r_mbytes_per_sec": 0, 00:36:58.426 "w_mbytes_per_sec": 0 00:36:58.426 }, 00:36:58.426 "claimed": true, 00:36:58.426 "claim_type": "exclusive_write", 00:36:58.426 "zoned": false, 00:36:58.426 "supported_io_types": { 00:36:58.426 "read": true, 00:36:58.426 "write": true, 00:36:58.426 "unmap": true, 00:36:58.426 "write_zeroes": true, 00:36:58.426 "flush": true, 00:36:58.426 "reset": true, 00:36:58.426 "compare": false, 00:36:58.426 "compare_and_write": false, 00:36:58.426 "abort": true, 00:36:58.426 "nvme_admin": false, 00:36:58.426 "nvme_io": false 00:36:58.426 }, 00:36:58.426 "memory_domains": [ 00:36:58.426 { 00:36:58.426 "dma_device_id": "system", 00:36:58.426 "dma_device_type": 1 00:36:58.426 }, 00:36:58.426 { 00:36:58.426 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:58.426 "dma_device_type": 2 00:36:58.426 } 00:36:58.426 ], 00:36:58.426 "driver_specific": { 00:36:58.426 "passthru": { 00:36:58.426 "name": "pt2", 00:36:58.426 "base_bdev_name": "malloc2" 00:36:58.426 } 00:36:58.426 } 00:36:58.426 }' 00:36:58.426 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:58.426 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:36:58.684 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:36:58.943 [2024-07-21 12:18:57.729863] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:58.943 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7 00:36:58.943 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7 ']' 00:36:58.943 12:18:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:36:59.200 [2024-07-21 12:18:58.009749] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:59.200 [2024-07-21 12:18:58.009923] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:59.200 [2024-07-21 12:18:58.010132] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:59.200 [2024-07-21 12:18:58.010313] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:59.200 [2024-07-21 12:18:58.010420] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007b80 name raid_bdev1, state offline 00:36:59.200 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:36:59.200 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:36:59.458 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:36:59.458 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:36:59.458 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:59.458 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:36:59.716 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:36:59.716 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:36:59.973 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:36:59.973 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:00.231 12:18:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:37:00.490 [2024-07-21 12:18:59.184408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:00.490 [2024-07-21 12:18:59.186556] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:00.490 [2024-07-21 12:18:59.186828] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:37:00.490 [2024-07-21 12:18:59.187047] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:00.490 [2024-07-21 12:18:59.187189] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:00.490 [2024-07-21 12:18:59.187306] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008180 name raid_bdev1, state configuring 00:37:00.490 request: 00:37:00.490 { 00:37:00.490 "name": "raid_bdev1", 00:37:00.490 "raid_level": "raid1", 00:37:00.490 "base_bdevs": [ 00:37:00.490 "malloc1", 00:37:00.490 "malloc2" 00:37:00.490 ], 00:37:00.490 "superblock": false, 00:37:00.490 "method": "bdev_raid_create", 00:37:00.490 "req_id": 1 00:37:00.490 } 00:37:00.490 Got JSON-RPC error response 00:37:00.490 response: 00:37:00.490 { 00:37:00.490 "code": -17, 00:37:00.490 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:00.490 } 00:37:00.490 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:37:00.490 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:00.490 12:18:59 bdev_raid.raid_superblock_test_md_interleaved 
-- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:00.490 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:00.490 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:37:00.490 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:00.748 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:37:00.748 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:37:00.748 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:01.006 [2024-07-21 12:18:59.720420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:01.006 [2024-07-21 12:18:59.720704] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:01.006 [2024-07-21 12:18:59.720872] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:37:01.006 [2024-07-21 12:18:59.721001] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:01.006 [2024-07-21 12:18:59.723275] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:01.006 [2024-07-21 12:18:59.723446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:01.006 [2024-07-21 12:18:59.723638] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:01.006 [2024-07-21 12:18:59.723745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:01.006 pt1 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:01.006 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:01.265 12:18:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:01.265 "name": "raid_bdev1", 00:37:01.265 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:37:01.265 "strip_size_kb": 0, 00:37:01.265 "state": "configuring", 00:37:01.265 "raid_level": "raid1", 00:37:01.265 "superblock": true, 00:37:01.265 "num_base_bdevs": 2, 00:37:01.265 "num_base_bdevs_discovered": 1, 00:37:01.265 "num_base_bdevs_operational": 2, 00:37:01.265 "base_bdevs_list": [ 00:37:01.265 { 00:37:01.265 "name": "pt1", 00:37:01.265 "uuid": "64a767ab-552f-5611-ad99-228640826996", 00:37:01.265 "is_configured": true, 00:37:01.265 "data_offset": 256, 00:37:01.265 "data_size": 7936 00:37:01.265 }, 00:37:01.265 { 00:37:01.265 "name": null, 00:37:01.265 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:37:01.265 "is_configured": false, 00:37:01.265 "data_offset": 256, 00:37:01.265 "data_size": 7936 00:37:01.265 } 00:37:01.265 ] 00:37:01.265 }' 00:37:01.265 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:01.265 12:18:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:01.832 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:37:01.832 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:37:01.832 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:01.832 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:02.090 [2024-07-21 12:19:00.770662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:02.090 [2024-07-21 12:19:00.771826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:02.090 [2024-07-21 12:19:00.772177] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:37:02.090 [2024-07-21 12:19:00.772481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:02.090 [2024-07-21 12:19:00.773080] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:02.090 [2024-07-21 12:19:00.773427] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:02.090 [2024-07-21 12:19:00.773861] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:02.090 [2024-07-21 12:19:00.774134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:02.090 [2024-07-21 12:19:00.774666] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:37:02.090 [2024-07-21 12:19:00.774917] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:02.090 [2024-07-21 12:19:00.775312] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:37:02.090 [2024-07-21 12:19:00.775706] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:37:02.090 [2024-07-21 12:19:00.775944] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:37:02.090 [2024-07-21 12:19:00.776355] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:02.090 pt2 00:37:02.090 
12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:37:02.090 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:02.091 12:19:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.349 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:02.349 "name": "raid_bdev1", 00:37:02.349 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:37:02.349 "strip_size_kb": 0, 00:37:02.349 "state": "online", 00:37:02.349 "raid_level": "raid1", 00:37:02.349 "superblock": true, 00:37:02.349 "num_base_bdevs": 2, 00:37:02.349 "num_base_bdevs_discovered": 2, 00:37:02.349 "num_base_bdevs_operational": 2, 00:37:02.349 "base_bdevs_list": [ 00:37:02.349 { 00:37:02.349 "name": "pt1", 00:37:02.349 "uuid": "64a767ab-552f-5611-ad99-228640826996", 00:37:02.349 "is_configured": true, 00:37:02.349 "data_offset": 256, 00:37:02.349 "data_size": 7936 00:37:02.349 }, 00:37:02.349 { 00:37:02.349 "name": "pt2", 00:37:02.349 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:37:02.349 "is_configured": true, 00:37:02.349 "data_offset": 256, 00:37:02.349 "data_size": 7936 00:37:02.349 } 00:37:02.349 ] 00:37:02.349 }' 00:37:02.349 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:02.349 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:02.915 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:37:02.915 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:37:02.915 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:37:02.915 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:37:02.915 12:19:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:37:02.915 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:37:02.915 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:02.915 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:37:03.173 [2024-07-21 12:19:01.911189] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:03.173 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:37:03.173 "name": "raid_bdev1", 00:37:03.173 "aliases": [ 00:37:03.173 "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7" 00:37:03.173 ], 00:37:03.173 "product_name": "Raid Volume", 00:37:03.173 "block_size": 4128, 00:37:03.173 "num_blocks": 7936, 00:37:03.173 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:37:03.173 "md_size": 32, 00:37:03.173 "md_interleave": true, 00:37:03.173 "dif_type": 0, 00:37:03.173 "assigned_rate_limits": { 00:37:03.173 "rw_ios_per_sec": 0, 00:37:03.173 "rw_mbytes_per_sec": 0, 00:37:03.173 "r_mbytes_per_sec": 0, 00:37:03.173 "w_mbytes_per_sec": 0 00:37:03.173 }, 00:37:03.173 "claimed": false, 00:37:03.173 "zoned": false, 00:37:03.173 "supported_io_types": { 00:37:03.173 "read": true, 00:37:03.173 "write": true, 00:37:03.173 "unmap": false, 00:37:03.173 "write_zeroes": true, 00:37:03.173 "flush": false, 00:37:03.173 "reset": true, 00:37:03.173 "compare": false, 00:37:03.173 "compare_and_write": false, 00:37:03.173 "abort": false, 00:37:03.173 "nvme_admin": false, 00:37:03.173 "nvme_io": false 00:37:03.173 }, 00:37:03.173 "memory_domains": [ 00:37:03.173 { 00:37:03.173 "dma_device_id": "system", 00:37:03.173 "dma_device_type": 1 00:37:03.173 }, 00:37:03.173 { 00:37:03.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:03.173 "dma_device_type": 2 00:37:03.173 }, 00:37:03.173 { 00:37:03.173 "dma_device_id": "system", 00:37:03.173 "dma_device_type": 1 00:37:03.173 }, 00:37:03.173 { 00:37:03.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:03.173 "dma_device_type": 2 00:37:03.173 } 00:37:03.173 ], 00:37:03.173 "driver_specific": { 00:37:03.173 "raid": { 00:37:03.173 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:37:03.173 "strip_size_kb": 0, 00:37:03.173 "state": "online", 00:37:03.173 "raid_level": "raid1", 00:37:03.173 "superblock": true, 00:37:03.173 "num_base_bdevs": 2, 00:37:03.173 "num_base_bdevs_discovered": 2, 00:37:03.173 "num_base_bdevs_operational": 2, 00:37:03.173 "base_bdevs_list": [ 00:37:03.173 { 00:37:03.173 "name": "pt1", 00:37:03.173 "uuid": "64a767ab-552f-5611-ad99-228640826996", 00:37:03.173 "is_configured": true, 00:37:03.173 "data_offset": 256, 00:37:03.173 "data_size": 7936 00:37:03.173 }, 00:37:03.173 { 00:37:03.173 "name": "pt2", 00:37:03.173 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:37:03.173 "is_configured": true, 00:37:03.173 "data_offset": 256, 00:37:03.173 "data_size": 7936 00:37:03.173 } 00:37:03.173 ] 00:37:03.173 } 00:37:03.173 } 00:37:03.173 }' 00:37:03.173 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:03.173 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:37:03.173 pt2' 00:37:03.173 12:19:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:03.173 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:37:03.173 12:19:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:03.431 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:03.431 "name": "pt1", 00:37:03.431 "aliases": [ 00:37:03.431 "64a767ab-552f-5611-ad99-228640826996" 00:37:03.431 ], 00:37:03.431 "product_name": "passthru", 00:37:03.431 "block_size": 4128, 00:37:03.431 "num_blocks": 8192, 00:37:03.431 "uuid": "64a767ab-552f-5611-ad99-228640826996", 00:37:03.431 "md_size": 32, 00:37:03.431 "md_interleave": true, 00:37:03.431 "dif_type": 0, 00:37:03.431 "assigned_rate_limits": { 00:37:03.431 "rw_ios_per_sec": 0, 00:37:03.431 "rw_mbytes_per_sec": 0, 00:37:03.431 "r_mbytes_per_sec": 0, 00:37:03.432 "w_mbytes_per_sec": 0 00:37:03.432 }, 00:37:03.432 "claimed": true, 00:37:03.432 "claim_type": "exclusive_write", 00:37:03.432 "zoned": false, 00:37:03.432 "supported_io_types": { 00:37:03.432 "read": true, 00:37:03.432 "write": true, 00:37:03.432 "unmap": true, 00:37:03.432 "write_zeroes": true, 00:37:03.432 "flush": true, 00:37:03.432 "reset": true, 00:37:03.432 "compare": false, 00:37:03.432 "compare_and_write": false, 00:37:03.432 "abort": true, 00:37:03.432 "nvme_admin": false, 00:37:03.432 "nvme_io": false 00:37:03.432 }, 00:37:03.432 "memory_domains": [ 00:37:03.432 { 00:37:03.432 "dma_device_id": "system", 00:37:03.432 "dma_device_type": 1 00:37:03.432 }, 00:37:03.432 { 00:37:03.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:03.432 "dma_device_type": 2 00:37:03.432 } 00:37:03.432 ], 00:37:03.432 "driver_specific": { 00:37:03.432 "passthru": { 00:37:03.432 "name": "pt1", 00:37:03.432 "base_bdev_name": "malloc1" 00:37:03.432 } 00:37:03.432 } 00:37:03.432 }' 00:37:03.432 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:03.432 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:03.432 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:03.432 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:03.690 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:03.690 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:03.690 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:03.690 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:03.690 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:03.690 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:03.690 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:03.949 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:03.949 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:37:03.949 12:19:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:37:03.949 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:37:04.208 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:37:04.208 "name": "pt2", 00:37:04.208 "aliases": [ 00:37:04.208 "f8b2fcb8-3a7c-5720-9a27-919c540672f9" 00:37:04.208 ], 00:37:04.208 "product_name": "passthru", 00:37:04.208 "block_size": 4128, 00:37:04.208 "num_blocks": 8192, 00:37:04.208 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:37:04.208 "md_size": 32, 00:37:04.208 "md_interleave": true, 00:37:04.208 "dif_type": 0, 00:37:04.208 "assigned_rate_limits": { 00:37:04.208 "rw_ios_per_sec": 0, 00:37:04.208 "rw_mbytes_per_sec": 0, 00:37:04.208 "r_mbytes_per_sec": 0, 00:37:04.208 "w_mbytes_per_sec": 0 00:37:04.208 }, 00:37:04.208 "claimed": true, 00:37:04.208 "claim_type": "exclusive_write", 00:37:04.208 "zoned": false, 00:37:04.208 "supported_io_types": { 00:37:04.208 "read": true, 00:37:04.208 "write": true, 00:37:04.208 "unmap": true, 00:37:04.208 "write_zeroes": true, 00:37:04.208 "flush": true, 00:37:04.208 "reset": true, 00:37:04.208 "compare": false, 00:37:04.208 "compare_and_write": false, 00:37:04.208 "abort": true, 00:37:04.208 "nvme_admin": false, 00:37:04.208 "nvme_io": false 00:37:04.208 }, 00:37:04.208 "memory_domains": [ 00:37:04.208 { 00:37:04.208 "dma_device_id": "system", 00:37:04.208 "dma_device_type": 1 00:37:04.208 }, 00:37:04.208 { 00:37:04.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:04.208 "dma_device_type": 2 00:37:04.208 } 00:37:04.208 ], 00:37:04.208 "driver_specific": { 00:37:04.208 "passthru": { 00:37:04.208 "name": "pt2", 00:37:04.208 "base_bdev_name": "malloc2" 00:37:04.208 } 00:37:04.208 } 00:37:04.208 }' 00:37:04.208 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:04.208 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:37:04.208 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:37:04.208 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.208 12:19:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:37:04.208 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:37:04.208 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:04.467 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:37:04.467 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:37:04.467 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:04.467 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:37:04.467 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:37:04.467 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:04.467 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:37:04.725 [2024-07-21 12:19:03.491458] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:04.725 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7 '!=' b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7 ']' 00:37:04.725 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:37:04.725 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:37:04.725 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:37:04.725 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:37:04.983 [2024-07-21 12:19:03.755383] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:04.983 12:19:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.242 12:19:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:05.242 "name": "raid_bdev1", 00:37:05.242 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:37:05.242 "strip_size_kb": 0, 00:37:05.242 "state": "online", 00:37:05.242 "raid_level": "raid1", 00:37:05.242 "superblock": true, 00:37:05.242 "num_base_bdevs": 2, 00:37:05.242 "num_base_bdevs_discovered": 1, 00:37:05.242 "num_base_bdevs_operational": 1, 00:37:05.242 "base_bdevs_list": [ 00:37:05.242 { 00:37:05.242 "name": null, 00:37:05.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:05.242 "is_configured": false, 00:37:05.242 "data_offset": 256, 00:37:05.242 "data_size": 7936 00:37:05.242 }, 00:37:05.242 { 00:37:05.242 "name": "pt2", 00:37:05.242 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:37:05.242 "is_configured": true, 00:37:05.242 "data_offset": 256, 00:37:05.242 "data_size": 7936 00:37:05.242 } 00:37:05.242 ] 00:37:05.242 }' 
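The verification that produced the JSON above boils down to one RPC plus a jq filter; a sketch using the $RPC shorthand from the earlier setup sketch, annotated with the values the test expects once pt1 has been deleted:

    # pull the current description of raid_bdev1 and check the degraded-but-online state
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    # expected after removing pt1: "state": "online", "raid_level": "raid1",
    # "num_base_bdevs_discovered": 1, "num_base_bdevs_operational": 1, slot 0 "name": null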
00:37:05.242 12:19:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:05.242 12:19:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:06.176 12:19:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:06.176 [2024-07-21 12:19:04.939542] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:06.176 [2024-07-21 12:19:04.939693] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:06.176 [2024-07-21 12:19:04.939866] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:06.176 [2024-07-21 12:19:04.940024] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:06.176 [2024-07-21 12:19:04.940128] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:37:06.176 12:19:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.176 12:19:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:37:06.434 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:37:06.434 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:37:06.434 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:37:06.434 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:06.434 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:06.692 [2024-07-21 12:19:05.522737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:06.692 [2024-07-21 12:19:05.523197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:06.692 [2024-07-21 12:19:05.523486] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:06.692 [2024-07-21 12:19:05.523734] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:06.692 [2024-07-21 12:19:05.525704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:06.692 [2024-07-21 12:19:05.525970] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:37:06.692 [2024-07-21 12:19:05.526247] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:06.692 [2024-07-21 12:19:05.526399] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:06.692 [2024-07-21 12:19:05.526587] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009980 00:37:06.692 [2024-07-21 12:19:05.526712] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:06.692 [2024-07-21 12:19:05.526820] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:37:06.692 [2024-07-21 12:19:05.527053] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009980 00:37:06.692 [2024-07-21 12:19:05.527176] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009980 00:37:06.692 [2024-07-21 12:19:05.527363] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:06.692 pt2 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:06.692 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:06.950 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:06.950 "name": "raid_bdev1", 00:37:06.950 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:37:06.950 "strip_size_kb": 0, 00:37:06.950 "state": "online", 00:37:06.950 "raid_level": "raid1", 00:37:06.950 "superblock": true, 00:37:06.950 "num_base_bdevs": 2, 00:37:06.950 "num_base_bdevs_discovered": 1, 00:37:06.950 "num_base_bdevs_operational": 1, 00:37:06.950 "base_bdevs_list": [ 00:37:06.950 { 00:37:06.950 "name": null, 00:37:06.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:06.950 "is_configured": false, 00:37:06.950 "data_offset": 256, 00:37:06.950 "data_size": 7936 00:37:06.950 }, 00:37:06.950 { 00:37:06.950 "name": "pt2", 00:37:06.950 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:37:06.950 "is_configured": true, 00:37:06.950 "data_offset": 
256, 00:37:06.950 "data_size": 7936 00:37:06.950 } 00:37:06.950 ] 00:37:06.950 }' 00:37:06.950 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:06.950 12:19:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:07.883 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:07.883 [2024-07-21 12:19:06.571522] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:07.883 [2024-07-21 12:19:06.571670] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:07.883 [2024-07-21 12:19:06.571828] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:07.883 [2024-07-21 12:19:06.571974] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:07.883 [2024-07-21 12:19:06.572077] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009980 name raid_bdev1, state offline 00:37:07.883 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:07.883 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:08.141 [2024-07-21 12:19:06.947598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:08.141 [2024-07-21 12:19:06.948142] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:08.141 [2024-07-21 12:19:06.948413] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:37:08.141 [2024-07-21 12:19:06.948660] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:08.141 [2024-07-21 12:19:06.950628] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:08.141 [2024-07-21 12:19:06.950894] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:08.141 [2024-07-21 12:19:06.951168] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:08.141 [2024-07-21 12:19:06.951309] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:08.141 [2024-07-21 12:19:06.951559] bdev_raid.c:3547:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:08.141 [2024-07-21 12:19:06.951669] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:08.141 [2024-07-21 12:19:06.951725] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a280 name raid_bdev1, state configuring 00:37:08.141 [2024-07-21 12:19:06.951974] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:08.141 [2024-07-21 12:19:06.952195] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a580 00:37:08.141 [2024-07-21 12:19:06.952289] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:08.141 [2024-07-21 12:19:06.952509] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:08.141 [2024-07-21 12:19:06.952666] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a580 00:37:08.141 [2024-07-21 12:19:06.952758] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a580 00:37:08.141 pt1 00:37:08.141 [2024-07-21 12:19:06.952896] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:08.141 12:19:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:08.444 12:19:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:08.444 "name": "raid_bdev1", 00:37:08.444 "uuid": "b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7", 00:37:08.444 "strip_size_kb": 0, 00:37:08.444 "state": "online", 00:37:08.444 "raid_level": "raid1", 00:37:08.444 "superblock": true, 00:37:08.444 "num_base_bdevs": 2, 00:37:08.444 "num_base_bdevs_discovered": 1, 00:37:08.444 "num_base_bdevs_operational": 1, 00:37:08.444 "base_bdevs_list": [ 00:37:08.444 { 00:37:08.444 "name": null, 00:37:08.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.444 "is_configured": false, 00:37:08.444 "data_offset": 256, 00:37:08.444 "data_size": 7936 00:37:08.444 }, 00:37:08.444 { 00:37:08.444 "name": "pt2", 00:37:08.444 "uuid": "f8b2fcb8-3a7c-5720-9a27-919c540672f9", 00:37:08.444 "is_configured": true, 00:37:08.444 "data_offset": 256, 00:37:08.444 "data_size": 7936 00:37:08.444 } 00:37:08.444 ] 00:37:08.444 }' 
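For readability, a condensed sketch of what the verify_raid_bdev_state check above boils down to: pull the raid_bdev1 entry out of the bdev_raid_get_bdevs dump and compare a few fields with jq. The socket path and the jq select filter are taken verbatim from the trace; the individual field comparisons are an illustrative reading of the expected values (online raid1 0 1), not a copy of the helper's real implementation in bdev_raid.sh.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Pull the entry for raid_bdev1 out of the full bdev_raid_get_bdevs dump.
    info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    # Compare the fields the test asserts on: state, level, and base bdev counts.
    [[ $(jq -r '.state'                      <<<"$info") == online ]]
    [[ $(jq -r '.raid_level'                 <<<"$info") == raid1  ]]
    [[ $(jq -r '.num_base_bdevs_discovered'  <<<"$info") == 1      ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") == 1      ]]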
00:37:08.444 12:19:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:08.444 12:19:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:09.031 12:19:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:37:09.031 12:19:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:09.289 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:37:09.289 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:37:09.289 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:09.549 [2024-07-21 12:19:08.287117] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7 '!=' b1c8d2e3-5ea8-49b9-8a11-c2473482a1e7 ']' 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 172742 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 172742 ']' 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 172742 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 172742 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 172742' 00:37:09.549 killing process with pid 172742 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@965 -- # kill 172742 00:37:09.549 [2024-07-21 12:19:08.329104] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:09.549 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # wait 172742 00:37:09.549 [2024-07-21 12:19:08.329287] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:09.549 [2024-07-21 12:19:08.329350] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:09.549 [2024-07-21 12:19:08.329363] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state offline 00:37:09.549 [2024-07-21 12:19:08.355048] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:09.809 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:37:09.809 00:37:09.809 real 0m14.917s 00:37:09.809 user 0m28.500s 00:37:09.809 
sys 0m1.710s 00:37:09.809 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:09.809 12:19:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:09.809 ************************************ 00:37:09.809 END TEST raid_superblock_test_md_interleaved 00:37:09.809 ************************************ 00:37:10.068 12:19:08 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:37:10.068 12:19:08 bdev_raid -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:37:10.068 12:19:08 bdev_raid -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:10.068 12:19:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:10.068 ************************************ 00:37:10.068 START TEST raid_rebuild_test_sb_md_interleaved 00:37:10.068 ************************************ 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1121 -- # raid_rebuild_test raid1 2 true false false 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev1 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # echo BaseBdev2 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=173249 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 173249 /var/tmp/spdk-raid.sock 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@827 -- # '[' -z 173249 ']' 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:37:10.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:10.068 12:19:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:10.068 [2024-07-21 12:19:08.771610] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:10.068 [2024-07-21 12:19:08.772224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid173249 ] 00:37:10.068 I/O size of 3145728 is greater than zero copy threshold (65536). 00:37:10.068 Zero copy mechanism will not be used. 
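Everything that follows is driven against this bdevperf instance over its RPC socket. A minimal sketch of the launch pattern, with the binary path, flags, and socket copied verbatim from the trace above; the rootdir shorthand is ours, and waitforlisten is the autotest_common.sh helper already visible in the trace.

    rootdir=/home/vagrant/spdk_repo/spdk
    # Start bdevperf in the background with the flags shown in the trace and
    # keep its pid so the test can wait on and later kill the process.
    "$rootdir"/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Block until the UNIX domain socket answers before issuing any rpc.py calls.
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock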
00:37:10.327 [2024-07-21 12:19:08.942772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:10.327 [2024-07-21 12:19:09.011318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:10.327 [2024-07-21 12:19:09.077476] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:10.894 12:19:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:10.894 12:19:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # return 0 00:37:10.894 12:19:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:10.894 12:19:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:37:11.153 BaseBdev1_malloc 00:37:11.153 12:19:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:11.412 [2024-07-21 12:19:10.171274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:11.412 [2024-07-21 12:19:10.171589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:11.412 [2024-07-21 12:19:10.171815] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:37:11.412 [2024-07-21 12:19:10.171996] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:11.412 [2024-07-21 12:19:10.174357] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:11.412 [2024-07-21 12:19:10.174541] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:11.412 BaseBdev1 00:37:11.412 12:19:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:37:11.412 12:19:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:37:11.670 BaseBdev2_malloc 00:37:11.670 12:19:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:11.929 [2024-07-21 12:19:10.745315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:37:11.929 [2024-07-21 12:19:10.745563] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:11.929 [2024-07-21 12:19:10.745692] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:37:11.929 [2024-07-21 12:19:10.745924] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:11.929 [2024-07-21 12:19:10.747809] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:11.929 [2024-07-21 12:19:10.747993] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:11.929 BaseBdev2 00:37:11.929 12:19:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:37:12.188 spare_malloc 
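Condensing the setup the trace walks through next: each base bdev is a malloc bdev with 32-byte interleaved metadata wrapped in a passthru bdev, the spare additionally sits behind a delay bdev with injected write latency, and the RAID1 is created with an on-disk superblock. A sketch of that RPC sequence, assuming the same socket; every command and argument appears verbatim in the trace, only the loop is our shorthand.

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for bdev in BaseBdev1 BaseBdev2; do
        # 32 MiB of 4096-byte blocks with 32 bytes of interleaved metadata (-m 32 -i).
        $rpc bdev_malloc_create 32 4096 -m 32 -i -b "${bdev}_malloc"
        $rpc bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
    done
    # The spare gets a delay bdev in its stack before the passthru wrapper.
    $rpc bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc
    $rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $rpc bdev_passthru_create -b spare_delay -p spare
    # RAID1 over the two base bdevs, with a superblock (-s) so it can be re-examined.
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1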
00:37:12.188 12:19:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:12.447 spare_delay 00:37:12.447 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:12.706 [2024-07-21 12:19:11.424062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:12.706 [2024-07-21 12:19:11.424268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:12.706 [2024-07-21 12:19:11.424348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:37:12.706 [2024-07-21 12:19:11.424607] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:12.706 [2024-07-21 12:19:11.426789] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:12.706 [2024-07-21 12:19:11.427008] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:12.706 spare 00:37:12.706 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:37:12.965 [2024-07-21 12:19:11.672166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:12.965 [2024-07-21 12:19:11.674045] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:12.965 [2024-07-21 12:19:11.674405] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008d80 00:37:12.965 [2024-07-21 12:19:11.674534] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:12.965 [2024-07-21 12:19:11.674799] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:37:12.965 [2024-07-21 12:19:11.675033] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008d80 00:37:12.965 [2024-07-21 12:19:11.675138] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008d80 00:37:12.965 [2024-07-21 12:19:11.675317] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:12.965 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:12.965 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:12.965 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:12.965 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:12.965 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:12.966 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:12.966 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:12.966 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:12.966 12:19:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:12.966 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:12.966 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:12.966 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:13.224 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:13.224 "name": "raid_bdev1", 00:37:13.224 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:13.224 "strip_size_kb": 0, 00:37:13.224 "state": "online", 00:37:13.224 "raid_level": "raid1", 00:37:13.224 "superblock": true, 00:37:13.224 "num_base_bdevs": 2, 00:37:13.224 "num_base_bdevs_discovered": 2, 00:37:13.224 "num_base_bdevs_operational": 2, 00:37:13.224 "base_bdevs_list": [ 00:37:13.224 { 00:37:13.224 "name": "BaseBdev1", 00:37:13.225 "uuid": "c8e8be24-4001-56c4-a75d-97273bf7f8b2", 00:37:13.225 "is_configured": true, 00:37:13.225 "data_offset": 256, 00:37:13.225 "data_size": 7936 00:37:13.225 }, 00:37:13.225 { 00:37:13.225 "name": "BaseBdev2", 00:37:13.225 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:13.225 "is_configured": true, 00:37:13.225 "data_offset": 256, 00:37:13.225 "data_size": 7936 00:37:13.225 } 00:37:13.225 ] 00:37:13.225 }' 00:37:13.225 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:13.225 12:19:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:13.791 12:19:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:37:13.791 12:19:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:37:14.049 [2024-07-21 12:19:12.736487] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:14.049 12:19:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:37:14.049 12:19:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:14.049 12:19:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:14.307 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:37:14.307 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:37:14.307 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:37:14.307 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:37:14.566 [2024-07-21 12:19:13.284377] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:14.566 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:14.824 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:14.824 "name": "raid_bdev1", 00:37:14.824 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:14.824 "strip_size_kb": 0, 00:37:14.824 "state": "online", 00:37:14.824 "raid_level": "raid1", 00:37:14.824 "superblock": true, 00:37:14.824 "num_base_bdevs": 2, 00:37:14.824 "num_base_bdevs_discovered": 1, 00:37:14.824 "num_base_bdevs_operational": 1, 00:37:14.824 "base_bdevs_list": [ 00:37:14.824 { 00:37:14.824 "name": null, 00:37:14.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:14.824 "is_configured": false, 00:37:14.824 "data_offset": 256, 00:37:14.824 "data_size": 7936 00:37:14.824 }, 00:37:14.824 { 00:37:14.824 "name": "BaseBdev2", 00:37:14.824 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:14.824 "is_configured": true, 00:37:14.824 "data_offset": 256, 00:37:14.824 "data_size": 7936 00:37:14.824 } 00:37:14.824 ] 00:37:14.824 }' 00:37:14.824 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:14.824 12:19:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:15.389 12:19:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:15.645 [2024-07-21 12:19:14.404568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:15.645 [2024-07-21 12:19:14.408479] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:37:15.645 [2024-07-21 12:19:14.410484] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:15.645 12:19:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:37:16.575 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:16.575 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:37:16.575 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:16.575 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:16.575 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:16.575 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:16.575 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:16.833 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:16.833 "name": "raid_bdev1", 00:37:16.833 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:16.833 "strip_size_kb": 0, 00:37:16.833 "state": "online", 00:37:16.833 "raid_level": "raid1", 00:37:16.833 "superblock": true, 00:37:16.833 "num_base_bdevs": 2, 00:37:16.833 "num_base_bdevs_discovered": 2, 00:37:16.833 "num_base_bdevs_operational": 2, 00:37:16.833 "process": { 00:37:16.833 "type": "rebuild", 00:37:16.833 "target": "spare", 00:37:16.833 "progress": { 00:37:16.833 "blocks": 3072, 00:37:16.833 "percent": 38 00:37:16.833 } 00:37:16.833 }, 00:37:16.833 "base_bdevs_list": [ 00:37:16.833 { 00:37:16.833 "name": "spare", 00:37:16.833 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:16.833 "is_configured": true, 00:37:16.833 "data_offset": 256, 00:37:16.833 "data_size": 7936 00:37:16.833 }, 00:37:16.833 { 00:37:16.833 "name": "BaseBdev2", 00:37:16.833 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:16.833 "is_configured": true, 00:37:16.833 "data_offset": 256, 00:37:16.833 "data_size": 7936 00:37:16.834 } 00:37:16.834 ] 00:37:16.834 }' 00:37:16.834 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:17.092 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:17.092 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:17.092 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:17.092 12:19:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:17.351 [2024-07-21 12:19:15.980397] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:17.351 [2024-07-21 12:19:16.020420] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:17.351 [2024-07-21 12:19:16.020636] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:17.351 [2024-07-21 12:19:16.020808] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:17.351 [2024-07-21 12:19:16.020926] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:17.351 12:19:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:17.351 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:17.609 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:17.609 "name": "raid_bdev1", 00:37:17.609 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:17.609 "strip_size_kb": 0, 00:37:17.609 "state": "online", 00:37:17.609 "raid_level": "raid1", 00:37:17.609 "superblock": true, 00:37:17.609 "num_base_bdevs": 2, 00:37:17.609 "num_base_bdevs_discovered": 1, 00:37:17.609 "num_base_bdevs_operational": 1, 00:37:17.609 "base_bdevs_list": [ 00:37:17.609 { 00:37:17.609 "name": null, 00:37:17.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:17.609 "is_configured": false, 00:37:17.609 "data_offset": 256, 00:37:17.609 "data_size": 7936 00:37:17.609 }, 00:37:17.609 { 00:37:17.609 "name": "BaseBdev2", 00:37:17.609 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:17.609 "is_configured": true, 00:37:17.609 "data_offset": 256, 00:37:17.609 "data_size": 7936 00:37:17.609 } 00:37:17.609 ] 00:37:17.609 }' 00:37:17.609 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:17.609 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.175 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:18.175 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:18.175 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:18.175 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:18.175 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:18.175 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:18.175 12:19:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:18.433 12:19:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- 
# raid_bdev_info='{ 00:37:18.433 "name": "raid_bdev1", 00:37:18.433 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:18.433 "strip_size_kb": 0, 00:37:18.433 "state": "online", 00:37:18.433 "raid_level": "raid1", 00:37:18.433 "superblock": true, 00:37:18.433 "num_base_bdevs": 2, 00:37:18.433 "num_base_bdevs_discovered": 1, 00:37:18.433 "num_base_bdevs_operational": 1, 00:37:18.433 "base_bdevs_list": [ 00:37:18.433 { 00:37:18.433 "name": null, 00:37:18.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:18.433 "is_configured": false, 00:37:18.433 "data_offset": 256, 00:37:18.433 "data_size": 7936 00:37:18.433 }, 00:37:18.433 { 00:37:18.433 "name": "BaseBdev2", 00:37:18.433 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:18.433 "is_configured": true, 00:37:18.433 "data_offset": 256, 00:37:18.433 "data_size": 7936 00:37:18.433 } 00:37:18.433 ] 00:37:18.433 }' 00:37:18.433 12:19:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:18.433 12:19:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:18.433 12:19:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:18.690 12:19:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:18.690 12:19:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:18.690 [2024-07-21 12:19:17.542153] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:18.690 [2024-07-21 12:19:17.545749] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:37:18.690 [2024-07-21 12:19:17.547747] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:18.948 12:19:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:37:19.879 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:19.880 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:19.880 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:19.880 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:19.880 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:19.880 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:19.880 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:20.138 "name": "raid_bdev1", 00:37:20.138 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:20.138 "strip_size_kb": 0, 00:37:20.138 "state": "online", 00:37:20.138 "raid_level": "raid1", 00:37:20.138 "superblock": true, 00:37:20.138 "num_base_bdevs": 2, 00:37:20.138 "num_base_bdevs_discovered": 2, 00:37:20.138 "num_base_bdevs_operational": 2, 00:37:20.138 
"process": { 00:37:20.138 "type": "rebuild", 00:37:20.138 "target": "spare", 00:37:20.138 "progress": { 00:37:20.138 "blocks": 3072, 00:37:20.138 "percent": 38 00:37:20.138 } 00:37:20.138 }, 00:37:20.138 "base_bdevs_list": [ 00:37:20.138 { 00:37:20.138 "name": "spare", 00:37:20.138 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:20.138 "is_configured": true, 00:37:20.138 "data_offset": 256, 00:37:20.138 "data_size": 7936 00:37:20.138 }, 00:37:20.138 { 00:37:20.138 "name": "BaseBdev2", 00:37:20.138 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:20.138 "is_configured": true, 00:37:20.138 "data_offset": 256, 00:37:20.138 "data_size": 7936 00:37:20.138 } 00:37:20.138 ] 00:37:20.138 }' 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:37:20.138 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=1443 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:20.138 12:19:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:20.396 12:19:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:20.396 "name": "raid_bdev1", 00:37:20.396 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:20.396 "strip_size_kb": 0, 00:37:20.396 "state": "online", 00:37:20.396 "raid_level": "raid1", 00:37:20.396 "superblock": true, 00:37:20.396 "num_base_bdevs": 2, 00:37:20.396 
"num_base_bdevs_discovered": 2, 00:37:20.396 "num_base_bdevs_operational": 2, 00:37:20.396 "process": { 00:37:20.396 "type": "rebuild", 00:37:20.396 "target": "spare", 00:37:20.396 "progress": { 00:37:20.396 "blocks": 3840, 00:37:20.396 "percent": 48 00:37:20.396 } 00:37:20.396 }, 00:37:20.396 "base_bdevs_list": [ 00:37:20.396 { 00:37:20.396 "name": "spare", 00:37:20.396 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:20.396 "is_configured": true, 00:37:20.396 "data_offset": 256, 00:37:20.396 "data_size": 7936 00:37:20.396 }, 00:37:20.396 { 00:37:20.396 "name": "BaseBdev2", 00:37:20.396 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:20.396 "is_configured": true, 00:37:20.396 "data_offset": 256, 00:37:20.396 "data_size": 7936 00:37:20.396 } 00:37:20.396 ] 00:37:20.396 }' 00:37:20.396 12:19:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:20.396 12:19:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:20.396 12:19:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:20.396 12:19:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:20.396 12:19:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:21.773 "name": "raid_bdev1", 00:37:21.773 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:21.773 "strip_size_kb": 0, 00:37:21.773 "state": "online", 00:37:21.773 "raid_level": "raid1", 00:37:21.773 "superblock": true, 00:37:21.773 "num_base_bdevs": 2, 00:37:21.773 "num_base_bdevs_discovered": 2, 00:37:21.773 "num_base_bdevs_operational": 2, 00:37:21.773 "process": { 00:37:21.773 "type": "rebuild", 00:37:21.773 "target": "spare", 00:37:21.773 "progress": { 00:37:21.773 "blocks": 7168, 00:37:21.773 "percent": 90 00:37:21.773 } 00:37:21.773 }, 00:37:21.773 "base_bdevs_list": [ 00:37:21.773 { 00:37:21.773 "name": "spare", 00:37:21.773 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:21.773 "is_configured": true, 00:37:21.773 "data_offset": 256, 00:37:21.773 "data_size": 7936 00:37:21.773 }, 00:37:21.773 { 00:37:21.773 "name": "BaseBdev2", 00:37:21.773 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 
00:37:21.773 "is_configured": true, 00:37:21.773 "data_offset": 256, 00:37:21.773 "data_size": 7936 00:37:21.773 } 00:37:21.773 ] 00:37:21.773 }' 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:21.773 12:19:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:37:22.032 [2024-07-21 12:19:20.664267] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:22.032 [2024-07-21 12:19:20.664487] bdev_raid.c:2504:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:22.032 [2024-07-21 12:19:20.664733] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:22.968 "name": "raid_bdev1", 00:37:22.968 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:22.968 "strip_size_kb": 0, 00:37:22.968 "state": "online", 00:37:22.968 "raid_level": "raid1", 00:37:22.968 "superblock": true, 00:37:22.968 "num_base_bdevs": 2, 00:37:22.968 "num_base_bdevs_discovered": 2, 00:37:22.968 "num_base_bdevs_operational": 2, 00:37:22.968 "base_bdevs_list": [ 00:37:22.968 { 00:37:22.968 "name": "spare", 00:37:22.968 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:22.968 "is_configured": true, 00:37:22.968 "data_offset": 256, 00:37:22.968 "data_size": 7936 00:37:22.968 }, 00:37:22.968 { 00:37:22.968 "name": "BaseBdev2", 00:37:22.968 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:22.968 "is_configured": true, 00:37:22.968 "data_offset": 256, 00:37:22.968 "data_size": 7936 00:37:22.968 } 00:37:22.968 ] 00:37:22.968 }' 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:22.968 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:23.227 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:37:23.227 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:37:23.227 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:23.227 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:23.227 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:23.227 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:23.227 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:23.227 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:23.227 12:19:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:23.487 "name": "raid_bdev1", 00:37:23.487 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:23.487 "strip_size_kb": 0, 00:37:23.487 "state": "online", 00:37:23.487 "raid_level": "raid1", 00:37:23.487 "superblock": true, 00:37:23.487 "num_base_bdevs": 2, 00:37:23.487 "num_base_bdevs_discovered": 2, 00:37:23.487 "num_base_bdevs_operational": 2, 00:37:23.487 "base_bdevs_list": [ 00:37:23.487 { 00:37:23.487 "name": "spare", 00:37:23.487 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:23.487 "is_configured": true, 00:37:23.487 "data_offset": 256, 00:37:23.487 "data_size": 7936 00:37:23.487 }, 00:37:23.487 { 00:37:23.487 "name": "BaseBdev2", 00:37:23.487 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:23.487 "is_configured": true, 00:37:23.487 "data_offset": 256, 00:37:23.487 "data_size": 7936 00:37:23.487 } 00:37:23.487 ] 00:37:23.487 }' 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:23.487 12:19:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:23.487 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.746 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:23.747 "name": "raid_bdev1", 00:37:23.747 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:23.747 "strip_size_kb": 0, 00:37:23.747 "state": "online", 00:37:23.747 "raid_level": "raid1", 00:37:23.747 "superblock": true, 00:37:23.747 "num_base_bdevs": 2, 00:37:23.747 "num_base_bdevs_discovered": 2, 00:37:23.747 "num_base_bdevs_operational": 2, 00:37:23.747 "base_bdevs_list": [ 00:37:23.747 { 00:37:23.747 "name": "spare", 00:37:23.747 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:23.747 "is_configured": true, 00:37:23.747 "data_offset": 256, 00:37:23.747 "data_size": 7936 00:37:23.747 }, 00:37:23.747 { 00:37:23.747 "name": "BaseBdev2", 00:37:23.747 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:23.747 "is_configured": true, 00:37:23.747 "data_offset": 256, 00:37:23.747 "data_size": 7936 00:37:23.747 } 00:37:23.747 ] 00:37:23.747 }' 00:37:23.747 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:23.747 12:19:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.313 12:19:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:37:24.570 [2024-07-21 12:19:23.381183] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:24.570 [2024-07-21 12:19:23.381344] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:24.570 [2024-07-21 12:19:23.381567] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:24.570 [2024-07-21 12:19:23.381750] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:24.570 [2024-07-21 12:19:23.381877] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008d80 name raid_bdev1, state offline 00:37:24.570 12:19:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:24.570 12:19:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:37:24.828 12:19:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:37:24.828 12:19:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:37:24.828 12:19:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' true = true ']' 00:37:24.828 12:19:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:25.086 12:19:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:25.343 [2024-07-21 12:19:24.041292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:25.343 [2024-07-21 12:19:24.041569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:25.343 [2024-07-21 12:19:24.041646] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:37:25.343 [2024-07-21 12:19:24.041879] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:25.343 [2024-07-21 12:19:24.043910] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:25.343 [2024-07-21 12:19:24.044101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:25.343 [2024-07-21 12:19:24.044277] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:25.343 [2024-07-21 12:19:24.044430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:25.343 [2024-07-21 12:19:24.044684] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:25.343 spare 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:25.343 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:25.343 [2024-07-21 12:19:24.144891] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009f80 00:37:25.343 [2024-07-21 12:19:24.145030] bdev_raid.c:1695:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:25.343 [2024-07-21 12:19:24.145226] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:25.343 [2024-07-21 12:19:24.145421] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009f80 00:37:25.343 [2024-07-21 12:19:24.145522] bdev_raid.c:1725:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009f80 00:37:25.343 [2024-07-21 12:19:24.145696] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:25.601 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:25.601 "name": "raid_bdev1", 00:37:25.601 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:25.601 "strip_size_kb": 0, 00:37:25.601 "state": "online", 00:37:25.601 "raid_level": "raid1", 00:37:25.601 "superblock": true, 00:37:25.601 "num_base_bdevs": 2, 00:37:25.601 "num_base_bdevs_discovered": 2, 00:37:25.601 "num_base_bdevs_operational": 2, 00:37:25.601 "base_bdevs_list": [ 00:37:25.601 { 00:37:25.601 "name": "spare", 00:37:25.601 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:25.601 "is_configured": true, 00:37:25.601 "data_offset": 256, 00:37:25.601 "data_size": 7936 00:37:25.601 }, 00:37:25.601 { 00:37:25.601 "name": "BaseBdev2", 00:37:25.601 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:25.601 "is_configured": true, 00:37:25.601 "data_offset": 256, 00:37:25.601 "data_size": 7936 00:37:25.601 } 00:37:25.601 ] 00:37:25.601 }' 00:37:25.601 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:25.601 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:26.166 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:26.166 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:26.166 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:26.166 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:26.166 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:26.166 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.166 12:19:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.424 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:26.424 "name": "raid_bdev1", 00:37:26.424 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:26.424 "strip_size_kb": 0, 00:37:26.424 "state": "online", 00:37:26.424 "raid_level": "raid1", 00:37:26.424 "superblock": true, 00:37:26.424 "num_base_bdevs": 2, 00:37:26.424 "num_base_bdevs_discovered": 2, 00:37:26.424 "num_base_bdevs_operational": 2, 00:37:26.424 "base_bdevs_list": [ 00:37:26.424 { 00:37:26.424 "name": "spare", 00:37:26.424 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:26.424 "is_configured": true, 00:37:26.424 "data_offset": 256, 00:37:26.424 "data_size": 7936 00:37:26.424 }, 00:37:26.424 { 00:37:26.424 "name": "BaseBdev2", 00:37:26.424 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:26.424 "is_configured": true, 00:37:26.424 "data_offset": 256, 00:37:26.424 "data_size": 7936 00:37:26.424 } 00:37:26.424 ] 00:37:26.424 }' 00:37:26.424 12:19:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:26.424 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:26.424 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:26.424 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:26.425 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:26.425 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:26.682 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:37:26.682 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:37:26.939 [2024-07-21 12:19:25.616633] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.939 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:27.197 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:27.197 "name": "raid_bdev1", 00:37:27.197 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:27.197 "strip_size_kb": 0, 00:37:27.197 "state": "online", 00:37:27.197 "raid_level": "raid1", 00:37:27.197 "superblock": true, 00:37:27.197 "num_base_bdevs": 2, 00:37:27.197 "num_base_bdevs_discovered": 1, 00:37:27.197 "num_base_bdevs_operational": 1, 00:37:27.197 "base_bdevs_list": [ 00:37:27.197 { 00:37:27.197 "name": null, 00:37:27.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:27.197 "is_configured": false, 00:37:27.197 "data_offset": 256, 00:37:27.197 "data_size": 7936 00:37:27.197 }, 
00:37:27.197 { 00:37:27.197 "name": "BaseBdev2", 00:37:27.197 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:27.197 "is_configured": true, 00:37:27.197 "data_offset": 256, 00:37:27.197 "data_size": 7936 00:37:27.197 } 00:37:27.197 ] 00:37:27.197 }' 00:37:27.197 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:27.197 12:19:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:27.762 12:19:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:37:27.762 [2024-07-21 12:19:26.628803] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:27.762 [2024-07-21 12:19:26.629100] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:27.762 [2024-07-21 12:19:26.629257] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:28.019 [2024-07-21 12:19:26.629381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:28.019 [2024-07-21 12:19:26.633229] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:37:28.019 [2024-07-21 12:19:26.635500] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:28.019 12:19:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:37:28.952 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:28.952 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:28.952 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:28.952 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:28.952 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:28.952 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:28.952 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.209 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:29.209 "name": "raid_bdev1", 00:37:29.209 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:29.209 "strip_size_kb": 0, 00:37:29.209 "state": "online", 00:37:29.209 "raid_level": "raid1", 00:37:29.209 "superblock": true, 00:37:29.209 "num_base_bdevs": 2, 00:37:29.209 "num_base_bdevs_discovered": 2, 00:37:29.209 "num_base_bdevs_operational": 2, 00:37:29.209 "process": { 00:37:29.209 "type": "rebuild", 00:37:29.209 "target": "spare", 00:37:29.209 "progress": { 00:37:29.209 "blocks": 3072, 00:37:29.209 "percent": 38 00:37:29.209 } 00:37:29.209 }, 00:37:29.209 "base_bdevs_list": [ 00:37:29.209 { 00:37:29.209 "name": "spare", 00:37:29.209 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:29.209 "is_configured": true, 00:37:29.209 "data_offset": 256, 00:37:29.209 "data_size": 7936 00:37:29.209 }, 00:37:29.209 { 
00:37:29.209 "name": "BaseBdev2", 00:37:29.209 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:29.209 "is_configured": true, 00:37:29.209 "data_offset": 256, 00:37:29.209 "data_size": 7936 00:37:29.209 } 00:37:29.209 ] 00:37:29.209 }' 00:37:29.209 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:29.209 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:29.209 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:29.209 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:29.209 12:19:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:29.467 [2024-07-21 12:19:28.218095] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:29.467 [2024-07-21 12:19:28.243780] bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:29.467 [2024-07-21 12:19:28.243983] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:29.467 [2024-07-21 12:19:28.244123] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:29.467 [2024-07-21 12:19:28.244222] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:29.467 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.724 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:29.724 "name": "raid_bdev1", 00:37:29.724 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:29.724 "strip_size_kb": 0, 00:37:29.724 "state": "online", 00:37:29.724 "raid_level": "raid1", 00:37:29.724 "superblock": true, 00:37:29.724 "num_base_bdevs": 2, 
00:37:29.724 "num_base_bdevs_discovered": 1, 00:37:29.724 "num_base_bdevs_operational": 1, 00:37:29.724 "base_bdevs_list": [ 00:37:29.724 { 00:37:29.724 "name": null, 00:37:29.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.724 "is_configured": false, 00:37:29.724 "data_offset": 256, 00:37:29.724 "data_size": 7936 00:37:29.724 }, 00:37:29.724 { 00:37:29.724 "name": "BaseBdev2", 00:37:29.724 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:29.724 "is_configured": true, 00:37:29.724 "data_offset": 256, 00:37:29.724 "data_size": 7936 00:37:29.724 } 00:37:29.724 ] 00:37:29.724 }' 00:37:29.724 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:29.724 12:19:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:30.659 12:19:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:37:30.659 [2024-07-21 12:19:29.400694] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:30.659 [2024-07-21 12:19:29.400914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:30.659 [2024-07-21 12:19:29.401062] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:37:30.659 [2024-07-21 12:19:29.401202] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:30.659 [2024-07-21 12:19:29.401521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:30.659 [2024-07-21 12:19:29.401702] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:30.659 [2024-07-21 12:19:29.401880] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:30.659 [2024-07-21 12:19:29.401998] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:30.659 [2024-07-21 12:19:29.402090] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:30.659 [2024-07-21 12:19:29.402252] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:30.659 [2024-07-21 12:19:29.405372] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:37:30.659 spare 00:37:30.659 [2024-07-21 12:19:29.407487] bdev_raid.c:2824:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:30.659 12:19:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:37:31.594 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:31.594 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:31.594 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:37:31.594 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:37:31.594 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:31.594 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:31.594 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.852 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:31.852 "name": "raid_bdev1", 00:37:31.852 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:31.852 "strip_size_kb": 0, 00:37:31.852 "state": "online", 00:37:31.852 "raid_level": "raid1", 00:37:31.852 "superblock": true, 00:37:31.852 "num_base_bdevs": 2, 00:37:31.852 "num_base_bdevs_discovered": 2, 00:37:31.852 "num_base_bdevs_operational": 2, 00:37:31.852 "process": { 00:37:31.852 "type": "rebuild", 00:37:31.852 "target": "spare", 00:37:31.852 "progress": { 00:37:31.852 "blocks": 3072, 00:37:31.852 "percent": 38 00:37:31.852 } 00:37:31.852 }, 00:37:31.852 "base_bdevs_list": [ 00:37:31.852 { 00:37:31.852 "name": "spare", 00:37:31.852 "uuid": "fc9d01e1-2663-5814-a04d-0d44b348c498", 00:37:31.852 "is_configured": true, 00:37:31.853 "data_offset": 256, 00:37:31.853 "data_size": 7936 00:37:31.853 }, 00:37:31.853 { 00:37:31.853 "name": "BaseBdev2", 00:37:31.853 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:31.853 "is_configured": true, 00:37:31.853 "data_offset": 256, 00:37:31.853 "data_size": 7936 00:37:31.853 } 00:37:31.853 ] 00:37:31.853 }' 00:37:31.853 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:32.110 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:32.110 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:32.110 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:37:32.110 12:19:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:37:32.368 [2024-07-21 12:19:31.006623] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:32.368 [2024-07-21 12:19:31.015421] 
bdev_raid.c:2513:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:32.368 [2024-07-21 12:19:31.015620] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:32.368 [2024-07-21 12:19:31.015749] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:32.368 [2024-07-21 12:19:31.015792] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:32.368 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.625 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:32.625 "name": "raid_bdev1", 00:37:32.625 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:32.625 "strip_size_kb": 0, 00:37:32.625 "state": "online", 00:37:32.625 "raid_level": "raid1", 00:37:32.625 "superblock": true, 00:37:32.625 "num_base_bdevs": 2, 00:37:32.625 "num_base_bdevs_discovered": 1, 00:37:32.625 "num_base_bdevs_operational": 1, 00:37:32.625 "base_bdevs_list": [ 00:37:32.625 { 00:37:32.625 "name": null, 00:37:32.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:32.625 "is_configured": false, 00:37:32.625 "data_offset": 256, 00:37:32.625 "data_size": 7936 00:37:32.625 }, 00:37:32.625 { 00:37:32.625 "name": "BaseBdev2", 00:37:32.625 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:32.625 "is_configured": true, 00:37:32.625 "data_offset": 256, 00:37:32.625 "data_size": 7936 00:37:32.625 } 00:37:32.625 ] 00:37:32.625 }' 00:37:32.625 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:32.625 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:33.195 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:33.195 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
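Each verify_raid_bdev_state / verify_raid_bdev_process pass traced above and below reduces to the same query-and-compare pattern. A minimal sketch of that pattern, using only the paths seen in this run (the rpc.py script and the /var/tmp/spdk-raid.sock RPC socket):

    # dump all raid bdevs from the running target and keep only raid_bdev1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'
    # state checks compare .state, .raid_level and the num_base_bdevs_* counters of this JSON
    # against the expected values passed to the helper; process checks read
    # .process.type // "none" and .process.target // "none", as in the jq filters traced here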
00:37:33.195 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:33.195 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:33.195 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:33.195 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:33.195 12:19:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:33.476 12:19:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:33.476 "name": "raid_bdev1", 00:37:33.476 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:33.476 "strip_size_kb": 0, 00:37:33.476 "state": "online", 00:37:33.476 "raid_level": "raid1", 00:37:33.476 "superblock": true, 00:37:33.476 "num_base_bdevs": 2, 00:37:33.476 "num_base_bdevs_discovered": 1, 00:37:33.476 "num_base_bdevs_operational": 1, 00:37:33.476 "base_bdevs_list": [ 00:37:33.476 { 00:37:33.476 "name": null, 00:37:33.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:33.476 "is_configured": false, 00:37:33.476 "data_offset": 256, 00:37:33.476 "data_size": 7936 00:37:33.476 }, 00:37:33.476 { 00:37:33.476 "name": "BaseBdev2", 00:37:33.476 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:33.476 "is_configured": true, 00:37:33.476 "data_offset": 256, 00:37:33.476 "data_size": 7936 00:37:33.476 } 00:37:33.476 ] 00:37:33.476 }' 00:37:33.476 12:19:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:33.476 12:19:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:33.476 12:19:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:33.476 12:19:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:33.476 12:19:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:37:33.748 12:19:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:34.006 [2024-07-21 12:19:32.752857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:34.006 [2024-07-21 12:19:32.753065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:34.006 [2024-07-21 12:19:32.753158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:37:34.006 [2024-07-21 12:19:32.753396] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:34.006 [2024-07-21 12:19:32.753671] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:34.006 [2024-07-21 12:19:32.753801] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:34.006 [2024-07-21 12:19:32.753969] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:34.006 [2024-07-21 12:19:32.754075] bdev_raid.c:3562:raid_bdev_examine_sb: 
*DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:34.006 [2024-07-21 12:19:32.754176] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:34.006 BaseBdev1 00:37:34.006 12:19:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:37:34.937 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:34.937 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:34.937 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:34.937 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:34.937 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:34.937 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:34.937 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:34.937 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:34.937 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:34.938 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:34.938 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:34.938 12:19:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.195 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:35.195 "name": "raid_bdev1", 00:37:35.195 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:35.195 "strip_size_kb": 0, 00:37:35.195 "state": "online", 00:37:35.195 "raid_level": "raid1", 00:37:35.195 "superblock": true, 00:37:35.195 "num_base_bdevs": 2, 00:37:35.195 "num_base_bdevs_discovered": 1, 00:37:35.195 "num_base_bdevs_operational": 1, 00:37:35.195 "base_bdevs_list": [ 00:37:35.195 { 00:37:35.195 "name": null, 00:37:35.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:35.195 "is_configured": false, 00:37:35.195 "data_offset": 256, 00:37:35.195 "data_size": 7936 00:37:35.195 }, 00:37:35.195 { 00:37:35.195 "name": "BaseBdev2", 00:37:35.195 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:35.195 "is_configured": true, 00:37:35.195 "data_offset": 256, 00:37:35.195 "data_size": 7936 00:37:35.195 } 00:37:35.195 ] 00:37:35.195 }' 00:37:35.195 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:35.195 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:35.759 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:35.759 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:35.759 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local 
process_type=none 00:37:35.759 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:35.759 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:35.759 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:35.759 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.017 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:36.017 "name": "raid_bdev1", 00:37:36.017 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:36.017 "strip_size_kb": 0, 00:37:36.017 "state": "online", 00:37:36.017 "raid_level": "raid1", 00:37:36.017 "superblock": true, 00:37:36.017 "num_base_bdevs": 2, 00:37:36.017 "num_base_bdevs_discovered": 1, 00:37:36.017 "num_base_bdevs_operational": 1, 00:37:36.017 "base_bdevs_list": [ 00:37:36.017 { 00:37:36.017 "name": null, 00:37:36.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:36.017 "is_configured": false, 00:37:36.017 "data_offset": 256, 00:37:36.017 "data_size": 7936 00:37:36.017 }, 00:37:36.017 { 00:37:36.017 "name": "BaseBdev2", 00:37:36.017 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:36.017 "is_configured": true, 00:37:36.017 "data_offset": 256, 00:37:36.017 "data_size": 7936 00:37:36.017 } 00:37:36.017 ] 00:37:36.017 }' 00:37:36.017 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:36.017 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:36.017 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:36.275 12:19:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:37:36.275 12:19:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:36.531 [2024-07-21 12:19:35.177325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:36.531 [2024-07-21 12:19:35.177654] bdev_raid.c:3562:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:36.531 [2024-07-21 12:19:35.177779] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:36.531 request: 00:37:36.531 { 00:37:36.531 "raid_bdev": "raid_bdev1", 00:37:36.531 "base_bdev": "BaseBdev1", 00:37:36.531 "method": "bdev_raid_add_base_bdev", 00:37:36.531 "req_id": 1 00:37:36.531 } 00:37:36.531 Got JSON-RPC error response 00:37:36.531 response: 00:37:36.531 { 00:37:36.531 "code": -22, 00:37:36.531 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:36.531 } 00:37:36.531 12:19:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:37:36.531 12:19:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:36.531 12:19:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:36.531 12:19:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:36.531 12:19:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:37.463 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:37.720 
12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:37:37.720 "name": "raid_bdev1", 00:37:37.720 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:37.720 "strip_size_kb": 0, 00:37:37.720 "state": "online", 00:37:37.720 "raid_level": "raid1", 00:37:37.720 "superblock": true, 00:37:37.720 "num_base_bdevs": 2, 00:37:37.720 "num_base_bdevs_discovered": 1, 00:37:37.720 "num_base_bdevs_operational": 1, 00:37:37.720 "base_bdevs_list": [ 00:37:37.720 { 00:37:37.720 "name": null, 00:37:37.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:37.720 "is_configured": false, 00:37:37.720 "data_offset": 256, 00:37:37.720 "data_size": 7936 00:37:37.720 }, 00:37:37.720 { 00:37:37.720 "name": "BaseBdev2", 00:37:37.720 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:37.720 "is_configured": true, 00:37:37.720 "data_offset": 256, 00:37:37.720 "data_size": 7936 00:37:37.720 } 00:37:37.720 ] 00:37:37.720 }' 00:37:37.720 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:37:37.720 12:19:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:38.284 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:38.284 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:37:38.284 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:37:38.284 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:37:38.284 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:37:38.284 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:37:38.284 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.541 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:38.541 "name": "raid_bdev1", 00:37:38.541 "uuid": "7e049b82-acad-4cd1-a938-e2d840b0c181", 00:37:38.541 "strip_size_kb": 0, 00:37:38.541 "state": "online", 00:37:38.541 "raid_level": "raid1", 00:37:38.541 "superblock": true, 00:37:38.541 "num_base_bdevs": 2, 00:37:38.541 "num_base_bdevs_discovered": 1, 00:37:38.541 "num_base_bdevs_operational": 1, 00:37:38.541 "base_bdevs_list": [ 00:37:38.541 { 00:37:38.541 "name": null, 00:37:38.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:38.541 "is_configured": false, 00:37:38.541 "data_offset": 256, 00:37:38.541 "data_size": 7936 00:37:38.541 }, 00:37:38.541 { 00:37:38.541 "name": "BaseBdev2", 00:37:38.541 "uuid": "74a33d91-21c2-5cdb-b23d-1f3330ca95ec", 00:37:38.541 "is_configured": true, 00:37:38.541 "data_offset": 256, 00:37:38.541 "data_size": 7936 00:37:38.541 } 00:37:38.541 ] 00:37:38.541 }' 00:37:38.541 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:37:38.541 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:37:38.541 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:37:38.800 12:19:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 173249 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@946 -- # '[' -z 173249 ']' 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # kill -0 173249 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # uname 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 173249 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # echo 'killing process with pid 173249' 00:37:38.800 killing process with pid 173249 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@965 -- # kill 173249 00:37:38.800 Received shutdown signal, test time was about 60.000000 seconds 00:37:38.800 00:37:38.800 Latency(us) 00:37:38.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:38.800 =================================================================================================================== 00:37:38.800 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:38.800 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # wait 173249 00:37:38.800 [2024-07-21 12:19:37.432945] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:38.800 [2024-07-21 12:19:37.433315] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:38.800 [2024-07-21 12:19:37.433536] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:38.800 [2024-07-21 12:19:37.433667] bdev_raid.c: 366:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009f80 name raid_bdev1, state offline 00:37:38.800 [2024-07-21 12:19:37.476606] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:39.059 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:37:39.059 00:37:39.059 real 0m29.071s 00:37:39.059 user 0m47.806s 00:37:39.059 sys 0m2.615s 00:37:39.059 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:39.059 12:19:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:39.059 ************************************ 00:37:39.059 END TEST raid_rebuild_test_sb_md_interleaved 00:37:39.059 ************************************ 00:37:39.059 12:19:37 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:37:39.059 12:19:37 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:37:39.059 12:19:37 bdev_raid -- bdev/bdev_raid.sh@58 -- # '[' -n 173249 ']' 00:37:39.059 12:19:37 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 173249 00:37:39.059 12:19:37 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:37:39.059 
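The teardown traced above is a plain kill-and-clean sequence; a rough sketch of the equivalent shell steps, where 173249 is the raid application pid from this particular run:

    kill 173249        # stop the SPDK app (running as reactor_0)
    wait 173249        # wait for it to exit before cleaning up
    rm -rf /raidtest   # remove the scratch directory used by the raid tests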
************************************ 00:37:39.059 END TEST bdev_raid 00:37:39.059 ************************************ 00:37:39.059 00:37:39.059 real 23m52.417s 00:37:39.059 user 41m56.421s 00:37:39.059 sys 2m56.308s 00:37:39.059 12:19:37 bdev_raid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:39.059 12:19:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:39.059 12:19:37 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:37:39.059 12:19:37 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:39.059 12:19:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:39.059 12:19:37 -- common/autotest_common.sh@10 -- # set +x 00:37:39.059 ************************************ 00:37:39.059 START TEST bdevperf_config 00:37:39.059 ************************************ 00:37:39.059 12:19:37 bdevperf_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:37:39.318 * Looking for test storage... 00:37:39.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:39.318 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:39.318 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:39.318 12:19:37 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:39.318 12:19:38 bdevperf_config 
-- bdevperf/common.sh@10 -- # local filename= 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:39.318 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:39.318 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:39.318 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:39.318 12:19:38 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:42.604 12:19:40 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-21 12:19:38.063706] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:42.604 [2024-07-21 12:19:38.063900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174076 ] 00:37:42.604 Using job config with 4 jobs 00:37:42.604 [2024-07-21 12:19:38.216652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.604 [2024-07-21 12:19:38.305617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.604 cpumask for '\''job0'\'' is too big 00:37:42.604 cpumask for '\''job1'\'' is too big 00:37:42.604 cpumask for '\''job2'\'' is too big 00:37:42.604 cpumask for '\''job3'\'' is too big 00:37:42.604 Running I/O for 2 seconds... 
00:37:42.604 00:37:42.604 Latency(us) 00:37:42.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.01 31802.59 31.06 0.00 0.00 8045.68 1616.06 12988.04 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.01 31781.15 31.04 0.00 0.00 8036.71 1608.61 11319.85 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.02 31760.65 31.02 0.00 0.00 8026.87 1571.37 10545.34 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.02 31834.17 31.09 0.00 0.00 7993.66 696.32 10664.49 00:37:42.604 =================================================================================================================== 00:37:42.604 Total : 127178.56 124.20 0.00 0.00 8025.70 696.32 12988.04' 00:37:42.604 12:19:40 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-21 12:19:38.063706] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:42.604 [2024-07-21 12:19:38.063900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174076 ] 00:37:42.604 Using job config with 4 jobs 00:37:42.604 [2024-07-21 12:19:38.216652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.604 [2024-07-21 12:19:38.305617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.604 cpumask for '\''job0'\'' is too big 00:37:42.604 cpumask for '\''job1'\'' is too big 00:37:42.604 cpumask for '\''job2'\'' is too big 00:37:42.604 cpumask for '\''job3'\'' is too big 00:37:42.604 Running I/O for 2 seconds... 00:37:42.604 00:37:42.604 Latency(us) 00:37:42.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.01 31802.59 31.06 0.00 0.00 8045.68 1616.06 12988.04 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.01 31781.15 31.04 0.00 0.00 8036.71 1608.61 11319.85 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.02 31760.65 31.02 0.00 0.00 8026.87 1571.37 10545.34 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.02 31834.17 31.09 0.00 0.00 7993.66 696.32 10664.49 00:37:42.604 =================================================================================================================== 00:37:42.604 Total : 127178.56 124.20 0.00 0.00 8025.70 696.32 12988.04' 00:37:42.604 12:19:40 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-21 12:19:38.063706] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:42.604 [2024-07-21 12:19:38.063900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174076 ] 00:37:42.604 Using job config with 4 jobs 00:37:42.604 [2024-07-21 12:19:38.216652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.604 [2024-07-21 12:19:38.305617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.604 cpumask for '\''job0'\'' is too big 00:37:42.604 cpumask for '\''job1'\'' is too big 00:37:42.604 cpumask for '\''job2'\'' is too big 00:37:42.604 cpumask for '\''job3'\'' is too big 00:37:42.604 Running I/O for 2 seconds... 00:37:42.604 00:37:42.604 Latency(us) 00:37:42.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.01 31802.59 31.06 0.00 0.00 8045.68 1616.06 12988.04 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.01 31781.15 31.04 0.00 0.00 8036.71 1608.61 11319.85 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.02 31760.65 31.02 0.00 0.00 8026.87 1571.37 10545.34 00:37:42.604 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:42.604 Malloc0 : 2.02 31834.17 31.09 0.00 0.00 7993.66 696.32 10664.49 00:37:42.604 =================================================================================================================== 00:37:42.604 Total : 127178.56 124.20 0.00 0.00 8025.70 696.32 12988.04' 00:37:42.604 12:19:40 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:37:42.604 12:19:40 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:37:42.604 12:19:40 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:37:42.604 12:19:40 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:42.605 [2024-07-21 12:19:40.949801] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:42.605 [2024-07-21 12:19:40.950293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174119 ] 00:37:42.605 [2024-07-21 12:19:41.117964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.605 [2024-07-21 12:19:41.200525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:42.605 cpumask for 'job0' is too big 00:37:42.605 cpumask for 'job1' is too big 00:37:42.605 cpumask for 'job2' is too big 00:37:42.605 cpumask for 'job3' is too big 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:37:45.134 Running I/O for 2 seconds... 
00:37:45.134 00:37:45.134 Latency(us) 00:37:45.134 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.134 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:45.134 Malloc0 : 2.01 32350.72 31.59 0.00 0.00 7904.71 1616.06 12749.73 00:37:45.134 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:45.134 Malloc0 : 2.02 32360.81 31.60 0.00 0.00 7888.08 1549.03 11141.12 00:37:45.134 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:45.134 Malloc0 : 2.02 32339.66 31.58 0.00 0.00 7878.61 1593.72 9532.51 00:37:45.134 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:37:45.134 Malloc0 : 2.02 32318.91 31.56 0.00 0.00 7869.42 1541.59 8996.31 00:37:45.134 =================================================================================================================== 00:37:45.134 Total : 129370.10 126.34 0.00 0.00 7885.19 1541.59 12749.73' 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:45.134 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:45.134 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:45.134 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:45.134 12:19:43 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
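[editor's note] The test_config.sh@23/@33/@43 assertions in this transcript recover the job count from the captured bdevperf output via the two grep -oE passes traced at common.sh@32, and the create_job calls just above appear to append [job0]/[job1]/[job2] sections (with the rw/filename values shown) to test.conf before bdevperf is re-run with -j. A minimal standalone sketch of that extraction, reconstructed from the xtrace above (the helper name get_num_jobs and the grep pipeline are taken from the log; the exact function layout is an assumption):

    # Sketch: pull the job count out of bdevperf's captured stdout.
    # bdevperf prints a line such as "Using job config with 4 jobs".
    get_num_jobs() {
        local bdevperf_output=$1
        echo "$bdevperf_output" \
            | grep -oE 'Using job config with [0-9]+ jobs' \
            | grep -oE '[0-9]+'
    }

    # Usage mirroring test_config.sh@23: assert the expected number of jobs ran.
    [[ "$(get_num_jobs "$bdevperf_output")" == 4 ]]

The transcript resumes with the output of the bdevperf invocation launched above.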
00:37:48.412 12:19:46 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-21 12:19:43.854053] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:48.412 [2024-07-21 12:19:43.854302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174157 ] 00:37:48.412 Using job config with 3 jobs 00:37:48.412 [2024-07-21 12:19:44.018976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.412 [2024-07-21 12:19:44.112275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.412 cpumask for '\''job0'\'' is too big 00:37:48.412 cpumask for '\''job1'\'' is too big 00:37:48.412 cpumask for '\''job2'\'' is too big 00:37:48.412 Running I/O for 2 seconds... 00:37:48.412 00:37:48.413 Latency(us) 00:37:48.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.413 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:48.413 Malloc0 : 2.01 43761.21 42.74 0.00 0.00 5843.33 1653.29 9115.46 00:37:48.413 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:48.413 Malloc0 : 2.01 43773.82 42.75 0.00 0.00 5830.47 1519.24 7477.06 00:37:48.413 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:48.413 Malloc0 : 2.01 43745.25 42.72 0.00 0.00 5824.53 1556.48 6732.33 00:37:48.413 =================================================================================================================== 00:37:48.413 Total : 131280.28 128.20 0.00 0.00 5832.77 1519.24 9115.46' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-21 12:19:43.854053] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:48.413 [2024-07-21 12:19:43.854302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174157 ] 00:37:48.413 Using job config with 3 jobs 00:37:48.413 [2024-07-21 12:19:44.018976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.413 [2024-07-21 12:19:44.112275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.413 cpumask for '\''job0'\'' is too big 00:37:48.413 cpumask for '\''job1'\'' is too big 00:37:48.413 cpumask for '\''job2'\'' is too big 00:37:48.413 Running I/O for 2 seconds... 
00:37:48.413 00:37:48.413 Latency(us) 00:37:48.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.413 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:48.413 Malloc0 : 2.01 43761.21 42.74 0.00 0.00 5843.33 1653.29 9115.46 00:37:48.413 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:48.413 Malloc0 : 2.01 43773.82 42.75 0.00 0.00 5830.47 1519.24 7477.06 00:37:48.413 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:48.413 Malloc0 : 2.01 43745.25 42.72 0.00 0.00 5824.53 1556.48 6732.33 00:37:48.413 =================================================================================================================== 00:37:48.413 Total : 131280.28 128.20 0.00 0.00 5832.77 1519.24 9115.46' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-21 12:19:43.854053] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:48.413 [2024-07-21 12:19:43.854302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174157 ] 00:37:48.413 Using job config with 3 jobs 00:37:48.413 [2024-07-21 12:19:44.018976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.413 [2024-07-21 12:19:44.112275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:48.413 cpumask for '\''job0'\'' is too big 00:37:48.413 cpumask for '\''job1'\'' is too big 00:37:48.413 cpumask for '\''job2'\'' is too big 00:37:48.413 Running I/O for 2 seconds... 00:37:48.413 00:37:48.413 Latency(us) 00:37:48.413 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:48.413 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:48.413 Malloc0 : 2.01 43761.21 42.74 0.00 0.00 5843.33 1653.29 9115.46 00:37:48.413 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:48.413 Malloc0 : 2.01 43773.82 42.75 0.00 0.00 5830.47 1519.24 7477.06 00:37:48.413 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:37:48.413 Malloc0 : 2.01 43745.25 42.72 0.00 0.00 5824.53 1556.48 6732.33 00:37:48.413 =================================================================================================================== 00:37:48.413 Total : 131280.28 128.20 0.00 0.00 5832.77 1519.24 9115.46' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:37:48.413 12:19:46 
bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:37:48.413 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:48.413 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:48.413 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:48.413 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:37:48.413 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:37:48.413 12:19:46 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:50.961 12:19:49 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-21 12:19:46.764452] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:37:50.961 [2024-07-21 12:19:46.764686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174203 ] 00:37:50.961 Using job config with 4 jobs 00:37:50.961 [2024-07-21 12:19:46.927738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.961 [2024-07-21 12:19:47.015657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.961 cpumask for '\''job0'\'' is too big 00:37:50.961 cpumask for '\''job1'\'' is too big 00:37:50.961 cpumask for '\''job2'\'' is too big 00:37:50.961 cpumask for '\''job3'\'' is too big 00:37:50.961 Running I/O for 2 seconds... 00:37:50.961 00:37:50.961 Latency(us) 00:37:50.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.961 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc0 : 2.02 15442.17 15.08 0.00 0.00 16568.87 4974.78 40751.48 00:37:50.961 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc1 : 2.02 15431.76 15.07 0.00 0.00 16559.19 5779.08 40751.48 00:37:50.961 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc0 : 2.04 15457.05 15.09 0.00 0.00 16458.15 4825.83 35746.91 00:37:50.961 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc1 : 2.04 15446.86 15.08 0.00 0.00 16451.12 5719.51 35746.91 00:37:50.961 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc0 : 2.04 15437.00 15.08 0.00 0.00 16388.33 4796.04 30980.65 00:37:50.961 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc1 : 2.04 15426.89 15.07 0.00 0.00 16379.26 5749.29 30980.65 00:37:50.961 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc0 : 2.05 15510.50 15.15 0.00 0.00 16219.99 2815.07 25976.09 00:37:50.961 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc1 : 2.05 15500.40 15.14 0.00 0.00 16211.13 2159.71 25976.09 00:37:50.961 =================================================================================================================== 00:37:50.961 Total : 123652.63 120.75 0.00 0.00 16403.80 2159.71 40751.48' 00:37:50.961 12:19:49 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-21 12:19:46.764452] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:50.961 [2024-07-21 12:19:46.764686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174203 ] 00:37:50.961 Using job config with 4 jobs 00:37:50.961 [2024-07-21 12:19:46.927738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.961 [2024-07-21 12:19:47.015657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.961 cpumask for '\''job0'\'' is too big 00:37:50.961 cpumask for '\''job1'\'' is too big 00:37:50.961 cpumask for '\''job2'\'' is too big 00:37:50.961 cpumask for '\''job3'\'' is too big 00:37:50.961 Running I/O for 2 seconds... 
00:37:50.961 00:37:50.961 Latency(us) 00:37:50.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.961 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc0 : 2.02 15442.17 15.08 0.00 0.00 16568.87 4974.78 40751.48 00:37:50.961 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.961 Malloc1 : 2.02 15431.76 15.07 0.00 0.00 16559.19 5779.08 40751.48 00:37:50.962 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc0 : 2.04 15457.05 15.09 0.00 0.00 16458.15 4825.83 35746.91 00:37:50.962 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc1 : 2.04 15446.86 15.08 0.00 0.00 16451.12 5719.51 35746.91 00:37:50.962 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc0 : 2.04 15437.00 15.08 0.00 0.00 16388.33 4796.04 30980.65 00:37:50.962 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc1 : 2.04 15426.89 15.07 0.00 0.00 16379.26 5749.29 30980.65 00:37:50.962 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc0 : 2.05 15510.50 15.15 0.00 0.00 16219.99 2815.07 25976.09 00:37:50.962 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc1 : 2.05 15500.40 15.14 0.00 0.00 16211.13 2159.71 25976.09 00:37:50.962 =================================================================================================================== 00:37:50.962 Total : 123652.63 120.75 0.00 0.00 16403.80 2159.71 40751.48' 00:37:50.962 12:19:49 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-21 12:19:46.764452] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:50.962 [2024-07-21 12:19:46.764686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174203 ] 00:37:50.962 Using job config with 4 jobs 00:37:50.962 [2024-07-21 12:19:46.927738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.962 [2024-07-21 12:19:47.015657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:50.962 cpumask for '\''job0'\'' is too big 00:37:50.962 cpumask for '\''job1'\'' is too big 00:37:50.962 cpumask for '\''job2'\'' is too big 00:37:50.962 cpumask for '\''job3'\'' is too big 00:37:50.962 Running I/O for 2 seconds... 
00:37:50.962 00:37:50.962 Latency(us) 00:37:50.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.962 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc0 : 2.02 15442.17 15.08 0.00 0.00 16568.87 4974.78 40751.48 00:37:50.962 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc1 : 2.02 15431.76 15.07 0.00 0.00 16559.19 5779.08 40751.48 00:37:50.962 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc0 : 2.04 15457.05 15.09 0.00 0.00 16458.15 4825.83 35746.91 00:37:50.962 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc1 : 2.04 15446.86 15.08 0.00 0.00 16451.12 5719.51 35746.91 00:37:50.962 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc0 : 2.04 15437.00 15.08 0.00 0.00 16388.33 4796.04 30980.65 00:37:50.962 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc1 : 2.04 15426.89 15.07 0.00 0.00 16379.26 5749.29 30980.65 00:37:50.962 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc0 : 2.05 15510.50 15.15 0.00 0.00 16219.99 2815.07 25976.09 00:37:50.962 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:37:50.962 Malloc1 : 2.05 15500.40 15.14 0.00 0.00 16211.13 2159.71 25976.09 00:37:50.962 =================================================================================================================== 00:37:50.962 Total : 123652.63 120.75 0.00 0.00 16403.80 2159.71 40751.48' 00:37:50.962 12:19:49 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:37:50.962 12:19:49 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:37:50.962 12:19:49 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:37:50.962 12:19:49 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:37:50.962 12:19:49 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:37:50.962 12:19:49 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:37:50.962 00:37:50.962 real 0m11.655s 00:37:50.962 user 0m9.920s 00:37:50.962 sys 0m1.146s 00:37:50.962 ************************************ 00:37:50.962 END TEST bdevperf_config 00:37:50.962 ************************************ 00:37:50.962 12:19:49 bdevperf_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:37:50.962 12:19:49 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:37:50.962 12:19:49 -- spdk/autotest.sh@192 -- # uname -s 00:37:50.962 12:19:49 -- spdk/autotest.sh@192 -- # [[ Linux == Linux ]] 00:37:50.962 12:19:49 -- spdk/autotest.sh@193 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:37:50.962 12:19:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:37:50.962 12:19:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:37:50.962 12:19:49 -- common/autotest_common.sh@10 -- # set +x 00:37:50.962 ************************************ 00:37:50.962 START TEST reactor_set_interrupt 00:37:50.962 ************************************ 00:37:50.962 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:37:50.962 * Looking for test storage... 00:37:50.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:50.962 12:19:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:37:50.962 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:37:50.962 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:50.962 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:37:50.962 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:37:50.962 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:50.962 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:37:50.962 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:37:50.962 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e 00:37:50.962 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:37:50.962 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob 00:37:50.962 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:37:50.962 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:37:50.962 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 
00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_CET=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:37:50.962 12:19:49 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:37:50.963 12:19:49 
reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_FC=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:37:50.963 12:19:49 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_URING=n 00:37:50.963 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:37:50.963 12:19:49 
reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:37:50.963 #define SPDK_CONFIG_H 00:37:50.963 #define SPDK_CONFIG_APPS 1 00:37:50.963 #define SPDK_CONFIG_ARCH native 00:37:50.963 #define SPDK_CONFIG_ASAN 1 00:37:50.963 #undef SPDK_CONFIG_AVAHI 00:37:50.963 #undef SPDK_CONFIG_CET 00:37:50.963 #define SPDK_CONFIG_COVERAGE 1 00:37:50.963 #define SPDK_CONFIG_CROSS_PREFIX 00:37:50.963 #undef SPDK_CONFIG_CRYPTO 00:37:50.963 #undef SPDK_CONFIG_CRYPTO_MLX5 00:37:50.963 #undef SPDK_CONFIG_CUSTOMOCF 00:37:50.963 #undef SPDK_CONFIG_DAOS 00:37:50.963 #define SPDK_CONFIG_DAOS_DIR 00:37:50.963 #define SPDK_CONFIG_DEBUG 1 00:37:50.963 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:37:50.963 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:37:50.963 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:37:50.963 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:37:50.963 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:37:50.963 #undef SPDK_CONFIG_DPDK_UADK 00:37:50.963 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:37:50.963 #define SPDK_CONFIG_EXAMPLES 1 00:37:50.963 #undef SPDK_CONFIG_FC 00:37:50.963 #define SPDK_CONFIG_FC_PATH 00:37:50.963 #define SPDK_CONFIG_FIO_PLUGIN 1 00:37:50.963 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:37:50.963 #undef SPDK_CONFIG_FUSE 00:37:50.963 #undef SPDK_CONFIG_FUZZER 00:37:50.963 #define SPDK_CONFIG_FUZZER_LIB 00:37:50.963 #undef SPDK_CONFIG_GOLANG 00:37:50.963 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:37:50.963 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:37:50.963 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:37:50.963 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:37:50.963 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:37:50.963 #undef SPDK_CONFIG_HAVE_LIBBSD 00:37:50.963 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:37:50.963 #define SPDK_CONFIG_IDXD 1 00:37:50.963 #undef SPDK_CONFIG_IDXD_KERNEL 00:37:50.963 #undef SPDK_CONFIG_IPSEC_MB 00:37:50.963 #define SPDK_CONFIG_IPSEC_MB_DIR 00:37:50.963 #define SPDK_CONFIG_ISAL 1 00:37:50.963 #define 
SPDK_CONFIG_ISAL_CRYPTO 1 00:37:50.963 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:37:50.963 #define SPDK_CONFIG_LIBDIR 00:37:50.963 #undef SPDK_CONFIG_LTO 00:37:50.963 #define SPDK_CONFIG_MAX_LCORES 00:37:50.963 #define SPDK_CONFIG_NVME_CUSE 1 00:37:50.963 #undef SPDK_CONFIG_OCF 00:37:50.963 #define SPDK_CONFIG_OCF_PATH 00:37:50.963 #define SPDK_CONFIG_OPENSSL_PATH 00:37:50.963 #undef SPDK_CONFIG_PGO_CAPTURE 00:37:50.963 #define SPDK_CONFIG_PGO_DIR 00:37:50.963 #undef SPDK_CONFIG_PGO_USE 00:37:50.963 #define SPDK_CONFIG_PREFIX /usr/local 00:37:50.963 #define SPDK_CONFIG_RAID5F 1 00:37:50.963 #undef SPDK_CONFIG_RBD 00:37:50.963 #define SPDK_CONFIG_RDMA 1 00:37:50.963 #define SPDK_CONFIG_RDMA_PROV verbs 00:37:50.963 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:37:50.963 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:37:50.963 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:37:50.963 #undef SPDK_CONFIG_SHARED 00:37:50.963 #undef SPDK_CONFIG_SMA 00:37:50.963 #define SPDK_CONFIG_TESTS 1 00:37:50.963 #undef SPDK_CONFIG_TSAN 00:37:50.963 #undef SPDK_CONFIG_UBLK 00:37:50.963 #define SPDK_CONFIG_UBSAN 1 00:37:50.963 #define SPDK_CONFIG_UNIT_TESTS 1 00:37:50.963 #undef SPDK_CONFIG_URING 00:37:50.963 #define SPDK_CONFIG_URING_PATH 00:37:50.963 #undef SPDK_CONFIG_URING_ZNS 00:37:50.963 #undef SPDK_CONFIG_USDT 00:37:50.963 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:37:50.963 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:37:50.963 #undef SPDK_CONFIG_VFIO_USER 00:37:50.963 #define SPDK_CONFIG_VFIO_USER_DIR 00:37:50.963 #define SPDK_CONFIG_VHOST 1 00:37:50.963 #define SPDK_CONFIG_VIRTIO 1 00:37:50.963 #undef SPDK_CONFIG_VTUNE 00:37:50.963 #define SPDK_CONFIG_VTUNE_DIR 00:37:50.963 #define SPDK_CONFIG_WERROR 1 00:37:50.963 #define SPDK_CONFIG_WPDK_DIR 00:37:50.963 #undef SPDK_CONFIG_XNVME 00:37:50.963 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:37:50.963 12:19:49 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:37:50.963 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:50.963 12:19:49 reactor_set_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.963 12:19:49 reactor_set_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.963 12:19:49 reactor_set_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.963 12:19:49 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:50.963 12:19:49 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:50.963 12:19:49 reactor_set_interrupt -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:50.963 12:19:49 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH 00:37:50.964 12:19:49 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@68 -- # uname -s 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]= 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E' 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]] 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:37:50.964 12:19:49 reactor_set_interrupt -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@57 -- # : 1 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@61 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@63 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@65 -- # : 1 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@67 -- # : 1 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@69 -- # : 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@71 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@73 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@75 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@77 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@79 -- # : 1 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@81 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@83 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@85 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@87 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@89 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@91 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@93 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:37:50.964 12:19:49 reactor_set_interrupt -- 
common/autotest_common.sh@95 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@97 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@99 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@101 -- # : rdma 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@103 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@105 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@107 -- # : 1 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@109 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@111 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@113 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@115 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@117 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@119 -- # : 1 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@121 -- # : 1 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@125 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@127 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@129 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@131 -- # : 0 
00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@133 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@135 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@137 -- # : v23.11 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@139 -- # : true 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@141 -- # : 1 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@143 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@145 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@147 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@149 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@151 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@153 -- # : 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@155 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@157 -- # : 0 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:37:50.964 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@159 -- # : 0 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@161 -- # : 0 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@163 -- # : 0 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@168 -- # : 0 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@169 -- # 
export SPDK_TEST_NVMF_MDNS 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@170 -- # : 0 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@192 -- # export 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@199 -- # cat 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@252 -- # export QEMU_BIN= 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@252 -- # QEMU_BIN= 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@253 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:37:50.965 12:19:49 
reactor_set_interrupt -- common/autotest_common.sh@262 -- # export valgrind= 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@262 -- # valgrind= 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@268 -- # uname -s 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@278 -- # MAKE=make 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@298 -- # TEST_MODE= 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@317 -- # [[ -z 174281 ]] 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@317 -- # kill -0 174281 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@330 -- # local mount target_dir 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.mWNKCe 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.mWNKCe/tests/interrupt /tmp/spdk.mWNKCe 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@326 -- # df -T 00:37:50.965 12:19:49 reactor_set_interrupt -- 
common/autotest_common.sh@326 -- # grep -v Filesystem 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=1248956416 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253683200 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=4726784 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda1 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=8800600064 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=20616794112 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=11799416832 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=6265020416 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=6268395520 00:37:50.965 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=5242880 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=5242880 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda15 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=103061504 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=109395968 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=6334464 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:37:50.966 12:19:49 
reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=1253675008 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253679104 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # avails["$mount"]=98338402304 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@362 -- # uses["$mount"]=1364377600 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:37:50.966 * Looking for test storage... 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@367 -- # local target_space new_size 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@371 -- # mount=/ 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@373 -- # target_space=8800600064 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ ext4 == tmpfs ]] 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ ext4 == ramfs ]] 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@380 -- # new_size=14014009344 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:50.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@388 -- # return 0 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@1678 -- # set -o errtrace 00:37:50.966 12:19:49 
reactor_set_interrupt -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@1683 -- # true 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # xtrace_fd 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore 00:37:50.966 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:37:51.224 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:37:51.224 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=174325 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 174325 /var/tmp/spdk.sock 
00:37:51.224 12:19:49 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:37:51.224 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 174325 ']' 00:37:51.224 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:51.224 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:51.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:51.224 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:51.224 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:51.224 12:19:49 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:51.224 [2024-07-21 12:19:49.873978] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:51.224 [2024-07-21 12:19:49.874231] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174325 ] 00:37:51.224 [2024-07-21 12:19:50.055994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:51.482 [2024-07-21 12:19:50.125489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:51.482 [2024-07-21 12:19:50.125633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:51.482 [2024-07-21 12:19:50.125637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:51.482 [2024-07-21 12:19:50.204489] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
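The NOTICE lines above are the first target coming up: start_intr_tgt launches build/examples/interrupt_tgt with the flags recorded in the trace (-m 0x07 -r /var/tmp/spdk.sock -E -g), and waitforlisten blocks until the RPC socket answers before the test proceeds. A minimal stand-alone sketch of that launch-and-wait pattern, assuming the repository layout seen in this run and using rpc_get_methods as the readiness probe (the probe choice is an assumption, not a copy of the harness):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    # Launch the interrupt target with the same flags as this run.
    "$SPDK_DIR/build/examples/interrupt_tgt" -m 0x07 -r "$RPC_SOCK" -E -g &
    tgt_pid=$!

    # Poll the RPC socket until it responds (or give up after ~10 s).
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
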
00:37:52.047 12:19:50 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:52.047 12:19:50 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:37:52.047 12:19:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:37:52.047 12:19:50 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:52.611 Malloc0 00:37:52.611 Malloc1 00:37:52.611 Malloc2 00:37:52.611 12:19:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:37:52.611 12:19:51 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:37:52.611 12:19:51 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:52.611 12:19:51 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:37:52.611 5000+0 records in 00:37:52.611 5000+0 records out 00:37:52.611 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0268803 s, 381 MB/s 00:37:52.611 12:19:51 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:37:52.869 AIO0 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 174325 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 174325 without_thd 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=174325 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:37:52.869 12:19:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:53.140 12:19:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:37:53.140 12:19:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:37:53.140 12:19:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:37:53.140 12:19:51 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:37:53.140 12:19:51 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:37:53.140 12:19:51 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:37:53.140 12:19:51 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:53.140 12:19:51 reactor_set_interrupt -- 
interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:37:53.140 12:19:51 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:37:53.398 spdk_thread ids are 1 on reactor0. 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 174325 0 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174325 0 idle 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174325 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:53.398 12:19:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174325 -w 256 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174325 root 20 0 20.1t 80064 29152 S 0.0 0.7 0:00.34 reactor_0' 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174325 root 20 0 20.1t 80064 29152 S 0.0 0.7 0:00.34 reactor_0 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 174325 1 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174325 1 idle 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174325 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:53.399 12:19:52 reactor_set_interrupt -- 
interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:37:53.399 12:19:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174325 -w 256 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174328 root 20 0 20.1t 80064 29152 S 0.0 0.7 0:00.00 reactor_1' 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174328 root 20 0 20.1t 80064 29152 S 0.0 0.7 0:00.00 reactor_1 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 174325 2 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174325 2 idle 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174325 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174325 -w 256 00:37:53.657 12:19:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174329 root 20 0 20.1t 80064 29152 S 0.0 0.7 0:00.00 reactor_2' 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174329 root 20 0 20.1t 80064 29152 S 0.0 0.7 0:00.00 reactor_2 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:53.914 
12:19:52 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:37:53.914 [2024-07-21 12:19:52.763315] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:53.914 12:19:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:37:54.172 [2024-07-21 12:19:53.031037] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:37:54.172 [2024-07-21 12:19:53.031777] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:54.430 12:19:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:37:54.430 [2024-07-21 12:19:53.286955] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:37:54.430 [2024-07-21 12:19:53.287528] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 174325 0 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 174325 0 busy 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174325 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174325 -w 256 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174325 root 20 0 20.1t 80260 29152 R 93.3 0.7 0:00.78 reactor_0' 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174325 root 20 0 20.1t 80260 29152 R 93.3 0.7 0:00.78 reactor_0 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:54.688 12:19:53 reactor_set_interrupt -- 
interrupt/common.sh@25 -- # cpu_rate=93.3 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=93 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 93 -lt 70 ]] 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:37:54.688 12:19:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 174325 2 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 174325 2 busy 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174325 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174325 -w 256 00:37:54.689 12:19:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174329 root 20 0 20.1t 80260 29152 R 99.9 0.7 0:00.34 reactor_2' 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174329 root 20 0 20.1t 80260 29152 R 99.9 0.7 0:00.34 reactor_2 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:54.945 12:19:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:37:55.202 [2024-07-21 12:19:53.890936] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
00:37:55.203 [2024-07-21 12:19:53.891501] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 174325 2 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174325 2 idle 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174325 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174325 -w 256 00:37:55.203 12:19:53 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:55.203 12:19:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174329 root 20 0 20.1t 80316 29152 S 0.0 0.7 0:00.60 reactor_2' 00:37:55.203 12:19:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174329 root 20 0 20.1t 80316 29152 S 0.0 0.7 0:00.60 reactor_2 00:37:55.203 12:19:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:55.203 12:19:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:37:55.461 [2024-07-21 12:19:54.258901] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:37:55.461 [2024-07-21 12:19:54.259545] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:37:55.461 12:19:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:37:55.718 [2024-07-21 12:19:54.535242] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 174325 0 00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174325 0 idle 00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174325 00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:55.718 12:19:54 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:55.719 12:19:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174325 -w 256 00:37:55.719 12:19:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174325 root 20 0 20.1t 80412 29152 S 0.0 0.7 0:01.59 reactor_0' 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174325 root 20 0 20.1t 80412 29152 S 0.0 0.7 0:01.59 reactor_0 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:37:55.976 12:19:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 174325 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 174325 ']' 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 174325 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 174325 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:37:55.976 killing process with pid 174325 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 174325' 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@965 
-- # kill 174325 00:37:55.976 12:19:54 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 174325 00:37:56.234 12:19:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:37:56.234 12:19:55 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:37:56.234 12:19:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:37:56.234 12:19:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.234 12:19:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:37:56.234 12:19:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=174461 00:37:56.234 12:19:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:37:56.234 12:19:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:37:56.234 12:19:55 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 174461 /var/tmp/spdk.sock 00:37:56.234 12:19:55 reactor_set_interrupt -- common/autotest_common.sh@827 -- # '[' -z 174461 ']' 00:37:56.234 12:19:55 reactor_set_interrupt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.234 12:19:55 reactor_set_interrupt -- common/autotest_common.sh@832 -- # local max_retries=100 00:37:56.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.234 12:19:55 reactor_set_interrupt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.234 12:19:55 reactor_set_interrupt -- common/autotest_common.sh@836 -- # xtrace_disable 00:37:56.235 12:19:55 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:37:56.235 [2024-07-21 12:19:55.093878] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:37:56.235 [2024-07-21 12:19:55.094121] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174461 ] 00:37:56.493 [2024-07-21 12:19:55.271090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:56.493 [2024-07-21 12:19:55.333710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.493 [2024-07-21 12:19:55.333838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.493 [2024-07-21 12:19:55.333838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:37:56.750 [2024-07-21 12:19:55.413917] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
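From here the trace repeats the setup for the threaded variant (pid 174461): Malloc bdevs, the AIO file, and the reactor_get_thread_ids lookup that feeds thd0_ids/thd2_ids. That lookup is thread_get_stats filtered through jq on the cpumask, with the 0x prefix stripped first so it matches the RPC output; a sketch with the paths from this run (the example return values are read off this log, not guaranteed elsewhere):

    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Print the ids of threads pinned to one reactor's cpumask.
    reactor_thread_ids() {
        local cpumask=${1#0x}          # 0x1 -> 1, 0x4 -> 4, matching the trace
        "$RPC_PY" thread_get_stats \
            | jq --arg reactor_cpumask "$cpumask" \
                 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }

    reactor_thread_ids 0x1   # -> 1 in this run (app_thread on reactor 0)
    reactor_thread_ids 0x4   # -> empty until a thread is placed on core 2
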
00:37:57.316 12:19:56 reactor_set_interrupt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:37:57.316 12:19:56 reactor_set_interrupt -- common/autotest_common.sh@860 -- # return 0 00:37:57.316 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:37:57.316 12:19:56 reactor_set_interrupt -- interrupt/common.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:57.574 Malloc0 00:37:57.574 Malloc1 00:37:57.574 Malloc2 00:37:57.574 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:37:57.574 12:19:56 reactor_set_interrupt -- interrupt/common.sh@75 -- # uname -s 00:37:57.574 12:19:56 reactor_set_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:37:57.574 12:19:56 reactor_set_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:37:57.574 5000+0 records in 00:37:57.574 5000+0 records out 00:37:57.574 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0243921 s, 420 MB/s 00:37:57.574 12:19:56 reactor_set_interrupt -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:37:57.832 AIO0 00:37:57.832 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 174461 00:37:57.832 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 174461 00:37:57.832 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=174461 00:37:57.832 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:37:57.832 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:37:57.832 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:37:57.832 12:19:56 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x1 00:37:57.832 12:19:56 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:37:57.832 12:19:56 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=1 00:37:57.833 12:19:56 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:57.833 12:19:56 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:57.833 12:19:56 reactor_set_interrupt -- interrupt/common.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:37:58.105 12:19:56 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo 1 00:37:58.105 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:37:58.105 12:19:56 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:37:58.105 12:19:56 reactor_set_interrupt -- interrupt/common.sh@55 -- # local reactor_cpumask=0x4 00:37:58.105 12:19:56 reactor_set_interrupt -- interrupt/common.sh@56 -- # local grep_str 00:37:58.105 12:19:56 reactor_set_interrupt -- interrupt/common.sh@58 -- # reactor_cpumask=4 00:37:58.105 12:19:56 reactor_set_interrupt -- interrupt/common.sh@59 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:58.105 12:19:56 reactor_set_interrupt -- interrupt/common.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:37:58.105 12:19:56 reactor_set_interrupt -- interrupt/common.sh@62 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@62 -- # echo '' 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:37:58.406 spdk_thread ids are 1 on reactor0. 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 174461 0 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174461 0 idle 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174461 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174461 -w 256 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174461 root 20 0 20.1t 80068 29152 S 0.0 0.7 0:00.32 reactor_0' 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174461 root 20 0 20.1t 80068 29152 S 0.0 0.7 0:00.32 reactor_0 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 174461 1 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174461 1 idle 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174461 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != 
\b\u\s\y ]] 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174461 -w 256 00:37:58.406 12:19:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174476 root 20 0 20.1t 80068 29152 S 0.0 0.7 0:00.00 reactor_1' 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174476 root 20 0 20.1t 80068 29152 S 0.0 0.7 0:00.00 reactor_1 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 174461 2 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174461 2 idle 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174461 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174461 -w 256 00:37:58.670 12:19:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:58.927 12:19:57 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174477 root 20 0 20.1t 80068 29152 S 0.0 0.7 0:00.00 reactor_2' 00:37:58.927 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174477 root 20 0 20.1t 80068 29152 S 0.0 0.7 0:00.00 reactor_2 00:37:58.927 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:58.927 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:58.927 12:19:57 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:58.927 12:19:57 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:58.927 12:19:57 reactor_set_interrupt -- 
interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:58.928 12:19:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:58.928 12:19:57 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:58.928 12:19:57 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:58.928 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:37:58.928 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:37:58.928 [2024-07-21 12:19:57.743585] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:37:58.928 [2024-07-21 12:19:57.743830] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:37:58.928 [2024-07-21 12:19:57.744145] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:58.928 12:19:57 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:37:59.185 [2024-07-21 12:19:58.019519] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:37:59.185 [2024-07-21 12:19:58.019907] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 174461 0 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 174461 0 busy 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174461 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174461 -w 256 00:37:59.185 12:19:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174461 root 20 0 20.1t 80240 29152 R 99.9 0.7 0:00.78 reactor_0' 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174461 root 20 0 20.1t 80240 29152 R 99.9 0.7 0:00.78 reactor_0 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:37:59.443 
12:19:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 174461 2 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 174461 2 busy 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174461 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:59.443 12:19:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174461 -w 256 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174477 root 20 0 20.1t 80240 29152 R 99.9 0.7 0:00.34 reactor_2' 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174477 root 20 0 20.1t 80240 29152 R 99.9 0.7 0:00.34 reactor_2 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:59.701 12:19:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:37:59.959 [2024-07-21 12:19:58.627904] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 
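The busy/idle checks traced above all follow the same recipe: sample `top -bHn 1` for the target pid, grep the reactor_N thread row, pull the %CPU column ($9 here) with awk, truncate it to an integer, and compare it against the thresholds used in interrupt/common.sh (busy requires at least 70%, idle at most 30%). Below is a minimal stand-alone sketch of that check, not the SPDK helper itself; the function name check_reactor is hypothetical and the field position and thresholds are taken from the output in this log.

  # Sketch only: report success if thread reactor_<idx> of <pid> matches <state>.
  check_reactor() {
    local pid=$1 idx=$2 state=$3            # state: "busy" or "idle"
    local row cpu
    row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
    cpu=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu=${cpu%.*}                           # "99.9" -> "99", "0.0" -> "0", as the test does
    if [[ $state == busy ]]; then
      [[ ${cpu:-0} -ge 70 ]]                # a polling reactor should burn >= 70% CPU
    else
      [[ ${cpu:-0} -le 30 ]]                # an interrupt-mode reactor should stay <= 30%
    fi
  }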
00:37:59.959 [2024-07-21 12:19:58.628118] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 174461 2 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174461 2 idle 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174461 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174461 -w 256 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_2 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174477 root 20 0 20.1t 80284 29152 S 0.0 0.7 0:00.60 reactor_2' 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174477 root 20 0 20.1t 80284 29152 S 0.0 0.7 0:00.60 reactor_2 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:37:59.959 12:19:58 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:38:00.216 [2024-07-21 12:19:59.051926] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:38:00.216 [2024-07-21 12:19:59.052299] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
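The RPC toggling exercised here pairs each mode switch with one of those CPU checks: reactor_set_interrupt_mode N -d drops reactor N back to poll mode (the target logs "disable interrupt mode"), and the same call without -d restores interrupt mode. A compact sketch of that cycle, assuming the interrupt target from this run is still up with pid 174461 and using the hypothetical check_reactor helper sketched above:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$RPC" --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d   # -d: switch reactor 2 to poll mode
  check_reactor 174461 2 busy                                        # polling reactor should be busy
  "$RPC" --plugin interrupt_plugin reactor_set_interrupt_mode 2      # no -d: back to interrupt mode
  check_reactor 174461 2 idle                                        # interrupt-driven reactor should go quiet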
00:38:00.216 [2024-07-21 12:19:59.052364] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 174461 0 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 174461 0 idle 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=174461 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@18 -- # hash top 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 174461 -w 256 00:38:00.216 12:19:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@24 -- # top_reactor=' 174461 root 20 0 20.1t 80340 29152 S 6.7 0.7 0:01.65 reactor_0' 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # echo 174461 root 20 0 20.1t 80340 29152 S 6.7 0.7 0:01.65 reactor_0 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@25 -- # cpu_rate=6.7 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@26 -- # cpu_rate=6 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ 6 -gt 30 ]] 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/common.sh@33 -- # return 0 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:38:00.473 12:19:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 174461 00:38:00.473 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@946 -- # '[' -z 174461 ']' 00:38:00.473 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@950 -- # kill -0 174461 00:38:00.473 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@951 -- # uname 00:38:00.473 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:00.473 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 174461 00:38:00.473 killing process with pid 174461 00:38:00.473 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:00.473 12:19:59 reactor_set_interrupt -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:00.473 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 174461' 00:38:00.473 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@965 -- # kill 174461 00:38:00.473 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@970 -- # wait 174461 00:38:01.042 12:19:59 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:38:01.042 12:19:59 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:38:01.042 00:38:01.042 real 0m10.004s 00:38:01.042 user 0m9.892s 00:38:01.042 sys 0m1.556s 00:38:01.042 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:01.042 12:19:59 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x 00:38:01.042 ************************************ 00:38:01.042 END TEST reactor_set_interrupt 00:38:01.042 ************************************ 00:38:01.042 12:19:59 -- spdk/autotest.sh@194 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:38:01.042 12:19:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:01.042 12:19:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:01.042 12:19:59 -- common/autotest_common.sh@10 -- # set +x 00:38:01.042 ************************************ 00:38:01.042 START TEST reap_unregistered_poller 00:38:01.042 ************************************ 00:38:01.042 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:38:01.042 * Looking for test storage... 00:38:01.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:01.042 12:19:59 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:38:01.042 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:38:01.042 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:01.042 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:01.042 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
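The teardown a few entries above (killprocess 174461) follows a simple pattern: confirm the pid is still alive with kill -0, look up its command name with ps so that processes launched through sudo can be handled separately (here the name is reactor_0, so the sudo branch is skipped), then kill the pid and wait for it to exit. A minimal sketch of that pattern, not the autotest_common.sh helper itself (killproc_sketch is a hypothetical name):

  killproc_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1           # is the process still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")          # command name, e.g. reactor_0 above
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # wait only succeeds for children of this shell
  }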
00:38:01.042 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:01.042 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:38:01.042 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:38:01.042 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e 00:38:01.042 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:38:01.042 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob 00:38:01.042 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:38:01.042 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:38:01.042 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_CET=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:38:01.042 
12:19:59 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_UBLK=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:38:01.042 12:19:59 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@59 -- # 
CONFIG_IPSEC_MB_DIR= 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_SHARED=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_FC=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_RAID5F=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:38:01.043 12:19:59 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_URING=n 00:38:01.043 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:38:01.043 12:19:59 reap_unregistered_poller 
-- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:38:01.043 #define SPDK_CONFIG_H 00:38:01.043 #define SPDK_CONFIG_APPS 1 00:38:01.043 #define SPDK_CONFIG_ARCH native 00:38:01.043 #define SPDK_CONFIG_ASAN 1 00:38:01.043 #undef SPDK_CONFIG_AVAHI 00:38:01.043 #undef SPDK_CONFIG_CET 00:38:01.043 #define SPDK_CONFIG_COVERAGE 1 00:38:01.043 #define SPDK_CONFIG_CROSS_PREFIX 00:38:01.043 #undef SPDK_CONFIG_CRYPTO 00:38:01.043 #undef SPDK_CONFIG_CRYPTO_MLX5 00:38:01.043 #undef SPDK_CONFIG_CUSTOMOCF 00:38:01.043 #undef SPDK_CONFIG_DAOS 00:38:01.043 #define SPDK_CONFIG_DAOS_DIR 00:38:01.043 #define SPDK_CONFIG_DEBUG 1 00:38:01.043 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:38:01.043 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:38:01.043 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:38:01.043 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:38:01.043 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:38:01.043 #undef SPDK_CONFIG_DPDK_UADK 00:38:01.043 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:38:01.043 #define SPDK_CONFIG_EXAMPLES 1 00:38:01.043 #undef SPDK_CONFIG_FC 00:38:01.043 #define SPDK_CONFIG_FC_PATH 00:38:01.043 #define SPDK_CONFIG_FIO_PLUGIN 1 00:38:01.043 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:38:01.043 #undef SPDK_CONFIG_FUSE 00:38:01.043 #undef SPDK_CONFIG_FUZZER 00:38:01.043 #define SPDK_CONFIG_FUZZER_LIB 00:38:01.043 #undef SPDK_CONFIG_GOLANG 00:38:01.043 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:38:01.043 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:38:01.043 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:38:01.043 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:38:01.043 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:38:01.043 #undef SPDK_CONFIG_HAVE_LIBBSD 00:38:01.043 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:38:01.043 #define SPDK_CONFIG_IDXD 1 00:38:01.043 #undef SPDK_CONFIG_IDXD_KERNEL 00:38:01.043 #undef SPDK_CONFIG_IPSEC_MB 00:38:01.043 #define SPDK_CONFIG_IPSEC_MB_DIR 00:38:01.043 #define SPDK_CONFIG_ISAL 1 00:38:01.043 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:38:01.043 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:38:01.043 #define SPDK_CONFIG_LIBDIR 00:38:01.043 #undef SPDK_CONFIG_LTO 00:38:01.043 #define SPDK_CONFIG_MAX_LCORES 00:38:01.043 #define SPDK_CONFIG_NVME_CUSE 1 00:38:01.043 #undef SPDK_CONFIG_OCF 00:38:01.043 #define SPDK_CONFIG_OCF_PATH 00:38:01.043 #define SPDK_CONFIG_OPENSSL_PATH 00:38:01.043 #undef SPDK_CONFIG_PGO_CAPTURE 00:38:01.043 #define SPDK_CONFIG_PGO_DIR 00:38:01.043 #undef SPDK_CONFIG_PGO_USE 00:38:01.043 #define SPDK_CONFIG_PREFIX /usr/local 00:38:01.043 #define SPDK_CONFIG_RAID5F 1 00:38:01.043 #undef SPDK_CONFIG_RBD 00:38:01.043 #define 
SPDK_CONFIG_RDMA 1 00:38:01.043 #define SPDK_CONFIG_RDMA_PROV verbs 00:38:01.043 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:38:01.043 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:38:01.043 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:38:01.043 #undef SPDK_CONFIG_SHARED 00:38:01.043 #undef SPDK_CONFIG_SMA 00:38:01.043 #define SPDK_CONFIG_TESTS 1 00:38:01.043 #undef SPDK_CONFIG_TSAN 00:38:01.043 #undef SPDK_CONFIG_UBLK 00:38:01.043 #define SPDK_CONFIG_UBSAN 1 00:38:01.043 #define SPDK_CONFIG_UNIT_TESTS 1 00:38:01.043 #undef SPDK_CONFIG_URING 00:38:01.043 #define SPDK_CONFIG_URING_PATH 00:38:01.043 #undef SPDK_CONFIG_URING_ZNS 00:38:01.043 #undef SPDK_CONFIG_USDT 00:38:01.043 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:38:01.043 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:38:01.043 #undef SPDK_CONFIG_VFIO_USER 00:38:01.043 #define SPDK_CONFIG_VFIO_USER_DIR 00:38:01.043 #define SPDK_CONFIG_VHOST 1 00:38:01.043 #define SPDK_CONFIG_VIRTIO 1 00:38:01.043 #undef SPDK_CONFIG_VTUNE 00:38:01.043 #define SPDK_CONFIG_VTUNE_DIR 00:38:01.043 #define SPDK_CONFIG_WERROR 1 00:38:01.043 #define SPDK_CONFIG_WPDK_DIR 00:38:01.043 #undef SPDK_CONFIG_XNVME 00:38:01.043 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:38:01.043 12:19:59 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:38:01.043 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:01.043 12:19:59 reap_unregistered_poller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:01.043 12:19:59 reap_unregistered_poller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:01.043 12:19:59 reap_unregistered_poller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:01.043 12:19:59 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:01.043 12:19:59 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:01.043 12:19:59 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:01.043 12:19:59 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH 00:38:01.043 12:19:59 reap_unregistered_poller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:01.043 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:01.043 12:19:59 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@68 -- # uname -s 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]= 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E' 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]] 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:38:01.044 12:19:59 reap_unregistered_poller -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@57 -- # : 1 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@61 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@63 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@65 -- # : 1 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@67 -- # : 1 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@69 -- # : 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@71 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@73 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@75 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@77 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@79 -- # : 1 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@81 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@83 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@85 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@87 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@89 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@91 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@93 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- 
common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@95 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@97 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@99 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@101 -- # : rdma 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@103 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@105 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@107 -- # : 1 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@109 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@111 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@113 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@115 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@117 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@119 -- # : 1 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@121 -- # : 1 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@123 -- # : /home/vagrant/spdk_repo/dpdk/build 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@125 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@127 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:38:01.044 12:19:59 reap_unregistered_poller -- 
common/autotest_common.sh@129 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@131 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@133 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@135 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@137 -- # : v23.11 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@139 -- # : true 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@141 -- # : 1 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@143 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@145 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@147 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@149 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@151 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@153 -- # : 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@155 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@157 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@159 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@161 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@163 -- # : 0 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:38:01.044 12:19:59 
reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:38:01.044 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@168 -- # : 0 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@170 -- # : 0 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@199 -- # cat 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@252 -- # export QEMU_BIN= 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@252 -- # QEMU_BIN= 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@253 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:38:01.045 12:19:59 reap_unregistered_poller -- 
common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@262 -- # export valgrind= 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@262 -- # valgrind= 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@268 -- # uname -s 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@278 -- # MAKE=make 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@298 -- # TEST_MODE= 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@317 -- # [[ -z 174632 ]] 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@317 -- # kill -0 174632 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@330 -- # local mount target_dir 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.Atge3u 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:38:01.045 12:19:59 reap_unregistered_poller -- 
common/autotest_common.sh@344 -- # [[ -n '' ]] 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.Atge3u/tests/interrupt /tmp/spdk.Atge3u 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@326 -- # df -T 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=1248956416 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253683200 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=4726784 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda1 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=8800555008 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=20616794112 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=11799461888 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=6265020416 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=6268395520 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=5242880 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=5242880 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # 
mounts["$mount"]=/dev/vda15 00:38:01.045 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=103061504 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=109395968 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=6334464 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=1253675008 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253679104 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # avails["$mount"]=98338295808 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@362 -- # uses["$mount"]=1364484096 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:38:01.046 * Looking for test storage... 
00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@367 -- # local target_space new_size 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@371 -- # mount=/ 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@373 -- # target_space=8800555008 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ ext4 == tmpfs ]] 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ ext4 == ramfs ]] 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@380 -- # new_size=14014054400 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:01.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@388 -- # return 0 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@1678 -- # set -o errtrace 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@1683 -- # true 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # xtrace_fd 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=174676 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 174676 /var/tmp/spdk.sock 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@827 -- # '[' -z 174676 ']' 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.046 12:19:59 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@832 -- # local max_retries=100 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:01.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@836 -- # xtrace_disable 00:38:01.046 12:19:59 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:38:01.304 [2024-07-21 12:19:59.927967] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:01.304 [2024-07-21 12:19:59.928208] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174676 ] 00:38:01.304 [2024-07-21 12:20:00.112500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:01.561 [2024-07-21 12:20:00.192274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:38:01.561 [2024-07-21 12:20:00.192416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:01.561 [2024-07-21 12:20:00.192417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:38:01.561 [2024-07-21 12:20:00.287679] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:02.126 12:20:00 reap_unregistered_poller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:38:02.126 12:20:00 reap_unregistered_poller -- common/autotest_common.sh@860 -- # return 0 00:38:02.126 12:20:00 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:38:02.126 12:20:00 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.126 12:20:00 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:38:02.126 12:20:00 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:38:02.126 12:20:00 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:02.126 12:20:00 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:38:02.126 "name": "app_thread", 00:38:02.126 "id": 1, 00:38:02.126 "active_pollers": [], 00:38:02.126 "timed_pollers": [ 00:38:02.126 { 00:38:02.126 "name": "rpc_subsystem_poll_servers", 00:38:02.126 "id": 1, 00:38:02.126 "state": "waiting", 00:38:02.126 "run_count": 0, 00:38:02.126 "busy_count": 0, 00:38:02.126 "period_ticks": 8800000 00:38:02.126 } 00:38:02.126 ], 00:38:02.126 "paused_pollers": [] 00:38:02.126 }' 00:38:02.126 12:20:00 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:38:02.383 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:38:02.383 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:38:02.383 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:38:02.384 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:38:02.384 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:38:02.384 12:20:01 reap_unregistered_poller -- interrupt/common.sh@75 -- # uname -s 00:38:02.384 12:20:01 reap_unregistered_poller -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:38:02.384 12:20:01 reap_unregistered_poller -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:38:02.384 
5000+0 records in 00:38:02.384 5000+0 records out 00:38:02.384 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0272544 s, 376 MB/s 00:38:02.384 12:20:01 reap_unregistered_poller -- interrupt/common.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:38:02.641 AIO0 00:38:02.641 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:02.899 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:38:02.899 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:38:02.899 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:38:02.899 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@559 -- # xtrace_disable 00:38:02.899 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:38:03.157 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:38:03.157 "name": "app_thread", 00:38:03.157 "id": 1, 00:38:03.157 "active_pollers": [], 00:38:03.157 "timed_pollers": [ 00:38:03.157 { 00:38:03.157 "name": "rpc_subsystem_poll_servers", 00:38:03.157 "id": 1, 00:38:03.157 "state": "waiting", 00:38:03.157 "run_count": 0, 00:38:03.157 "busy_count": 0, 00:38:03.157 "period_ticks": 8800000 00:38:03.157 } 00:38:03.157 ], 00:38:03.157 "paused_pollers": [] 00:38:03.157 }' 00:38:03.157 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:38:03.157 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:38:03.157 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:38:03.157 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:38:03.157 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:38:03.157 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:38:03.157 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:38:03.157 12:20:01 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 174676 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@946 -- # '[' -z 174676 ']' 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@950 -- # kill -0 174676 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@951 -- # uname 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 174676 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:38:03.157 killing process with pid 174676 00:38:03.157 12:20:01 reap_unregistered_poller 
-- common/autotest_common.sh@964 -- # echo 'killing process with pid 174676' 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@965 -- # kill 174676 00:38:03.157 12:20:01 reap_unregistered_poller -- common/autotest_common.sh@970 -- # wait 174676 00:38:03.415 12:20:02 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:38:03.415 12:20:02 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:38:03.415 00:38:03.415 real 0m2.588s 00:38:03.415 user 0m1.762s 00:38:03.415 sys 0m0.528s 00:38:03.415 12:20:02 reap_unregistered_poller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:03.415 12:20:02 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x 00:38:03.415 ************************************ 00:38:03.415 END TEST reap_unregistered_poller 00:38:03.415 ************************************ 00:38:03.674 12:20:02 -- spdk/autotest.sh@198 -- # uname -s 00:38:03.674 12:20:02 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:38:03.674 12:20:02 -- spdk/autotest.sh@199 -- # [[ 1 -eq 1 ]] 00:38:03.674 12:20:02 -- spdk/autotest.sh@205 -- # [[ 0 -eq 0 ]] 00:38:03.674 12:20:02 -- spdk/autotest.sh@206 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:38:03.674 12:20:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:03.674 12:20:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:03.674 12:20:02 -- common/autotest_common.sh@10 -- # set +x 00:38:03.674 ************************************ 00:38:03.674 START TEST spdk_dd 00:38:03.674 ************************************ 00:38:03.674 12:20:02 spdk_dd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:38:03.674 * Looking for test storage... 
00:38:03.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:03.674 12:20:02 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:03.674 12:20:02 spdk_dd -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:03.674 12:20:02 spdk_dd -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:03.674 12:20:02 spdk_dd -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:03.674 12:20:02 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:03.674 12:20:02 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:03.674 12:20:02 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:03.674 12:20:02 spdk_dd -- paths/export.sh@5 -- # export PATH 00:38:03.674 12:20:02 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:03.674 12:20:02 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:38:03.932 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:38:03.932 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:38:05.311 12:20:03 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:38:05.311 12:20:03 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@309 -- # local bdf bdfs 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@310 -- # local nvmes 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@312 -- # [[ -n '' ]] 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@295 -- # local bdf= 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@230 -- # local class 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@231 -- # local subclass 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@232 -- # local progif 00:38:05.311 12:20:03 spdk_dd -- 
scripts/common.sh@233 -- # printf %02x 1 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@233 -- # class=01 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@234 -- # printf %02x 8 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@234 -- # subclass=08 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@235 -- # printf %02x 2 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@235 -- # progif=02 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@237 -- # hash lspci 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@239 -- # lspci -mm -n -D 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@240 -- # grep -i -- -p02 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@242 -- # tr -d '"' 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@15 -- # local i 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@22 -- # [[ -z '' ]] 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@24 -- # return 0 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@320 -- # uname -s 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@325 -- # (( 1 )) 00:38:05.311 12:20:03 spdk_dd -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:38:05.311 12:20:03 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@139 -- # local lib so 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.3 == 
liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:38:05.311 12:20:03 spdk_dd -- dd/common.sh@142 -- # read -r lib _ so _ 00:38:05.311 12:20:03 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:38:05.311 12:20:03 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:38:05.311 12:20:03 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:05.311 12:20:03 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:05.311 12:20:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:05.311 ************************************ 00:38:05.311 START TEST spdk_dd_basic_rw 00:38:05.311 ************************************ 00:38:05.311 12:20:03 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 00:38:05.311 * Looking for test storage... 
00:38:05.311 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:05.311 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:05.311 12:20:03 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:05.311 12:20:03 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:05.311 12:20:03 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:05.311 12:20:03 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:05.311 12:20:03 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' 
['traddr']='0000:00:10.0' ['trtype']='pcie') 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:38:05.312 12:20:03 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:38:05.572 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported 
Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational 
Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 109 Data Units Written: 7 Host Read Commands: 2344 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:38:05.572 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert 
Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion 
Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 109 Data Units Written: 7 Host Read Commands: 2344 Host Write Commands: 111 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 
00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:05.573 ************************************ 00:38:05.573 START TEST dd_bs_lt_native_bs 00:38:05.573 ************************************ 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1121 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@648 -- # local es=0 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:05.573 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:38:05.573 { 00:38:05.573 "subsystems": [ 00:38:05.573 { 00:38:05.573 "subsystem": "bdev", 00:38:05.573 "config": [ 00:38:05.573 { 00:38:05.573 "params": { 00:38:05.573 "trtype": "pcie", 00:38:05.573 "traddr": "0000:00:10.0", 00:38:05.573 "name": "Nvme0" 00:38:05.573 }, 00:38:05.573 "method": "bdev_nvme_attach_controller" 00:38:05.573 }, 00:38:05.573 { 00:38:05.573 "method": "bdev_wait_for_examine" 00:38:05.573 } 00:38:05.573 ] 00:38:05.573 } 00:38:05.573 ] 00:38:05.573 } 00:38:05.573 [2024-07-21 12:20:04.289139] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:05.573 [2024-07-21 12:20:04.289942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174979 ] 00:38:05.830 [2024-07-21 12:20:04.455935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:05.830 [2024-07-21 12:20:04.521069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:05.830 [2024-07-21 12:20:04.676445] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:38:05.830 [2024-07-21 12:20:04.676568] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:06.088 [2024-07-21 12:20:04.797741] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:06.088 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@651 -- # es=234 00:38:06.088 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:06.088 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@660 -- # es=106 00:38:06.088 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # case "$es" in 00:38:06.088 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@668 -- # es=1 00:38:06.088 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:06.088 00:38:06.088 real 0m0.703s 00:38:06.088 user 0m0.426s 00:38:06.088 sys 0m0.238s 00:38:06.088 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:06.088 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:38:06.088 ************************************ 00:38:06.088 END TEST dd_bs_lt_native_bs 00:38:06.088 ************************************ 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:06.347 ************************************ 00:38:06.347 START TEST dd_rw 00:38:06.347 ************************************ 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1121 -- # basic_rw 4096 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:38:06.347 
12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:06.347 12:20:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:06.913 12:20:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:38:06.913 12:20:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:06.913 12:20:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:06.913 12:20:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:06.913 { 00:38:06.913 "subsystems": [ 00:38:06.913 { 00:38:06.913 "subsystem": "bdev", 00:38:06.913 "config": [ 00:38:06.913 { 00:38:06.913 "params": { 00:38:06.913 "trtype": "pcie", 00:38:06.913 "traddr": "0000:00:10.0", 00:38:06.913 "name": "Nvme0" 00:38:06.913 }, 00:38:06.913 "method": "bdev_nvme_attach_controller" 00:38:06.913 }, 00:38:06.913 { 00:38:06.913 "method": "bdev_wait_for_examine" 00:38:06.913 } 00:38:06.913 ] 00:38:06.913 } 00:38:06.913 ] 00:38:06.913 } 00:38:06.913 [2024-07-21 12:20:05.602997] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:06.913 [2024-07-21 12:20:05.603231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175027 ] 00:38:06.913 [2024-07-21 12:20:05.768073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.170 [2024-07-21 12:20:05.826001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.428  Copying: 60/60 [kB] (average 19 MBps) 00:38:07.428 00:38:07.428 12:20:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:38:07.428 12:20:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:07.428 12:20:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:07.428 12:20:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:07.686 [2024-07-21 12:20:06.312721] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:07.686 [2024-07-21 12:20:06.312948] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175035 ] 00:38:07.686 { 00:38:07.686 "subsystems": [ 00:38:07.686 { 00:38:07.686 "subsystem": "bdev", 00:38:07.686 "config": [ 00:38:07.686 { 00:38:07.686 "params": { 00:38:07.686 "trtype": "pcie", 00:38:07.686 "traddr": "0000:00:10.0", 00:38:07.686 "name": "Nvme0" 00:38:07.686 }, 00:38:07.686 "method": "bdev_nvme_attach_controller" 00:38:07.686 }, 00:38:07.686 { 00:38:07.686 "method": "bdev_wait_for_examine" 00:38:07.686 } 00:38:07.686 ] 00:38:07.686 } 00:38:07.686 ] 00:38:07.686 } 00:38:07.686 [2024-07-21 12:20:06.479197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.944 [2024-07-21 12:20:06.565390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.202  Copying: 60/60 [kB] (average 19 MBps) 00:38:08.202 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:08.202 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:08.459 [2024-07-21 12:20:07.074757] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:08.459 [2024-07-21 12:20:07.074985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175056 ] 00:38:08.459 { 00:38:08.459 "subsystems": [ 00:38:08.459 { 00:38:08.459 "subsystem": "bdev", 00:38:08.459 "config": [ 00:38:08.459 { 00:38:08.459 "params": { 00:38:08.459 "trtype": "pcie", 00:38:08.459 "traddr": "0000:00:10.0", 00:38:08.459 "name": "Nvme0" 00:38:08.459 }, 00:38:08.459 "method": "bdev_nvme_attach_controller" 00:38:08.459 }, 00:38:08.459 { 00:38:08.459 "method": "bdev_wait_for_examine" 00:38:08.459 } 00:38:08.459 ] 00:38:08.459 } 00:38:08.459 ] 00:38:08.459 } 00:38:08.459 [2024-07-21 12:20:07.243182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:08.459 [2024-07-21 12:20:07.306515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:08.974  Copying: 1024/1024 [kB] (average 500 MBps) 00:38:08.974 00:38:08.974 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:08.974 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:38:08.974 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:38:08.974 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:38:08.974 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:38:08.974 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:08.974 12:20:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:09.539 12:20:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:38:09.539 12:20:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:09.539 12:20:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:09.539 12:20:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:09.539 { 00:38:09.539 "subsystems": [ 00:38:09.539 { 00:38:09.539 "subsystem": "bdev", 00:38:09.539 "config": [ 00:38:09.539 { 00:38:09.539 "params": { 00:38:09.539 "trtype": "pcie", 00:38:09.539 "traddr": "0000:00:10.0", 00:38:09.539 "name": "Nvme0" 00:38:09.539 }, 00:38:09.539 "method": "bdev_nvme_attach_controller" 00:38:09.539 }, 00:38:09.539 { 00:38:09.539 "method": "bdev_wait_for_examine" 00:38:09.539 } 00:38:09.539 ] 00:38:09.539 } 00:38:09.539 ] 00:38:09.539 } 00:38:09.539 [2024-07-21 12:20:08.325516] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:09.539 [2024-07-21 12:20:08.325747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175075 ] 00:38:09.796 [2024-07-21 12:20:08.494620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:09.796 [2024-07-21 12:20:08.559478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:10.313  Copying: 60/60 [kB] (average 58 MBps) 00:38:10.313 00:38:10.313 12:20:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:38:10.313 12:20:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:10.313 12:20:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:10.313 12:20:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:10.313 { 00:38:10.313 "subsystems": [ 00:38:10.313 { 00:38:10.313 "subsystem": "bdev", 00:38:10.313 "config": [ 00:38:10.313 { 00:38:10.313 "params": { 00:38:10.313 "trtype": "pcie", 00:38:10.313 "traddr": "0000:00:10.0", 00:38:10.313 "name": "Nvme0" 00:38:10.313 }, 00:38:10.313 "method": "bdev_nvme_attach_controller" 00:38:10.313 }, 00:38:10.313 { 00:38:10.313 "method": "bdev_wait_for_examine" 00:38:10.313 } 00:38:10.313 ] 00:38:10.313 } 00:38:10.313 ] 00:38:10.313 } 00:38:10.313 [2024-07-21 12:20:09.051935] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:10.313 [2024-07-21 12:20:09.052172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175091 ] 00:38:10.571 [2024-07-21 12:20:09.219431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:10.571 [2024-07-21 12:20:09.278775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.135  Copying: 60/60 [kB] (average 58 MBps) 00:38:11.135 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:11.135 12:20:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:11.135 { 00:38:11.135 "subsystems": [ 00:38:11.135 { 00:38:11.135 "subsystem": "bdev", 
00:38:11.135 "config": [ 00:38:11.135 { 00:38:11.135 "params": { 00:38:11.135 "trtype": "pcie", 00:38:11.135 "traddr": "0000:00:10.0", 00:38:11.135 "name": "Nvme0" 00:38:11.135 }, 00:38:11.135 "method": "bdev_nvme_attach_controller" 00:38:11.135 }, 00:38:11.135 { 00:38:11.135 "method": "bdev_wait_for_examine" 00:38:11.135 } 00:38:11.135 ] 00:38:11.135 } 00:38:11.135 ] 00:38:11.135 } 00:38:11.135 [2024-07-21 12:20:09.793784] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:11.135 [2024-07-21 12:20:09.794030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175111 ] 00:38:11.135 [2024-07-21 12:20:09.959620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:11.394 [2024-07-21 12:20:10.028897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:11.652  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:11.652 00:38:11.652 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:11.652 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:11.652 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:38:11.652 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:38:11.652 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:38:11.652 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:38:11.652 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:11.652 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:12.218 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:38:12.218 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:12.218 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:12.218 12:20:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:12.218 [2024-07-21 12:20:10.988905] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:12.218 { 00:38:12.218 "subsystems": [ 00:38:12.218 { 00:38:12.218 "subsystem": "bdev", 00:38:12.218 "config": [ 00:38:12.218 { 00:38:12.218 "params": { 00:38:12.218 "trtype": "pcie", 00:38:12.218 "traddr": "0000:00:10.0", 00:38:12.218 "name": "Nvme0" 00:38:12.218 }, 00:38:12.218 "method": "bdev_nvme_attach_controller" 00:38:12.218 }, 00:38:12.218 { 00:38:12.218 "method": "bdev_wait_for_examine" 00:38:12.218 } 00:38:12.218 ] 00:38:12.218 } 00:38:12.218 ] 00:38:12.218 } 00:38:12.218 [2024-07-21 12:20:10.989195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175131 ] 00:38:12.476 [2024-07-21 12:20:11.155752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.476 [2024-07-21 12:20:11.207430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.993  Copying: 56/56 [kB] (average 27 MBps) 00:38:12.993 00:38:12.993 12:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:38:12.993 12:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:12.993 12:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:12.993 12:20:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:12.993 { 00:38:12.993 "subsystems": [ 00:38:12.993 { 00:38:12.993 "subsystem": "bdev", 00:38:12.993 "config": [ 00:38:12.993 { 00:38:12.993 "params": { 00:38:12.993 "trtype": "pcie", 00:38:12.993 "traddr": "0000:00:10.0", 00:38:12.993 "name": "Nvme0" 00:38:12.993 }, 00:38:12.993 "method": "bdev_nvme_attach_controller" 00:38:12.994 }, 00:38:12.994 { 00:38:12.994 "method": "bdev_wait_for_examine" 00:38:12.994 } 00:38:12.994 ] 00:38:12.994 } 00:38:12.994 ] 00:38:12.994 } 00:38:12.994 [2024-07-21 12:20:11.693775] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:12.994 [2024-07-21 12:20:11.694012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175147 ] 00:38:12.994 [2024-07-21 12:20:11.860201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:13.252 [2024-07-21 12:20:11.922231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.510  Copying: 56/56 [kB] (average 27 MBps) 00:38:13.510 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:13.510 12:20:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:13.768 [2024-07-21 12:20:12.424909] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:13.768 [2024-07-21 12:20:12.425689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175162 ] 00:38:13.768 { 00:38:13.768 "subsystems": [ 00:38:13.768 { 00:38:13.768 "subsystem": "bdev", 00:38:13.768 "config": [ 00:38:13.768 { 00:38:13.768 "params": { 00:38:13.768 "trtype": "pcie", 00:38:13.768 "traddr": "0000:00:10.0", 00:38:13.768 "name": "Nvme0" 00:38:13.768 }, 00:38:13.768 "method": "bdev_nvme_attach_controller" 00:38:13.768 }, 00:38:13.768 { 00:38:13.768 "method": "bdev_wait_for_examine" 00:38:13.768 } 00:38:13.768 ] 00:38:13.768 } 00:38:13.768 ] 00:38:13.768 } 00:38:13.768 [2024-07-21 12:20:12.591350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.026 [2024-07-21 12:20:12.642619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.291  Copying: 1024/1024 [kB] (average 500 MBps) 00:38:14.291 00:38:14.291 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:14.291 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:38:14.291 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:38:14.291 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:38:14.291 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:38:14.291 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:14.291 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:14.859 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:38:14.859 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:14.859 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:14.859 12:20:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:14.859 [2024-07-21 12:20:13.687273] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:14.859 [2024-07-21 12:20:13.687502] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175189 ] 00:38:14.859 { 00:38:14.859 "subsystems": [ 00:38:14.859 { 00:38:14.859 "subsystem": "bdev", 00:38:14.859 "config": [ 00:38:14.859 { 00:38:14.859 "params": { 00:38:14.859 "trtype": "pcie", 00:38:14.859 "traddr": "0000:00:10.0", 00:38:14.859 "name": "Nvme0" 00:38:14.859 }, 00:38:14.859 "method": "bdev_nvme_attach_controller" 00:38:14.859 }, 00:38:14.859 { 00:38:14.859 "method": "bdev_wait_for_examine" 00:38:14.859 } 00:38:14.859 ] 00:38:14.859 } 00:38:14.859 ] 00:38:14.859 } 00:38:15.117 [2024-07-21 12:20:13.855204] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.117 [2024-07-21 12:20:13.934280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:15.632  Copying: 56/56 [kB] (average 54 MBps) 00:38:15.632 00:38:15.632 12:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:38:15.632 12:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:15.632 12:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:15.632 12:20:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:15.632 { 00:38:15.632 "subsystems": [ 00:38:15.632 { 00:38:15.632 "subsystem": "bdev", 00:38:15.632 "config": [ 00:38:15.632 { 00:38:15.632 "params": { 00:38:15.632 "trtype": "pcie", 00:38:15.632 "traddr": "0000:00:10.0", 00:38:15.632 "name": "Nvme0" 00:38:15.632 }, 00:38:15.632 "method": "bdev_nvme_attach_controller" 00:38:15.632 }, 00:38:15.632 { 00:38:15.632 "method": "bdev_wait_for_examine" 00:38:15.632 } 00:38:15.632 ] 00:38:15.632 } 00:38:15.632 ] 00:38:15.632 } 00:38:15.632 [2024-07-21 12:20:14.440570] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:15.632 [2024-07-21 12:20:14.440801] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175198 ] 00:38:15.890 [2024-07-21 12:20:14.608499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:15.890 [2024-07-21 12:20:14.675581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:16.406  Copying: 56/56 [kB] (average 54 MBps) 00:38:16.406 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:16.406 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:16.406 { 00:38:16.406 "subsystems": [ 00:38:16.406 { 00:38:16.406 "subsystem": "bdev", 00:38:16.406 "config": [ 00:38:16.406 { 00:38:16.406 "params": { 00:38:16.406 "trtype": "pcie", 00:38:16.406 "traddr": "0000:00:10.0", 00:38:16.406 "name": "Nvme0" 00:38:16.406 }, 00:38:16.406 "method": "bdev_nvme_attach_controller" 00:38:16.406 }, 00:38:16.406 { 00:38:16.406 "method": "bdev_wait_for_examine" 00:38:16.406 } 00:38:16.406 ] 00:38:16.406 } 00:38:16.406 ] 00:38:16.406 } 00:38:16.406 [2024-07-21 12:20:15.193918] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:16.406 [2024-07-21 12:20:15.194162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175218 ] 00:38:16.664 [2024-07-21 12:20:15.359583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:16.664 [2024-07-21 12:20:15.422638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:17.178  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:17.178 00:38:17.178 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:38:17.178 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:17.178 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:38:17.178 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:38:17.178 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:38:17.178 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:38:17.178 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:17.178 12:20:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:17.435 12:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:38:17.435 12:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:17.435 12:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:17.435 12:20:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:17.693 { 00:38:17.693 "subsystems": [ 00:38:17.693 { 00:38:17.693 "subsystem": "bdev", 00:38:17.693 "config": [ 00:38:17.693 { 00:38:17.693 "params": { 00:38:17.693 "trtype": "pcie", 00:38:17.693 "traddr": "0000:00:10.0", 00:38:17.693 "name": "Nvme0" 00:38:17.693 }, 00:38:17.693 "method": "bdev_nvme_attach_controller" 00:38:17.693 }, 00:38:17.693 { 00:38:17.693 "method": "bdev_wait_for_examine" 00:38:17.693 } 00:38:17.693 ] 00:38:17.693 } 00:38:17.693 ] 00:38:17.693 } 00:38:17.693 [2024-07-21 12:20:16.341850] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:17.693 [2024-07-21 12:20:16.342120] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175234 ] 00:38:17.693 [2024-07-21 12:20:16.507743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:17.951 [2024-07-21 12:20:16.568993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.209  Copying: 48/48 [kB] (average 46 MBps) 00:38:18.209 00:38:18.209 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:38:18.209 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:18.209 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:18.209 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:18.468 { 00:38:18.468 "subsystems": [ 00:38:18.468 { 00:38:18.468 "subsystem": "bdev", 00:38:18.468 "config": [ 00:38:18.468 { 00:38:18.468 "params": { 00:38:18.468 "trtype": "pcie", 00:38:18.468 "traddr": "0000:00:10.0", 00:38:18.468 "name": "Nvme0" 00:38:18.468 }, 00:38:18.468 "method": "bdev_nvme_attach_controller" 00:38:18.468 }, 00:38:18.468 { 00:38:18.468 "method": "bdev_wait_for_examine" 00:38:18.468 } 00:38:18.468 ] 00:38:18.468 } 00:38:18.468 ] 00:38:18.468 } 00:38:18.468 [2024-07-21 12:20:17.080980] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:18.468 [2024-07-21 12:20:17.081292] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175254 ] 00:38:18.468 [2024-07-21 12:20:17.248034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.468 [2024-07-21 12:20:17.310508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.984  Copying: 48/48 [kB] (average 46 MBps) 00:38:18.984 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:18.984 12:20:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:18.984 { 00:38:18.984 "subsystems": [ 00:38:18.984 { 00:38:18.984 "subsystem": "bdev", 
00:38:18.984 "config": [ 00:38:18.984 { 00:38:18.984 "params": { 00:38:18.984 "trtype": "pcie", 00:38:18.984 "traddr": "0000:00:10.0", 00:38:18.984 "name": "Nvme0" 00:38:18.984 }, 00:38:18.984 "method": "bdev_nvme_attach_controller" 00:38:18.984 }, 00:38:18.984 { 00:38:18.984 "method": "bdev_wait_for_examine" 00:38:18.984 } 00:38:18.984 ] 00:38:18.984 } 00:38:18.984 ] 00:38:18.984 } 00:38:18.984 [2024-07-21 12:20:17.834316] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:18.984 [2024-07-21 12:20:17.834713] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175270 ] 00:38:19.243 [2024-07-21 12:20:17.999577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:19.243 [2024-07-21 12:20:18.059748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:19.786  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:19.786 00:38:19.786 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:38:19.786 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:38:19.786 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:38:19.786 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:38:19.786 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:38:19.786 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:38:19.786 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:20.365 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:38:20.365 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:38:20.365 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:20.365 12:20:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:20.365 { 00:38:20.365 "subsystems": [ 00:38:20.365 { 00:38:20.365 "subsystem": "bdev", 00:38:20.365 "config": [ 00:38:20.365 { 00:38:20.365 "params": { 00:38:20.365 "trtype": "pcie", 00:38:20.365 "traddr": "0000:00:10.0", 00:38:20.365 "name": "Nvme0" 00:38:20.365 }, 00:38:20.365 "method": "bdev_nvme_attach_controller" 00:38:20.365 }, 00:38:20.365 { 00:38:20.365 "method": "bdev_wait_for_examine" 00:38:20.365 } 00:38:20.365 ] 00:38:20.365 } 00:38:20.365 ] 00:38:20.365 } 00:38:20.365 [2024-07-21 12:20:18.995575] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:20.365 [2024-07-21 12:20:18.996102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175290 ] 00:38:20.365 [2024-07-21 12:20:19.163744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.365 [2024-07-21 12:20:19.229221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.882  Copying: 48/48 [kB] (average 46 MBps) 00:38:20.882 00:38:20.882 12:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:38:20.882 12:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:38:20.882 12:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:20.882 12:20:19 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:20.882 { 00:38:20.882 "subsystems": [ 00:38:20.882 { 00:38:20.882 "subsystem": "bdev", 00:38:20.882 "config": [ 00:38:20.882 { 00:38:20.882 "params": { 00:38:20.882 "trtype": "pcie", 00:38:20.882 "traddr": "0000:00:10.0", 00:38:20.882 "name": "Nvme0" 00:38:20.882 }, 00:38:20.882 "method": "bdev_nvme_attach_controller" 00:38:20.882 }, 00:38:20.882 { 00:38:20.882 "method": "bdev_wait_for_examine" 00:38:20.882 } 00:38:20.882 ] 00:38:20.882 } 00:38:20.882 ] 00:38:20.882 } 00:38:20.882 [2024-07-21 12:20:19.726219] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:20.882 [2024-07-21 12:20:19.726645] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175308 ] 00:38:21.141 [2024-07-21 12:20:19.890937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.141 [2024-07-21 12:20:19.947510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:21.657  Copying: 48/48 [kB] (average 46 MBps) 00:38:21.657 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:21.657 12:20:20 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:21.657 [2024-07-21 12:20:20.447728] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / 
DPDK 23.11.0 initialization... 00:38:21.657 [2024-07-21 12:20:20.448199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175319 ] 00:38:21.657 { 00:38:21.657 "subsystems": [ 00:38:21.657 { 00:38:21.657 "subsystem": "bdev", 00:38:21.657 "config": [ 00:38:21.657 { 00:38:21.657 "params": { 00:38:21.657 "trtype": "pcie", 00:38:21.657 "traddr": "0000:00:10.0", 00:38:21.657 "name": "Nvme0" 00:38:21.657 }, 00:38:21.657 "method": "bdev_nvme_attach_controller" 00:38:21.657 }, 00:38:21.657 { 00:38:21.657 "method": "bdev_wait_for_examine" 00:38:21.657 } 00:38:21.657 ] 00:38:21.657 } 00:38:21.657 ] 00:38:21.657 } 00:38:21.915 [2024-07-21 12:20:20.614810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.915 [2024-07-21 12:20:20.680224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:22.433  Copying: 1024/1024 [kB] (average 500 MBps) 00:38:22.433 00:38:22.433 00:38:22.433 real 0m16.154s 00:38:22.433 user 0m10.532s 00:38:22.433 sys 0m4.146s 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:22.433 ************************************ 00:38:22.433 END TEST dd_rw 00:38:22.433 ************************************ 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:22.433 ************************************ 00:38:22.433 START TEST dd_rw_offset 00:38:22.433 ************************************ 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1121 -- # basic_offset 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=14jfz5beib48bgoqtfrlq270n3xi0thgxlsx6wbfubv2zv3wb21qd53s2uq9850r57wk3xw3q3w32izfd7zbb6wv8xf2bda39et9n7vubsd9jebcc8g5f9i4me7lj6xcnveako8fwhkpwz0ptvsh9cojtyns7b5c2jygjetm00v9rwr4xi07pk8044w2pqany6hbfaaa9584dwjl1cp1ccaejxswjc11sp3hiuf67ffbfog5r2sqamxdkk2owt0rjxq6qwjqqmkhf0wojdf6llw29dko62qn4rron2vny7mxo7quyke0nna6jd5dmtd0wpx1jity44b0q1x56bjlqyhyf4mxzbi6cgzl55htatyywytkd2iz3f3igxcnkgsfptcsyowhn8mnj6tau01e00bnhv27ghw1xvaqc9phjbcj56av6brxrtnb140ekefiqkldhy67m8wv35fr0laxf8kka0a4xw2dj2sc1qo7wo6c2itf4ijbam5u2x8qyxzg6ej4nprgtz7takbsz0ojk8rddn6xl887j93ung5adrb9t4ffskxohzmjlkttha0ncd7gwkummxlgldome0pa0q7zkbgtcujkuwg1uo2iy6fcctocex698yz1fimav97hibkr8vn1pcnsuhdtfj26lq69hqloacalgskp54l8sh52a1cc76y1v6agik8merflhdwt6hn3qngwoozs0oo4ieam5qg5p0j23royb1qc0hr3feru7k3vt41bcc09ui4glvykpqg4q0pstevfzfd4yha9yy0ue3t4fz92sv9txea1hm10ghk9y54edawid8r83ab3989bd0karif3ktwi8wa1gspgsekeao3az040hf76hlip51vjwwd1zjbi485h4oj515wfhmib5rwz6uff4uyg4x4fs1uz73tmj9bg9cf3b6gmos5jhb675o1tv2nomx18d6qxh301mdnjj0utoey48evb0ynnuqjxrs6wy2qpp15qmrj3xmonxase5uz9iotr9qn6l2dclmz1xr8vlx11nwwc0o07p0g9bl9u7elmpxxrsxhufyeraey02q7vyt0dhrtoelmlgls9qgacevsmcpguzqheaxne93epsvhy0fea9aimpznx9d2zmupd3y2mlw5gduzcjy34ca77ztqvvq0i80bc7hfq6shwgq0ryjam12hhortr868uyg3lsv3xj6b0oa2og7pe6ktqfbt3aoka4xiy2fgnyvj33cgp2jruzttvckoqga6qronu618z943edoh5es1gyxrz3kiddnwcfcvzjaatzvi62g153busi7nihif4i1juz0kv4qbglm4v56y705nwe5o5wncyf7x1ccfa69i9p4baxlbun5iej2vusqi8cerny1l6e6nb7vfbwj9rnr8kr394zkuwfqacyyzd8jaoxkl6yiefbr8azhr1hcy2ic8obxau1gz9zyi6jah3qnkaqkbljqohez8vkx60o2cpld6wr4m54w61w2ip1g72f3qxlpwpioicau0dfkkaxz9f1qx7bo1m3ywi10opxprd865d5ywnycesqy1nhf6hqbgqrnhbamcwh827988coolnol9lfblbx65d20wdd3itzzym8k9s6cyepwo5lu0bwgq9xldl2ixmnk8pkum15vrvrdj58rxewhgk9yxfpatoj9t7k1anai7w19ol5s1tpzy921yexbzqth3nc43c0w0n5qu1773kmurpt3g3hsb41hweur5lkzmj8r2edqq9e5o8fc2sih8hjz6xkgolzep3qelybwydqj1yfwgds3w6mnogxe3gq38u67awz887vr37p5dfnw03r472au6hkwv7f797s9i2ql1zbi6tqgfete3kichs68bti2s9mw5rlyfjdoakpvzw0fe54i0tpv0nfat5tc1w7a69wr70db1fz2is6k733drhrrdtt0lb96pa6mm8qmsyc7vq7sebd088l5r92eo433bgb935vh6d8lbbap35ir92rvtb2i07xwcc983zfiu5lg6thu3lsmtabvfadpuiz2zcg9azxd31c7aufqoia4l9gdwyzr7sux94gd91s1vl1dovijr667hvmjy88olx84ob15xxutjt7jk2875kj6i42y7wvxo4l5hkoxdjkrro2egdk8tszyu0gmxwcjddn2esdt3uor6fid1o603wcm1s86dtw47ps65bflvrcka9wjqu2n3hqalh1ov48jxmfdvara6fsu7p1659ncrv6wsku1tfit3dfiud211m5g9tl71yvxm7z7oxknzneqolgmu8bilwdxdy9bviltyr9wzj9zrl0gu7dzqfzqx0snra3d387c3kosf7aqbd8dsfj8jdelrk51i05kborzx79owhwyke6e41u3w6hwxwuxwmwdqjbtbi5ueijzgecuhn04nzjxgnrr4qou2v84wxjs7ivqvctvtwbaoijrb80h8w49h9w2m1l4b1mt5eqh6ilqgfkkattt2l5ov436tqmfgpobjjxa9pwbc2ddlv29y8jk7yhbe0j41wtuhio9isa9glnasjd56tvexihm6tzewgdmdw7ugi1295wkt748qbjfhgrwx5s643f2o9sh23qnniilsqoyhd3m37mc7amor8ffpodr6r52pit9p0c06jqtbgdaz76qybheguhhh6l91icepe7kcn2h07twszazbdbrfhfg6boffkmnp8vaolb3lsxi95tdnre7b7rn80rigguu53vuqrsc7ior2dwgnc3zdxltilu1jcnavkkj6twlwfj4b06gfccngdj7ypcgxupn980oj0peyx5rufitu0o9yykzo8wmp78xabbypr8cjw3ppyg2j4tzkthx1al3y30hbru0b7f4rwpv7y5gp87ig9bwdh1gl2zq76q6j1p28eq5myvdqj3b44m0h4l95725yeott723n9vpua370fu4fy43uugd1sn1rmeczfi0y44by9m71b7oyahfsgrir22bvjnhxk86djwblbfcwch3u7vkzpgbvyrj4jo7r36ssrcghuy2payp8au1wbi6dqtktiy5wuymboaw5bkrkxud1if2db75vviayh98rz12oyz989pttd4se860aijzxsfoo2xg58bv5t8hwz9c227hnvpy19950iqqgfraa4odktsmx3o67qzplcv56ql0e4hvso9k5ltmtug5to1ec9r9zajolew8je88j7v9g9du6jocnb17nmf4s1ol65kzzyckbidkv2hlidnvqyai69817o2n3dyxihdzk7q8kq8fxcne6hf7soiqcd9llm1ple6etwtzoc5vg6pd9iwm5q2ulwjyyck14dnuozopmomwhu99g5xa34wr6t4wjip03ie12lowegm00306nm6kueki2skrlmjocdb2a183f7xjglaitvp72j1hxdxu83uz9af8pi5jni47c0j4ajvkvsn3c1lcbl8snmszdnt1rxk7llb87xrnc70p6axzhuf3oxu1
zvavefjgqemnizye4p2qxjty9wcmhwziggok8bbydov6xzxujujsrjtepu8btg0sc8uuo54333ix4ubwspbrs2wgns9ztcjezwypid38lx3szf37vz5w2cmjwhhvimh23o1vc3ltjkdxu48yh6dbdhpjevg6f7gth831gvofyhdsaax51r6deca6pmijc5zlbfgml5g5eizs54xt7oyapfhosut7zm1kb9cjvgi6q9347wuwyy30gxjp21cxi1qqdoczmh04vbznv6dh7ngjiuup80h1fgntgvor6xa8pl3ts6n70cpu7avgg4wnozdglh2d9j3502qedv9jm5hsoxhswm3hemkz7579wydfjle8byj377tjeum9nxb9iwkdyiatycf3wb89xhborr3xnsp2piqy9hkdfehnyaxyvph0sxc0mvgg08is7ceg9i09rg276lk8e1dvjjm29bw52k68uek0t8cuv0i8ztvnvd9fgu58526ml0hhjgx770epuawzwwp9wiyo1r69zkmbnlemteqkyvntyb 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:38:22.433 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:22.433 [2024-07-21 12:20:21.287925] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:22.433 [2024-07-21 12:20:21.288360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175359 ] 00:38:22.433 { 00:38:22.433 "subsystems": [ 00:38:22.433 { 00:38:22.433 "subsystem": "bdev", 00:38:22.433 "config": [ 00:38:22.433 { 00:38:22.433 "params": { 00:38:22.433 "trtype": "pcie", 00:38:22.433 "traddr": "0000:00:10.0", 00:38:22.433 "name": "Nvme0" 00:38:22.433 }, 00:38:22.433 "method": "bdev_nvme_attach_controller" 00:38:22.433 }, 00:38:22.433 { 00:38:22.433 "method": "bdev_wait_for_examine" 00:38:22.433 } 00:38:22.433 ] 00:38:22.433 } 00:38:22.433 ] 00:38:22.434 } 00:38:22.693 [2024-07-21 12:20:21.440195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:22.693 [2024-07-21 12:20:21.495082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.209  Copying: 4096/4096 [B] (average 4000 kBps) 00:38:23.209 00:38:23.210 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:38:23.210 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:38:23.210 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:38:23.210 12:20:21 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:23.210 { 00:38:23.210 "subsystems": [ 00:38:23.210 { 00:38:23.210 "subsystem": "bdev", 00:38:23.210 "config": [ 00:38:23.210 { 00:38:23.210 "params": { 00:38:23.210 "trtype": "pcie", 00:38:23.210 "traddr": "0000:00:10.0", 00:38:23.210 "name": "Nvme0" 00:38:23.210 }, 00:38:23.210 "method": "bdev_nvme_attach_controller" 00:38:23.210 }, 00:38:23.210 { 00:38:23.210 "method": "bdev_wait_for_examine" 00:38:23.210 } 00:38:23.210 ] 00:38:23.210 } 00:38:23.210 ] 00:38:23.210 } 00:38:23.210 [2024-07-21 12:20:21.990076] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:23.210 [2024-07-21 12:20:21.990665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175376 ] 00:38:23.468 [2024-07-21 12:20:22.156681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.468 [2024-07-21 12:20:22.225727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.985  Copying: 4096/4096 [B] (average 4000 kBps) 00:38:23.985 00:38:23.985 12:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:38:23.985 12:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 14jfz5beib48bgoqtfrlq270n3xi0thgxlsx6wbfubv2zv3wb21qd53s2uq9850r57wk3xw3q3w32izfd7zbb6wv8xf2bda39et9n7vubsd9jebcc8g5f9i4me7lj6xcnveako8fwhkpwz0ptvsh9cojtyns7b5c2jygjetm00v9rwr4xi07pk8044w2pqany6hbfaaa9584dwjl1cp1ccaejxswjc11sp3hiuf67ffbfog5r2sqamxdkk2owt0rjxq6qwjqqmkhf0wojdf6llw29dko62qn4rron2vny7mxo7quyke0nna6jd5dmtd0wpx1jity44b0q1x56bjlqyhyf4mxzbi6cgzl55htatyywytkd2iz3f3igxcnkgsfptcsyowhn8mnj6tau01e00bnhv27ghw1xvaqc9phjbcj56av6brxrtnb140ekefiqkldhy67m8wv35fr0laxf8kka0a4xw2dj2sc1qo7wo6c2itf4ijbam5u2x8qyxzg6ej4nprgtz7takbsz0ojk8rddn6xl887j93ung5adrb9t4ffskxohzmjlkttha0ncd7gwkummxlgldome0pa0q7zkbgtcujkuwg1uo2iy6fcctocex698yz1fimav97hibkr8vn1pcnsuhdtfj26lq69hqloacalgskp54l8sh52a1cc76y1v6agik8merflhdwt6hn3qngwoozs0oo4ieam5qg5p0j23royb1qc0hr3feru7k3vt41bcc09ui4glvykpqg4q0pstevfzfd4yha9yy0ue3t4fz92sv9txea1hm10ghk9y54edawid8r83ab3989bd0karif3ktwi8wa1gspgsekeao3az040hf76hlip51vjwwd1zjbi485h4oj515wfhmib5rwz6uff4uyg4x4fs1uz73tmj9bg9cf3b6gmos5jhb675o1tv2nomx18d6qxh301mdnjj0utoey48evb0ynnuqjxrs6wy2qpp15qmrj3xmonxase5uz9iotr9qn6l2dclmz1xr8vlx11nwwc0o07p0g9bl9u7elmpxxrsxhufyeraey02q7vyt0dhrtoelmlgls9qgacevsmcpguzqheaxne93epsvhy0fea9aimpznx9d2zmupd3y2mlw5gduzcjy34ca77ztqvvq0i80bc7hfq6shwgq0ryjam12hhortr868uyg3lsv3xj6b0oa2og7pe6ktqfbt3aoka4xiy2fgnyvj33cgp2jruzttvckoqga6qronu618z943edoh5es1gyxrz3kiddnwcfcvzjaatzvi62g153busi7nihif4i1juz0kv4qbglm4v56y705nwe5o5wncyf7x1ccfa69i9p4baxlbun5iej2vusqi8cerny1l6e6nb7vfbwj9rnr8kr394zkuwfqacyyzd8jaoxkl6yiefbr8azhr1hcy2ic8obxau1gz9zyi6jah3qnkaqkbljqohez8vkx60o2cpld6wr4m54w61w2ip1g72f3qxlpwpioicau0dfkkaxz9f1qx7bo1m3ywi10opxprd865d5ywnycesqy1nhf6hqbgqrnhbamcwh827988coolnol9lfblbx65d20wdd3itzzym8k9s6cyepwo5lu0bwgq9xldl2ixmnk8pkum15vrvrdj58rxewhgk9yxfpatoj9t7k1anai7w19ol5s1tpzy921yexbzqth3nc43c0w0n5qu1773kmurpt3g3hsb41hweur5lkzmj8r2edqq9e5o8fc2sih8hjz6xkgolzep3qelybwydqj1yfwgds3w6mnogxe3gq38u67awz887vr37p5dfnw03r472au6hkwv7f797s9i2ql1zbi6tqgfete3kichs68bti2s9mw5rlyfjdoakpvzw0fe54i0tpv0nfat5tc1w7a69wr70db1fz2is6k733drhrrdtt0lb96pa6mm8qmsyc7vq7sebd088l5r92eo433bgb935vh6d8lbbap35ir92rvtb2i07xwcc983zfiu5lg6thu3lsmtabvfadpuiz2zcg9azxd31c7aufqoia4l9gdwyzr7sux94gd91s1vl1dovijr667hvmjy88olx84ob15xxutjt7jk2875kj6i42y7wvxo4l5hkoxdjkrro2egdk8tszyu0gmxwcjddn2esdt3uor6fid1o603wcm1s86dtw47ps65bflvrcka9wjqu2n3hqalh1ov48jxmfdvara6fsu7p1659ncrv6wsku1tfit3dfiud211m5g9tl71yvxm7z7oxknzneqolgmu8bilwdxdy9bviltyr9wzj9zrl0gu7dzqfzqx0snra3d387c3kosf7aqbd8dsfj8jdelrk51i05kborzx79owhwyke6e41u3w6hwxwuxwmwdqjbtbi5ueijzgecuhn04nzjxgnrr4qou2v84wxjs7ivqvctvtwbaoijrb80h8w49h9w2m1l4b1mt5eqh6ilqgfkkattt2l5ov436tqmfgpobjjxa9pwbc2ddlv29y8jk7yhbe0j41wtuhio9isa9glnasjd56tvexihm6tzewgdmdw7ugi1295wkt748qbjfhgrwx5s643f2o9sh23qnniilsqoyhd3m37mc7amor8ffpodr6r52pit9p0c06jqtbgdaz76qybheguhhh6l91icepe7kcn2h07twszazbdbrfhfg6boffkmnp8vaolb3lsxi95tdnre7b7rn80rigguu53vuqr
sc7ior2dwgnc3zdxltilu1jcnavkkj6twlwfj4b06gfccngdj7ypcgxupn980oj0peyx5rufitu0o9yykzo8wmp78xabbypr8cjw3ppyg2j4tzkthx1al3y30hbru0b7f4rwpv7y5gp87ig9bwdh1gl2zq76q6j1p28eq5myvdqj3b44m0h4l95725yeott723n9vpua370fu4fy43uugd1sn1rmeczfi0y44by9m71b7oyahfsgrir22bvjnhxk86djwblbfcwch3u7vkzpgbvyrj4jo7r36ssrcghuy2payp8au1wbi6dqtktiy5wuymboaw5bkrkxud1if2db75vviayh98rz12oyz989pttd4se860aijzxsfoo2xg58bv5t8hwz9c227hnvpy19950iqqgfraa4odktsmx3o67qzplcv56ql0e4hvso9k5ltmtug5to1ec9r9zajolew8je88j7v9g9du6jocnb17nmf4s1ol65kzzyckbidkv2hlidnvqyai69817o2n3dyxihdzk7q8kq8fxcne6hf7soiqcd9llm1ple6etwtzoc5vg6pd9iwm5q2ulwjyyck14dnuozopmomwhu99g5xa34wr6t4wjip03ie12lowegm00306nm6kueki2skrlmjocdb2a183f7xjglaitvp72j1hxdxu83uz9af8pi5jni47c0j4ajvkvsn3c1lcbl8snmszdnt1rxk7llb87xrnc70p6axzhuf3oxu1zvavefjgqemnizye4p2qxjty9wcmhwziggok8bbydov6xzxujujsrjtepu8btg0sc8uuo54333ix4ubwspbrs2wgns9ztcjezwypid38lx3szf37vz5w2cmjwhhvimh23o1vc3ltjkdxu48yh6dbdhpjevg6f7gth831gvofyhdsaax51r6deca6pmijc5zlbfgml5g5eizs54xt7oyapfhosut7zm1kb9cjvgi6q9347wuwyy30gxjp21cxi1qqdoczmh04vbznv6dh7ngjiuup80h1fgntgvor6xa8pl3ts6n70cpu7avgg4wnozdglh2d9j3502qedv9jm5hsoxhswm3hemkz7579wydfjle8byj377tjeum9nxb9iwkdyiatycf3wb89xhborr3xnsp2piqy9hkdfehnyaxyvph0sxc0mvgg08is7ceg9i09rg276lk8e1dvjjm29bw52k68uek0t8cuv0i8ztvnvd9fgu58526ml0hhjgx770epuawzwwp9wiyo1r69zkmbnlemteqkyvntyb == \1\4\j\f\z\5\b\e\i\b\4\8\b\g\o\q\t\f\r\l\q\2\7\0\n\3\x\i\0\t\h\g\x\l\s\x\6\w\b\f\u\b\v\2\z\v\3\w\b\2\1\q\d\5\3\s\2\u\q\9\8\5\0\r\5\7\w\k\3\x\w\3\q\3\w\3\2\i\z\f\d\7\z\b\b\6\w\v\8\x\f\2\b\d\a\3\9\e\t\9\n\7\v\u\b\s\d\9\j\e\b\c\c\8\g\5\f\9\i\4\m\e\7\l\j\6\x\c\n\v\e\a\k\o\8\f\w\h\k\p\w\z\0\p\t\v\s\h\9\c\o\j\t\y\n\s\7\b\5\c\2\j\y\g\j\e\t\m\0\0\v\9\r\w\r\4\x\i\0\7\p\k\8\0\4\4\w\2\p\q\a\n\y\6\h\b\f\a\a\a\9\5\8\4\d\w\j\l\1\c\p\1\c\c\a\e\j\x\s\w\j\c\1\1\s\p\3\h\i\u\f\6\7\f\f\b\f\o\g\5\r\2\s\q\a\m\x\d\k\k\2\o\w\t\0\r\j\x\q\6\q\w\j\q\q\m\k\h\f\0\w\o\j\d\f\6\l\l\w\2\9\d\k\o\6\2\q\n\4\r\r\o\n\2\v\n\y\7\m\x\o\7\q\u\y\k\e\0\n\n\a\6\j\d\5\d\m\t\d\0\w\p\x\1\j\i\t\y\4\4\b\0\q\1\x\5\6\b\j\l\q\y\h\y\f\4\m\x\z\b\i\6\c\g\z\l\5\5\h\t\a\t\y\y\w\y\t\k\d\2\i\z\3\f\3\i\g\x\c\n\k\g\s\f\p\t\c\s\y\o\w\h\n\8\m\n\j\6\t\a\u\0\1\e\0\0\b\n\h\v\2\7\g\h\w\1\x\v\a\q\c\9\p\h\j\b\c\j\5\6\a\v\6\b\r\x\r\t\n\b\1\4\0\e\k\e\f\i\q\k\l\d\h\y\6\7\m\8\w\v\3\5\f\r\0\l\a\x\f\8\k\k\a\0\a\4\x\w\2\d\j\2\s\c\1\q\o\7\w\o\6\c\2\i\t\f\4\i\j\b\a\m\5\u\2\x\8\q\y\x\z\g\6\e\j\4\n\p\r\g\t\z\7\t\a\k\b\s\z\0\o\j\k\8\r\d\d\n\6\x\l\8\8\7\j\9\3\u\n\g\5\a\d\r\b\9\t\4\f\f\s\k\x\o\h\z\m\j\l\k\t\t\h\a\0\n\c\d\7\g\w\k\u\m\m\x\l\g\l\d\o\m\e\0\p\a\0\q\7\z\k\b\g\t\c\u\j\k\u\w\g\1\u\o\2\i\y\6\f\c\c\t\o\c\e\x\6\9\8\y\z\1\f\i\m\a\v\9\7\h\i\b\k\r\8\v\n\1\p\c\n\s\u\h\d\t\f\j\2\6\l\q\6\9\h\q\l\o\a\c\a\l\g\s\k\p\5\4\l\8\s\h\5\2\a\1\c\c\7\6\y\1\v\6\a\g\i\k\8\m\e\r\f\l\h\d\w\t\6\h\n\3\q\n\g\w\o\o\z\s\0\o\o\4\i\e\a\m\5\q\g\5\p\0\j\2\3\r\o\y\b\1\q\c\0\h\r\3\f\e\r\u\7\k\3\v\t\4\1\b\c\c\0\9\u\i\4\g\l\v\y\k\p\q\g\4\q\0\p\s\t\e\v\f\z\f\d\4\y\h\a\9\y\y\0\u\e\3\t\4\f\z\9\2\s\v\9\t\x\e\a\1\h\m\1\0\g\h\k\9\y\5\4\e\d\a\w\i\d\8\r\8\3\a\b\3\9\8\9\b\d\0\k\a\r\i\f\3\k\t\w\i\8\w\a\1\g\s\p\g\s\e\k\e\a\o\3\a\z\0\4\0\h\f\7\6\h\l\i\p\5\1\v\j\w\w\d\1\z\j\b\i\4\8\5\h\4\o\j\5\1\5\w\f\h\m\i\b\5\r\w\z\6\u\f\f\4\u\y\g\4\x\4\f\s\1\u\z\7\3\t\m\j\9\b\g\9\c\f\3\b\6\g\m\o\s\5\j\h\b\6\7\5\o\1\t\v\2\n\o\m\x\1\8\d\6\q\x\h\3\0\1\m\d\n\j\j\0\u\t\o\e\y\4\8\e\v\b\0\y\n\n\u\q\j\x\r\s\6\w\y\2\q\p\p\1\5\q\m\r\j\3\x\m\o\n\x\a\s\e\5\u\z\9\i\o\t\r\9\q\n\6\l\2\d\c\l\m\z\1\x\r\8\v\l\x\1\1\n\w\w\c\0\o\0\7\p\0\g\9\b\l\9\u\7\e\l\m\p\x\x\r\s\x\h\u\f\y\e\r\a\e\y\0\2\q\7\v\y\t\0\d\h\r\t\o\e\l\m\l\g\l\s\9\q\g\a\c\e\v\s\m\c\p\g\u\z\q\h\e\a\
x\n\e\9\3\e\p\s\v\h\y\0\f\e\a\9\a\i\m\p\z\n\x\9\d\2\z\m\u\p\d\3\y\2\m\l\w\5\g\d\u\z\c\j\y\3\4\c\a\7\7\z\t\q\v\v\q\0\i\8\0\b\c\7\h\f\q\6\s\h\w\g\q\0\r\y\j\a\m\1\2\h\h\o\r\t\r\8\6\8\u\y\g\3\l\s\v\3\x\j\6\b\0\o\a\2\o\g\7\p\e\6\k\t\q\f\b\t\3\a\o\k\a\4\x\i\y\2\f\g\n\y\v\j\3\3\c\g\p\2\j\r\u\z\t\t\v\c\k\o\q\g\a\6\q\r\o\n\u\6\1\8\z\9\4\3\e\d\o\h\5\e\s\1\g\y\x\r\z\3\k\i\d\d\n\w\c\f\c\v\z\j\a\a\t\z\v\i\6\2\g\1\5\3\b\u\s\i\7\n\i\h\i\f\4\i\1\j\u\z\0\k\v\4\q\b\g\l\m\4\v\5\6\y\7\0\5\n\w\e\5\o\5\w\n\c\y\f\7\x\1\c\c\f\a\6\9\i\9\p\4\b\a\x\l\b\u\n\5\i\e\j\2\v\u\s\q\i\8\c\e\r\n\y\1\l\6\e\6\n\b\7\v\f\b\w\j\9\r\n\r\8\k\r\3\9\4\z\k\u\w\f\q\a\c\y\y\z\d\8\j\a\o\x\k\l\6\y\i\e\f\b\r\8\a\z\h\r\1\h\c\y\2\i\c\8\o\b\x\a\u\1\g\z\9\z\y\i\6\j\a\h\3\q\n\k\a\q\k\b\l\j\q\o\h\e\z\8\v\k\x\6\0\o\2\c\p\l\d\6\w\r\4\m\5\4\w\6\1\w\2\i\p\1\g\7\2\f\3\q\x\l\p\w\p\i\o\i\c\a\u\0\d\f\k\k\a\x\z\9\f\1\q\x\7\b\o\1\m\3\y\w\i\1\0\o\p\x\p\r\d\8\6\5\d\5\y\w\n\y\c\e\s\q\y\1\n\h\f\6\h\q\b\g\q\r\n\h\b\a\m\c\w\h\8\2\7\9\8\8\c\o\o\l\n\o\l\9\l\f\b\l\b\x\6\5\d\2\0\w\d\d\3\i\t\z\z\y\m\8\k\9\s\6\c\y\e\p\w\o\5\l\u\0\b\w\g\q\9\x\l\d\l\2\i\x\m\n\k\8\p\k\u\m\1\5\v\r\v\r\d\j\5\8\r\x\e\w\h\g\k\9\y\x\f\p\a\t\o\j\9\t\7\k\1\a\n\a\i\7\w\1\9\o\l\5\s\1\t\p\z\y\9\2\1\y\e\x\b\z\q\t\h\3\n\c\4\3\c\0\w\0\n\5\q\u\1\7\7\3\k\m\u\r\p\t\3\g\3\h\s\b\4\1\h\w\e\u\r\5\l\k\z\m\j\8\r\2\e\d\q\q\9\e\5\o\8\f\c\2\s\i\h\8\h\j\z\6\x\k\g\o\l\z\e\p\3\q\e\l\y\b\w\y\d\q\j\1\y\f\w\g\d\s\3\w\6\m\n\o\g\x\e\3\g\q\3\8\u\6\7\a\w\z\8\8\7\v\r\3\7\p\5\d\f\n\w\0\3\r\4\7\2\a\u\6\h\k\w\v\7\f\7\9\7\s\9\i\2\q\l\1\z\b\i\6\t\q\g\f\e\t\e\3\k\i\c\h\s\6\8\b\t\i\2\s\9\m\w\5\r\l\y\f\j\d\o\a\k\p\v\z\w\0\f\e\5\4\i\0\t\p\v\0\n\f\a\t\5\t\c\1\w\7\a\6\9\w\r\7\0\d\b\1\f\z\2\i\s\6\k\7\3\3\d\r\h\r\r\d\t\t\0\l\b\9\6\p\a\6\m\m\8\q\m\s\y\c\7\v\q\7\s\e\b\d\0\8\8\l\5\r\9\2\e\o\4\3\3\b\g\b\9\3\5\v\h\6\d\8\l\b\b\a\p\3\5\i\r\9\2\r\v\t\b\2\i\0\7\x\w\c\c\9\8\3\z\f\i\u\5\l\g\6\t\h\u\3\l\s\m\t\a\b\v\f\a\d\p\u\i\z\2\z\c\g\9\a\z\x\d\3\1\c\7\a\u\f\q\o\i\a\4\l\9\g\d\w\y\z\r\7\s\u\x\9\4\g\d\9\1\s\1\v\l\1\d\o\v\i\j\r\6\6\7\h\v\m\j\y\8\8\o\l\x\8\4\o\b\1\5\x\x\u\t\j\t\7\j\k\2\8\7\5\k\j\6\i\4\2\y\7\w\v\x\o\4\l\5\h\k\o\x\d\j\k\r\r\o\2\e\g\d\k\8\t\s\z\y\u\0\g\m\x\w\c\j\d\d\n\2\e\s\d\t\3\u\o\r\6\f\i\d\1\o\6\0\3\w\c\m\1\s\8\6\d\t\w\4\7\p\s\6\5\b\f\l\v\r\c\k\a\9\w\j\q\u\2\n\3\h\q\a\l\h\1\o\v\4\8\j\x\m\f\d\v\a\r\a\6\f\s\u\7\p\1\6\5\9\n\c\r\v\6\w\s\k\u\1\t\f\i\t\3\d\f\i\u\d\2\1\1\m\5\g\9\t\l\7\1\y\v\x\m\7\z\7\o\x\k\n\z\n\e\q\o\l\g\m\u\8\b\i\l\w\d\x\d\y\9\b\v\i\l\t\y\r\9\w\z\j\9\z\r\l\0\g\u\7\d\z\q\f\z\q\x\0\s\n\r\a\3\d\3\8\7\c\3\k\o\s\f\7\a\q\b\d\8\d\s\f\j\8\j\d\e\l\r\k\5\1\i\0\5\k\b\o\r\z\x\7\9\o\w\h\w\y\k\e\6\e\4\1\u\3\w\6\h\w\x\w\u\x\w\m\w\d\q\j\b\t\b\i\5\u\e\i\j\z\g\e\c\u\h\n\0\4\n\z\j\x\g\n\r\r\4\q\o\u\2\v\8\4\w\x\j\s\7\i\v\q\v\c\t\v\t\w\b\a\o\i\j\r\b\8\0\h\8\w\4\9\h\9\w\2\m\1\l\4\b\1\m\t\5\e\q\h\6\i\l\q\g\f\k\k\a\t\t\t\2\l\5\o\v\4\3\6\t\q\m\f\g\p\o\b\j\j\x\a\9\p\w\b\c\2\d\d\l\v\2\9\y\8\j\k\7\y\h\b\e\0\j\4\1\w\t\u\h\i\o\9\i\s\a\9\g\l\n\a\s\j\d\5\6\t\v\e\x\i\h\m\6\t\z\e\w\g\d\m\d\w\7\u\g\i\1\2\9\5\w\k\t\7\4\8\q\b\j\f\h\g\r\w\x\5\s\6\4\3\f\2\o\9\s\h\2\3\q\n\n\i\i\l\s\q\o\y\h\d\3\m\3\7\m\c\7\a\m\o\r\8\f\f\p\o\d\r\6\r\5\2\p\i\t\9\p\0\c\0\6\j\q\t\b\g\d\a\z\7\6\q\y\b\h\e\g\u\h\h\h\6\l\9\1\i\c\e\p\e\7\k\c\n\2\h\0\7\t\w\s\z\a\z\b\d\b\r\f\h\f\g\6\b\o\f\f\k\m\n\p\8\v\a\o\l\b\3\l\s\x\i\9\5\t\d\n\r\e\7\b\7\r\n\8\0\r\i\g\g\u\u\5\3\v\u\q\r\s\c\7\i\o\r\2\d\w\g\n\c\3\z\d\x\l\t\i\l\u\1\j\c\n\a\v\k\k\j\6\t\w\l\w\f\j\4\b\0\6\g\f\c\c\n\g\d\j\7\y\p\c\g\x\u\p\n\9\8\0\o\j\0\p\e\y\x\5\r\u\f\i\t\u\0\o\9\y\y\k\z\o\8\w\m\p\7\8\x\a\b\b\y\p\r\8\c\j\w\3\p\p\y\g\2\j\4\t\z\k
\t\h\x\1\a\l\3\y\3\0\h\b\r\u\0\b\7\f\4\r\w\p\v\7\y\5\g\p\8\7\i\g\9\b\w\d\h\1\g\l\2\z\q\7\6\q\6\j\1\p\2\8\e\q\5\m\y\v\d\q\j\3\b\4\4\m\0\h\4\l\9\5\7\2\5\y\e\o\t\t\7\2\3\n\9\v\p\u\a\3\7\0\f\u\4\f\y\4\3\u\u\g\d\1\s\n\1\r\m\e\c\z\f\i\0\y\4\4\b\y\9\m\7\1\b\7\o\y\a\h\f\s\g\r\i\r\2\2\b\v\j\n\h\x\k\8\6\d\j\w\b\l\b\f\c\w\c\h\3\u\7\v\k\z\p\g\b\v\y\r\j\4\j\o\7\r\3\6\s\s\r\c\g\h\u\y\2\p\a\y\p\8\a\u\1\w\b\i\6\d\q\t\k\t\i\y\5\w\u\y\m\b\o\a\w\5\b\k\r\k\x\u\d\1\i\f\2\d\b\7\5\v\v\i\a\y\h\9\8\r\z\1\2\o\y\z\9\8\9\p\t\t\d\4\s\e\8\6\0\a\i\j\z\x\s\f\o\o\2\x\g\5\8\b\v\5\t\8\h\w\z\9\c\2\2\7\h\n\v\p\y\1\9\9\5\0\i\q\q\g\f\r\a\a\4\o\d\k\t\s\m\x\3\o\6\7\q\z\p\l\c\v\5\6\q\l\0\e\4\h\v\s\o\9\k\5\l\t\m\t\u\g\5\t\o\1\e\c\9\r\9\z\a\j\o\l\e\w\8\j\e\8\8\j\7\v\9\g\9\d\u\6\j\o\c\n\b\1\7\n\m\f\4\s\1\o\l\6\5\k\z\z\y\c\k\b\i\d\k\v\2\h\l\i\d\n\v\q\y\a\i\6\9\8\1\7\o\2\n\3\d\y\x\i\h\d\z\k\7\q\8\k\q\8\f\x\c\n\e\6\h\f\7\s\o\i\q\c\d\9\l\l\m\1\p\l\e\6\e\t\w\t\z\o\c\5\v\g\6\p\d\9\i\w\m\5\q\2\u\l\w\j\y\y\c\k\1\4\d\n\u\o\z\o\p\m\o\m\w\h\u\9\9\g\5\x\a\3\4\w\r\6\t\4\w\j\i\p\0\3\i\e\1\2\l\o\w\e\g\m\0\0\3\0\6\n\m\6\k\u\e\k\i\2\s\k\r\l\m\j\o\c\d\b\2\a\1\8\3\f\7\x\j\g\l\a\i\t\v\p\7\2\j\1\h\x\d\x\u\8\3\u\z\9\a\f\8\p\i\5\j\n\i\4\7\c\0\j\4\a\j\v\k\v\s\n\3\c\1\l\c\b\l\8\s\n\m\s\z\d\n\t\1\r\x\k\7\l\l\b\8\7\x\r\n\c\7\0\p\6\a\x\z\h\u\f\3\o\x\u\1\z\v\a\v\e\f\j\g\q\e\m\n\i\z\y\e\4\p\2\q\x\j\t\y\9\w\c\m\h\w\z\i\g\g\o\k\8\b\b\y\d\o\v\6\x\z\x\u\j\u\j\s\r\j\t\e\p\u\8\b\t\g\0\s\c\8\u\u\o\5\4\3\3\3\i\x\4\u\b\w\s\p\b\r\s\2\w\g\n\s\9\z\t\c\j\e\z\w\y\p\i\d\3\8\l\x\3\s\z\f\3\7\v\z\5\w\2\c\m\j\w\h\h\v\i\m\h\2\3\o\1\v\c\3\l\t\j\k\d\x\u\4\8\y\h\6\d\b\d\h\p\j\e\v\g\6\f\7\g\t\h\8\3\1\g\v\o\f\y\h\d\s\a\a\x\5\1\r\6\d\e\c\a\6\p\m\i\j\c\5\z\l\b\f\g\m\l\5\g\5\e\i\z\s\5\4\x\t\7\o\y\a\p\f\h\o\s\u\t\7\z\m\1\k\b\9\c\j\v\g\i\6\q\9\3\4\7\w\u\w\y\y\3\0\g\x\j\p\2\1\c\x\i\1\q\q\d\o\c\z\m\h\0\4\v\b\z\n\v\6\d\h\7\n\g\j\i\u\u\p\8\0\h\1\f\g\n\t\g\v\o\r\6\x\a\8\p\l\3\t\s\6\n\7\0\c\p\u\7\a\v\g\g\4\w\n\o\z\d\g\l\h\2\d\9\j\3\5\0\2\q\e\d\v\9\j\m\5\h\s\o\x\h\s\w\m\3\h\e\m\k\z\7\5\7\9\w\y\d\f\j\l\e\8\b\y\j\3\7\7\t\j\e\u\m\9\n\x\b\9\i\w\k\d\y\i\a\t\y\c\f\3\w\b\8\9\x\h\b\o\r\r\3\x\n\s\p\2\p\i\q\y\9\h\k\d\f\e\h\n\y\a\x\y\v\p\h\0\s\x\c\0\m\v\g\g\0\8\i\s\7\c\e\g\9\i\0\9\r\g\2\7\6\l\k\8\e\1\d\v\j\j\m\2\9\b\w\5\2\k\6\8\u\e\k\0\t\8\c\u\v\0\i\8\z\t\v\n\v\d\9\f\g\u\5\8\5\2\6\m\l\0\h\h\j\g\x\7\7\0\e\p\u\a\w\z\w\w\p\9\w\i\y\o\1\r\6\9\z\k\m\b\n\l\e\m\t\e\q\k\y\v\n\t\y\b ]] 00:38:23.985 00:38:23.985 real 0m1.474s 00:38:23.985 user 0m0.889s 00:38:23.985 sys 0m0.430s 00:38:23.985 12:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:23.985 12:20:22 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:38:23.985 ************************************ 00:38:23.985 END TEST dd_rw_offset 00:38:23.986 ************************************ 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:38:23.986 12:20:22 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:23.986 { 00:38:23.986 "subsystems": [ 00:38:23.986 { 00:38:23.986 "subsystem": "bdev", 00:38:23.986 "config": [ 00:38:23.986 { 00:38:23.986 "params": { 00:38:23.986 "trtype": "pcie", 00:38:23.986 "traddr": "0000:00:10.0", 00:38:23.986 "name": "Nvme0" 00:38:23.986 }, 00:38:23.986 "method": "bdev_nvme_attach_controller" 00:38:23.986 }, 00:38:23.986 { 00:38:23.986 "method": "bdev_wait_for_examine" 00:38:23.986 } 00:38:23.986 ] 00:38:23.986 } 00:38:23.986 ] 00:38:23.986 } 00:38:23.986 [2024-07-21 12:20:22.763918] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:23.986 [2024-07-21 12:20:22.764297] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175408 ] 00:38:24.243 [2024-07-21 12:20:22.931588] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:24.243 [2024-07-21 12:20:22.995664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.759  Copying: 1024/1024 [kB] (average 1000 MBps) 00:38:24.759 00:38:24.759 12:20:23 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:24.759 ************************************ 00:38:24.759 END TEST spdk_dd_basic_rw 00:38:24.759 ************************************ 00:38:24.759 00:38:24.759 real 0m19.588s 00:38:24.759 user 0m12.515s 00:38:24.759 sys 0m5.234s 00:38:24.759 12:20:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:24.759 12:20:23 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:38:24.759 12:20:23 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:38:24.759 12:20:23 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:24.759 12:20:23 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:24.759 12:20:23 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:24.759 ************************************ 00:38:24.759 START TEST spdk_dd_posix 00:38:24.759 ************************************ 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:38:24.759 * Looking for test storage... 
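A minimal standalone sketch of the clear_nvme step from the basic_rw cleanup above; the binary path, block size, and PCIe address are copied from this log and should be treated as placeholders for other setups (the harness itself builds the JSON via gen_conf and passes it on an anonymous fd):

  # zero the first 1 MiB of the Nvme0n1 bdev, attaching the controller through an inline JSON config
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --count=1 --ob=Nvme0n1 \
    --json <(printf '%s' '{
      "subsystems": [
        { "subsystem": "bdev",
          "config": [
            { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller" },
            { "method": "bdev_wait_for_examine" }
          ] }
      ] }')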
00:38:24.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:38:24.759 * First test run, using AIO 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:24.759 ************************************ 00:38:24.759 START TEST dd_flag_append 00:38:24.759 ************************************ 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1121 -- # append 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=ywg6jiagcc7e10l488g9bcpu85d6czem 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=8e63oqy4f0a49zp8lte2igtlwbfsj5t6 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s ywg6jiagcc7e10l488g9bcpu85d6czem 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 8e63oqy4f0a49zp8lte2igtlwbfsj5t6 00:38:24.759 12:20:23 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:38:25.017 [2024-07-21 12:20:23.647343] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
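The dd_flag_append run above reduces to the pattern below; the file names and the bare spdk_dd name are illustrative, while the flags and the expected concatenation match what the log checks:

  # write dump1, append dump0 with --oflag=append, then expect the destination to hold dump1 followed by dump0
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ "$(cat dd.dump1)" == "${dump1}${dump0}" ]]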
00:38:25.017 [2024-07-21 12:20:23.647595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175478 ] 00:38:25.017 [2024-07-21 12:20:23.800501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.017 [2024-07-21 12:20:23.865149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:25.533  Copying: 32/32 [B] (average 31 kBps) 00:38:25.533 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 8e63oqy4f0a49zp8lte2igtlwbfsj5t6ywg6jiagcc7e10l488g9bcpu85d6czem == \8\e\6\3\o\q\y\4\f\0\a\4\9\z\p\8\l\t\e\2\i\g\t\l\w\b\f\s\j\5\t\6\y\w\g\6\j\i\a\g\c\c\7\e\1\0\l\4\8\8\g\9\b\c\p\u\8\5\d\6\c\z\e\m ]] 00:38:25.533 00:38:25.533 real 0m0.736s 00:38:25.533 user 0m0.397s 00:38:25.533 sys 0m0.203s 00:38:25.533 ************************************ 00:38:25.533 END TEST dd_flag_append 00:38:25.533 ************************************ 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:25.533 ************************************ 00:38:25.533 START TEST dd_flag_directory 00:38:25.533 ************************************ 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1121 -- # directory 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:25.533 12:20:24 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:25.791 [2024-07-21 12:20:24.437545] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:25.791 [2024-07-21 12:20:24.437732] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175517 ] 00:38:25.791 [2024-07-21 12:20:24.586332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:25.791 [2024-07-21 12:20:24.655793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.049 [2024-07-21 12:20:24.769458] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:26.049 [2024-07-21 12:20:24.769574] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:26.049 [2024-07-21 12:20:24.769616] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:26.307 [2024-07-21 12:20:24.942544] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@648 -- # local es=0 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:26.307 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:26.307 [2024-07-21 12:20:25.131208] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:26.307 [2024-07-21 12:20:25.131447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175532 ] 00:38:26.565 [2024-07-21 12:20:25.296791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:26.565 [2024-07-21 12:20:25.364160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.824 [2024-07-21 12:20:25.476933] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:26.824 [2024-07-21 12:20:25.477041] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:26.824 [2024-07-21 12:20:25.477090] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:26.824 [2024-07-21 12:20:25.648566] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@651 -- # es=236 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@660 -- # es=108 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # case "$es" in 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@668 -- # es=1 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:27.082 00:38:27.082 real 0m1.391s 00:38:27.082 user 0m0.680s 00:38:27.082 sys 0m0.512s 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:27.082 ************************************ 00:38:27.082 END TEST dd_flag_directory 00:38:27.082 ************************************ 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:27.082 ************************************ 00:38:27.082 START TEST dd_flag_nofollow 00:38:27.082 
************************************ 00:38:27.082 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1121 -- # nofollow 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:27.083 12:20:25 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:27.083 [2024-07-21 12:20:25.885980] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:27.083 [2024-07-21 12:20:25.886172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175571 ] 00:38:27.341 [2024-07-21 12:20:26.034355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.341 [2024-07-21 12:20:26.103553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.600 [2024-07-21 12:20:26.218039] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:27.600 [2024-07-21 12:20:26.218150] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:27.600 [2024-07-21 12:20:26.218189] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:27.600 [2024-07-21 12:20:26.389862] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@648 -- # local es=0 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.859 12:20:26 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:27.859 12:20:26 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:27.859 [2024-07-21 12:20:26.573844] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:27.859 [2024-07-21 12:20:26.574089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175582 ] 00:38:28.118 [2024-07-21 12:20:26.734728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.118 [2024-07-21 12:20:26.801308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:28.118 [2024-07-21 12:20:26.915248] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:28.118 [2024-07-21 12:20:26.915357] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:28.118 [2024-07-21 12:20:26.915403] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:28.377 [2024-07-21 12:20:27.088424] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@651 -- # es=216 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@660 -- # es=88 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # case "$es" in 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@668 -- # es=1 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:28.377 12:20:27 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:28.636 [2024-07-21 12:20:27.276419] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:28.636 [2024-07-21 12:20:27.276663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175595 ] 00:38:28.636 [2024-07-21 12:20:27.441040] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:28.894 [2024-07-21 12:20:27.515759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:29.153  Copying: 512/512 [B] (average 500 kBps) 00:38:29.153 00:38:29.153 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 9tstql4gsgv2a1ngyedj9m86jh2ahintio5141qyetrb3294ag74akgbhzv4t92hllswgzfb5cv0arsjf9kutj90oaacu9gievzc69cyvztqp613u7mlip7b4isvkg7wausqft31u8otnm9i8q9zfifq9atg62ehes5ecsuiai63dj5muptzeuuoterh98jb33wa99c7m6fzlrw10ezsqvu28xsn20vemaenrr2okha5q5gp6wjdw123cng396r506db1mxk9z2k8p3qtm04aw0a0a44m3okfwttnz9yt9lel7l5jmt4mknyey6dkthr5855p5622lcv4qpgjk5mqem9aqaugae8tswsecfydaq6wfgxt0kmewhpk6nwa4l19xpr509nycfme5l30bm4wyfn4uy4hulxrn4k427eneqpma3qc1hp0cmd484nph27qtsxmwvgkrf1ijvjdvo21bm95jtp8yqbrcodyn76g9tcnaxl6z6ik03pafkw375o == \9\t\s\t\q\l\4\g\s\g\v\2\a\1\n\g\y\e\d\j\9\m\8\6\j\h\2\a\h\i\n\t\i\o\5\1\4\1\q\y\e\t\r\b\3\2\9\4\a\g\7\4\a\k\g\b\h\z\v\4\t\9\2\h\l\l\s\w\g\z\f\b\5\c\v\0\a\r\s\j\f\9\k\u\t\j\9\0\o\a\a\c\u\9\g\i\e\v\z\c\6\9\c\y\v\z\t\q\p\6\1\3\u\7\m\l\i\p\7\b\4\i\s\v\k\g\7\w\a\u\s\q\f\t\3\1\u\8\o\t\n\m\9\i\8\q\9\z\f\i\f\q\9\a\t\g\6\2\e\h\e\s\5\e\c\s\u\i\a\i\6\3\d\j\5\m\u\p\t\z\e\u\u\o\t\e\r\h\9\8\j\b\3\3\w\a\9\9\c\7\m\6\f\z\l\r\w\1\0\e\z\s\q\v\u\2\8\x\s\n\2\0\v\e\m\a\e\n\r\r\2\o\k\h\a\5\q\5\g\p\6\w\j\d\w\1\2\3\c\n\g\3\9\6\r\5\0\6\d\b\1\m\x\k\9\z\2\k\8\p\3\q\t\m\0\4\a\w\0\a\0\a\4\4\m\3\o\k\f\w\t\t\n\z\9\y\t\9\l\e\l\7\l\5\j\m\t\4\m\k\n\y\e\y\6\d\k\t\h\r\5\8\5\5\p\5\6\2\2\l\c\v\4\q\p\g\j\k\5\m\q\e\m\9\a\q\a\u\g\a\e\8\t\s\w\s\e\c\f\y\d\a\q\6\w\f\g\x\t\0\k\m\e\w\h\p\k\6\n\w\a\4\l\1\9\x\p\r\5\0\9\n\y\c\f\m\e\5\l\3\0\b\m\4\w\y\f\n\4\u\y\4\h\u\l\x\r\n\4\k\4\2\7\e\n\e\q\p\m\a\3\q\c\1\h\p\0\c\m\d\4\8\4\n\p\h\2\7\q\t\s\x\m\w\v\g\k\r\f\1\i\j\v\j\d\v\o\2\1\b\m\9\5\j\t\p\8\y\q\b\r\c\o\d\y\n\7\6\g\9\t\c\n\a\x\l\6\z\6\i\k\0\3\p\a\f\k\w\3\7\5\o ]] 00:38:29.153 00:38:29.153 real 0m2.171s 00:38:29.153 user 0m1.118s 00:38:29.153 sys 0m0.705s 00:38:29.153 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:29.153 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:38:29.153 ************************************ 00:38:29.153 END TEST dd_flag_nofollow 00:38:29.153 ************************************ 00:38:29.411 12:20:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:38:29.411 12:20:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:29.411 12:20:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:29.412 ************************************ 00:38:29.412 START TEST dd_flag_noatime 00:38:29.412 ************************************ 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1121 -- # noatime 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 
-- # gen_bytes 512 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1721564427 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1721564427 00:38:29.412 12:20:28 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:38:30.346 12:20:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:30.347 [2024-07-21 12:20:29.132988] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:30.347 [2024-07-21 12:20:29.133227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175647 ] 00:38:30.605 [2024-07-21 12:20:29.285274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:30.605 [2024-07-21 12:20:29.363520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.122  Copying: 512/512 [B] (average 500 kBps) 00:38:31.122 00:38:31.122 12:20:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:31.122 12:20:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1721564427 )) 00:38:31.122 12:20:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:31.122 12:20:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1721564427 )) 00:38:31.122 12:20:29 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:31.122 [2024-07-21 12:20:29.901818] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
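Schematically, the dd_flag_noatime run above performs the check sketched below; file names are illustrative and the real test also records the destination file's atime:

  # the source atime must not advance when the copy reads it with --iflag=noatime
  atime_before=$(stat --printf=%X dd.dump0)
  sleep 1
  spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( $(stat --printf=%X dd.dump0) == atime_before ))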
00:38:31.122 [2024-07-21 12:20:29.902056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175659 ] 00:38:31.381 [2024-07-21 12:20:30.067232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:31.381 [2024-07-21 12:20:30.141356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:31.897  Copying: 512/512 [B] (average 500 kBps) 00:38:31.897 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1721564430 )) 00:38:31.897 00:38:31.897 real 0m2.559s 00:38:31.897 user 0m0.785s 00:38:31.897 sys 0m0.491s 00:38:31.897 ************************************ 00:38:31.897 END TEST dd_flag_noatime 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:38:31.897 ************************************ 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:31.897 ************************************ 00:38:31.897 START TEST dd_flags_misc 00:38:31.897 ************************************ 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1121 -- # io 00:38:31.897 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:38:31.898 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:38:31.898 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:38:31.898 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:31.898 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:31.898 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:31.898 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:31.898 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:31.898 12:20:30 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:31.898 [2024-07-21 12:20:30.731485] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
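The dd_flags_misc section that starts above pairs every read flag with every write flag; a sketch of that loop, using the flag arrays printed in the log and illustrative file names:

  flags_ro=(direct nonblock)
  flags_rw=("${flags_ro[@]}" sync dsync)
  for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
      spdk_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
    done
  done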
00:38:31.898 [2024-07-21 12:20:30.731674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175694 ] 00:38:32.155 [2024-07-21 12:20:30.876991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.155 [2024-07-21 12:20:30.945836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.671  Copying: 512/512 [B] (average 500 kBps) 00:38:32.671 00:38:32.672 12:20:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jtmssd36c1wt83p4xrvvjccuiomv0ursspteq8xv1m8amhcm46ezyt2p5xq8wzdxmfol68y231dy49idt52y3rqguexmv7u3nbqgd2l5hin41dyh87ooe1agouenyzoxe494gs7hee9lffdwt3a5uh05z8trpzdlccolz1lclv8t48jpb7chn4fcfun44nw3wcnmibkg17aa9v703otev331x382wok2xkcajhcp4263lnzde86dvxrg5533m05i6eld7wktve4uz4495esayv6ves5rbk2rl4ux111tbnf2tjtphyjmn5ogpqw7u8yb8pkjstugqsgni36gicmx8p26comc3ldu8sdxrkv3gluvqfmg0jlipnqoqkwz1ewoppei7ymhrb3er82vep1hjxriymroims3ahzz3qiwd9u50hr40wa66jn9jhgaktzoa3xcpv7ktpvtf0j9ffpofkxw298rtfquqcie3649qmvcbk386n9ibpnwln68fpoo == \j\t\m\s\s\d\3\6\c\1\w\t\8\3\p\4\x\r\v\v\j\c\c\u\i\o\m\v\0\u\r\s\s\p\t\e\q\8\x\v\1\m\8\a\m\h\c\m\4\6\e\z\y\t\2\p\5\x\q\8\w\z\d\x\m\f\o\l\6\8\y\2\3\1\d\y\4\9\i\d\t\5\2\y\3\r\q\g\u\e\x\m\v\7\u\3\n\b\q\g\d\2\l\5\h\i\n\4\1\d\y\h\8\7\o\o\e\1\a\g\o\u\e\n\y\z\o\x\e\4\9\4\g\s\7\h\e\e\9\l\f\f\d\w\t\3\a\5\u\h\0\5\z\8\t\r\p\z\d\l\c\c\o\l\z\1\l\c\l\v\8\t\4\8\j\p\b\7\c\h\n\4\f\c\f\u\n\4\4\n\w\3\w\c\n\m\i\b\k\g\1\7\a\a\9\v\7\0\3\o\t\e\v\3\3\1\x\3\8\2\w\o\k\2\x\k\c\a\j\h\c\p\4\2\6\3\l\n\z\d\e\8\6\d\v\x\r\g\5\5\3\3\m\0\5\i\6\e\l\d\7\w\k\t\v\e\4\u\z\4\4\9\5\e\s\a\y\v\6\v\e\s\5\r\b\k\2\r\l\4\u\x\1\1\1\t\b\n\f\2\t\j\t\p\h\y\j\m\n\5\o\g\p\q\w\7\u\8\y\b\8\p\k\j\s\t\u\g\q\s\g\n\i\3\6\g\i\c\m\x\8\p\2\6\c\o\m\c\3\l\d\u\8\s\d\x\r\k\v\3\g\l\u\v\q\f\m\g\0\j\l\i\p\n\q\o\q\k\w\z\1\e\w\o\p\p\e\i\7\y\m\h\r\b\3\e\r\8\2\v\e\p\1\h\j\x\r\i\y\m\r\o\i\m\s\3\a\h\z\z\3\q\i\w\d\9\u\5\0\h\r\4\0\w\a\6\6\j\n\9\j\h\g\a\k\t\z\o\a\3\x\c\p\v\7\k\t\p\v\t\f\0\j\9\f\f\p\o\f\k\x\w\2\9\8\r\t\f\q\u\q\c\i\e\3\6\4\9\q\m\v\c\b\k\3\8\6\n\9\i\b\p\n\w\l\n\6\8\f\p\o\o ]] 00:38:32.672 12:20:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:32.672 12:20:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:32.672 [2024-07-21 12:20:31.470804] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:32.672 [2024-07-21 12:20:31.471054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175711 ] 00:38:32.929 [2024-07-21 12:20:31.641753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.929 [2024-07-21 12:20:31.714761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.458  Copying: 512/512 [B] (average 500 kBps) 00:38:33.458 00:38:33.458 12:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jtmssd36c1wt83p4xrvvjccuiomv0ursspteq8xv1m8amhcm46ezyt2p5xq8wzdxmfol68y231dy49idt52y3rqguexmv7u3nbqgd2l5hin41dyh87ooe1agouenyzoxe494gs7hee9lffdwt3a5uh05z8trpzdlccolz1lclv8t48jpb7chn4fcfun44nw3wcnmibkg17aa9v703otev331x382wok2xkcajhcp4263lnzde86dvxrg5533m05i6eld7wktve4uz4495esayv6ves5rbk2rl4ux111tbnf2tjtphyjmn5ogpqw7u8yb8pkjstugqsgni36gicmx8p26comc3ldu8sdxrkv3gluvqfmg0jlipnqoqkwz1ewoppei7ymhrb3er82vep1hjxriymroims3ahzz3qiwd9u50hr40wa66jn9jhgaktzoa3xcpv7ktpvtf0j9ffpofkxw298rtfquqcie3649qmvcbk386n9ibpnwln68fpoo == \j\t\m\s\s\d\3\6\c\1\w\t\8\3\p\4\x\r\v\v\j\c\c\u\i\o\m\v\0\u\r\s\s\p\t\e\q\8\x\v\1\m\8\a\m\h\c\m\4\6\e\z\y\t\2\p\5\x\q\8\w\z\d\x\m\f\o\l\6\8\y\2\3\1\d\y\4\9\i\d\t\5\2\y\3\r\q\g\u\e\x\m\v\7\u\3\n\b\q\g\d\2\l\5\h\i\n\4\1\d\y\h\8\7\o\o\e\1\a\g\o\u\e\n\y\z\o\x\e\4\9\4\g\s\7\h\e\e\9\l\f\f\d\w\t\3\a\5\u\h\0\5\z\8\t\r\p\z\d\l\c\c\o\l\z\1\l\c\l\v\8\t\4\8\j\p\b\7\c\h\n\4\f\c\f\u\n\4\4\n\w\3\w\c\n\m\i\b\k\g\1\7\a\a\9\v\7\0\3\o\t\e\v\3\3\1\x\3\8\2\w\o\k\2\x\k\c\a\j\h\c\p\4\2\6\3\l\n\z\d\e\8\6\d\v\x\r\g\5\5\3\3\m\0\5\i\6\e\l\d\7\w\k\t\v\e\4\u\z\4\4\9\5\e\s\a\y\v\6\v\e\s\5\r\b\k\2\r\l\4\u\x\1\1\1\t\b\n\f\2\t\j\t\p\h\y\j\m\n\5\o\g\p\q\w\7\u\8\y\b\8\p\k\j\s\t\u\g\q\s\g\n\i\3\6\g\i\c\m\x\8\p\2\6\c\o\m\c\3\l\d\u\8\s\d\x\r\k\v\3\g\l\u\v\q\f\m\g\0\j\l\i\p\n\q\o\q\k\w\z\1\e\w\o\p\p\e\i\7\y\m\h\r\b\3\e\r\8\2\v\e\p\1\h\j\x\r\i\y\m\r\o\i\m\s\3\a\h\z\z\3\q\i\w\d\9\u\5\0\h\r\4\0\w\a\6\6\j\n\9\j\h\g\a\k\t\z\o\a\3\x\c\p\v\7\k\t\p\v\t\f\0\j\9\f\f\p\o\f\k\x\w\2\9\8\r\t\f\q\u\q\c\i\e\3\6\4\9\q\m\v\c\b\k\3\8\6\n\9\i\b\p\n\w\l\n\6\8\f\p\o\o ]] 00:38:33.458 12:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:33.458 12:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:33.458 [2024-07-21 12:20:32.165409] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:33.458 [2024-07-21 12:20:32.165659] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175727 ] 00:38:33.715 [2024-07-21 12:20:32.329682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.715 [2024-07-21 12:20:32.397349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.973  Copying: 512/512 [B] (average 100 kBps) 00:38:33.973 00:38:33.973 12:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jtmssd36c1wt83p4xrvvjccuiomv0ursspteq8xv1m8amhcm46ezyt2p5xq8wzdxmfol68y231dy49idt52y3rqguexmv7u3nbqgd2l5hin41dyh87ooe1agouenyzoxe494gs7hee9lffdwt3a5uh05z8trpzdlccolz1lclv8t48jpb7chn4fcfun44nw3wcnmibkg17aa9v703otev331x382wok2xkcajhcp4263lnzde86dvxrg5533m05i6eld7wktve4uz4495esayv6ves5rbk2rl4ux111tbnf2tjtphyjmn5ogpqw7u8yb8pkjstugqsgni36gicmx8p26comc3ldu8sdxrkv3gluvqfmg0jlipnqoqkwz1ewoppei7ymhrb3er82vep1hjxriymroims3ahzz3qiwd9u50hr40wa66jn9jhgaktzoa3xcpv7ktpvtf0j9ffpofkxw298rtfquqcie3649qmvcbk386n9ibpnwln68fpoo == \j\t\m\s\s\d\3\6\c\1\w\t\8\3\p\4\x\r\v\v\j\c\c\u\i\o\m\v\0\u\r\s\s\p\t\e\q\8\x\v\1\m\8\a\m\h\c\m\4\6\e\z\y\t\2\p\5\x\q\8\w\z\d\x\m\f\o\l\6\8\y\2\3\1\d\y\4\9\i\d\t\5\2\y\3\r\q\g\u\e\x\m\v\7\u\3\n\b\q\g\d\2\l\5\h\i\n\4\1\d\y\h\8\7\o\o\e\1\a\g\o\u\e\n\y\z\o\x\e\4\9\4\g\s\7\h\e\e\9\l\f\f\d\w\t\3\a\5\u\h\0\5\z\8\t\r\p\z\d\l\c\c\o\l\z\1\l\c\l\v\8\t\4\8\j\p\b\7\c\h\n\4\f\c\f\u\n\4\4\n\w\3\w\c\n\m\i\b\k\g\1\7\a\a\9\v\7\0\3\o\t\e\v\3\3\1\x\3\8\2\w\o\k\2\x\k\c\a\j\h\c\p\4\2\6\3\l\n\z\d\e\8\6\d\v\x\r\g\5\5\3\3\m\0\5\i\6\e\l\d\7\w\k\t\v\e\4\u\z\4\4\9\5\e\s\a\y\v\6\v\e\s\5\r\b\k\2\r\l\4\u\x\1\1\1\t\b\n\f\2\t\j\t\p\h\y\j\m\n\5\o\g\p\q\w\7\u\8\y\b\8\p\k\j\s\t\u\g\q\s\g\n\i\3\6\g\i\c\m\x\8\p\2\6\c\o\m\c\3\l\d\u\8\s\d\x\r\k\v\3\g\l\u\v\q\f\m\g\0\j\l\i\p\n\q\o\q\k\w\z\1\e\w\o\p\p\e\i\7\y\m\h\r\b\3\e\r\8\2\v\e\p\1\h\j\x\r\i\y\m\r\o\i\m\s\3\a\h\z\z\3\q\i\w\d\9\u\5\0\h\r\4\0\w\a\6\6\j\n\9\j\h\g\a\k\t\z\o\a\3\x\c\p\v\7\k\t\p\v\t\f\0\j\9\f\f\p\o\f\k\x\w\2\9\8\r\t\f\q\u\q\c\i\e\3\6\4\9\q\m\v\c\b\k\3\8\6\n\9\i\b\p\n\w\l\n\6\8\f\p\o\o ]] 00:38:33.973 12:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:33.973 12:20:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:33.973 [2024-07-21 12:20:32.819570] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:33.973 [2024-07-21 12:20:32.819793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175735 ] 00:38:34.230 [2024-07-21 12:20:32.984492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.230 [2024-07-21 12:20:33.043350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.756  Copying: 512/512 [B] (average 250 kBps) 00:38:34.756 00:38:34.756 12:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ jtmssd36c1wt83p4xrvvjccuiomv0ursspteq8xv1m8amhcm46ezyt2p5xq8wzdxmfol68y231dy49idt52y3rqguexmv7u3nbqgd2l5hin41dyh87ooe1agouenyzoxe494gs7hee9lffdwt3a5uh05z8trpzdlccolz1lclv8t48jpb7chn4fcfun44nw3wcnmibkg17aa9v703otev331x382wok2xkcajhcp4263lnzde86dvxrg5533m05i6eld7wktve4uz4495esayv6ves5rbk2rl4ux111tbnf2tjtphyjmn5ogpqw7u8yb8pkjstugqsgni36gicmx8p26comc3ldu8sdxrkv3gluvqfmg0jlipnqoqkwz1ewoppei7ymhrb3er82vep1hjxriymroims3ahzz3qiwd9u50hr40wa66jn9jhgaktzoa3xcpv7ktpvtf0j9ffpofkxw298rtfquqcie3649qmvcbk386n9ibpnwln68fpoo == \j\t\m\s\s\d\3\6\c\1\w\t\8\3\p\4\x\r\v\v\j\c\c\u\i\o\m\v\0\u\r\s\s\p\t\e\q\8\x\v\1\m\8\a\m\h\c\m\4\6\e\z\y\t\2\p\5\x\q\8\w\z\d\x\m\f\o\l\6\8\y\2\3\1\d\y\4\9\i\d\t\5\2\y\3\r\q\g\u\e\x\m\v\7\u\3\n\b\q\g\d\2\l\5\h\i\n\4\1\d\y\h\8\7\o\o\e\1\a\g\o\u\e\n\y\z\o\x\e\4\9\4\g\s\7\h\e\e\9\l\f\f\d\w\t\3\a\5\u\h\0\5\z\8\t\r\p\z\d\l\c\c\o\l\z\1\l\c\l\v\8\t\4\8\j\p\b\7\c\h\n\4\f\c\f\u\n\4\4\n\w\3\w\c\n\m\i\b\k\g\1\7\a\a\9\v\7\0\3\o\t\e\v\3\3\1\x\3\8\2\w\o\k\2\x\k\c\a\j\h\c\p\4\2\6\3\l\n\z\d\e\8\6\d\v\x\r\g\5\5\3\3\m\0\5\i\6\e\l\d\7\w\k\t\v\e\4\u\z\4\4\9\5\e\s\a\y\v\6\v\e\s\5\r\b\k\2\r\l\4\u\x\1\1\1\t\b\n\f\2\t\j\t\p\h\y\j\m\n\5\o\g\p\q\w\7\u\8\y\b\8\p\k\j\s\t\u\g\q\s\g\n\i\3\6\g\i\c\m\x\8\p\2\6\c\o\m\c\3\l\d\u\8\s\d\x\r\k\v\3\g\l\u\v\q\f\m\g\0\j\l\i\p\n\q\o\q\k\w\z\1\e\w\o\p\p\e\i\7\y\m\h\r\b\3\e\r\8\2\v\e\p\1\h\j\x\r\i\y\m\r\o\i\m\s\3\a\h\z\z\3\q\i\w\d\9\u\5\0\h\r\4\0\w\a\6\6\j\n\9\j\h\g\a\k\t\z\o\a\3\x\c\p\v\7\k\t\p\v\t\f\0\j\9\f\f\p\o\f\k\x\w\2\9\8\r\t\f\q\u\q\c\i\e\3\6\4\9\q\m\v\c\b\k\3\8\6\n\9\i\b\p\n\w\l\n\6\8\f\p\o\o ]] 00:38:34.756 12:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:34.756 12:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:38:34.756 12:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:38:34.756 12:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:34.756 12:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:34.756 12:20:33 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:34.756 [2024-07-21 12:20:33.470738] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:34.756 [2024-07-21 12:20:33.471434] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175747 ] 00:38:35.013 [2024-07-21 12:20:33.637667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.013 [2024-07-21 12:20:33.699306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:35.270  Copying: 512/512 [B] (average 500 kBps) 00:38:35.270 00:38:35.270 12:20:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ oeasinnfkkc5ikpy7iytg5ve45lovw92vwc8ii7hxloi47pu0av5w5ucvdxwx1hwqw2a9jbv4ua0el0j6c9408qraycsa54eshuw0nxd9xufid0bin05boii64p5ivwcil539o0pqlhoz76wq0jgtownkkhm8mtgnux7zvv2d3zswme7ejssot4f13ra74as3h68sibviej8nvq0j8vqfrmoicaav3tte34qzx4as43sz2sd9oa012701e5cv4s4jxkxadtuke25hngefaexnc4472ibd5a6u8m6a9y23mb1un0qpj2ucpzwbd5sgtlzwf9kj3h7xct36pxjudhb4y6fy2du3znsatq4t9afsmgsawdmg2o4uquhjxcv1s6pkfs42y4e5tmn75pzlcl67mi9cw6jh1lc825zhuudq2z58lixaj3dht9leb9ttgp60tmaquecoesr6x4eunsywgk972gnjzdvmbiufvxzenyhvh3yfd4brh8dfxl9ddkr == \o\e\a\s\i\n\n\f\k\k\c\5\i\k\p\y\7\i\y\t\g\5\v\e\4\5\l\o\v\w\9\2\v\w\c\8\i\i\7\h\x\l\o\i\4\7\p\u\0\a\v\5\w\5\u\c\v\d\x\w\x\1\h\w\q\w\2\a\9\j\b\v\4\u\a\0\e\l\0\j\6\c\9\4\0\8\q\r\a\y\c\s\a\5\4\e\s\h\u\w\0\n\x\d\9\x\u\f\i\d\0\b\i\n\0\5\b\o\i\i\6\4\p\5\i\v\w\c\i\l\5\3\9\o\0\p\q\l\h\o\z\7\6\w\q\0\j\g\t\o\w\n\k\k\h\m\8\m\t\g\n\u\x\7\z\v\v\2\d\3\z\s\w\m\e\7\e\j\s\s\o\t\4\f\1\3\r\a\7\4\a\s\3\h\6\8\s\i\b\v\i\e\j\8\n\v\q\0\j\8\v\q\f\r\m\o\i\c\a\a\v\3\t\t\e\3\4\q\z\x\4\a\s\4\3\s\z\2\s\d\9\o\a\0\1\2\7\0\1\e\5\c\v\4\s\4\j\x\k\x\a\d\t\u\k\e\2\5\h\n\g\e\f\a\e\x\n\c\4\4\7\2\i\b\d\5\a\6\u\8\m\6\a\9\y\2\3\m\b\1\u\n\0\q\p\j\2\u\c\p\z\w\b\d\5\s\g\t\l\z\w\f\9\k\j\3\h\7\x\c\t\3\6\p\x\j\u\d\h\b\4\y\6\f\y\2\d\u\3\z\n\s\a\t\q\4\t\9\a\f\s\m\g\s\a\w\d\m\g\2\o\4\u\q\u\h\j\x\c\v\1\s\6\p\k\f\s\4\2\y\4\e\5\t\m\n\7\5\p\z\l\c\l\6\7\m\i\9\c\w\6\j\h\1\l\c\8\2\5\z\h\u\u\d\q\2\z\5\8\l\i\x\a\j\3\d\h\t\9\l\e\b\9\t\t\g\p\6\0\t\m\a\q\u\e\c\o\e\s\r\6\x\4\e\u\n\s\y\w\g\k\9\7\2\g\n\j\z\d\v\m\b\i\u\f\v\x\z\e\n\y\h\v\h\3\y\f\d\4\b\r\h\8\d\f\x\l\9\d\d\k\r ]] 00:38:35.270 12:20:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:35.270 12:20:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:35.270 [2024-07-21 12:20:34.105390] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:35.270 [2024-07-21 12:20:34.105629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175764 ] 00:38:35.527 [2024-07-21 12:20:34.255399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:35.527 [2024-07-21 12:20:34.310881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.092  Copying: 512/512 [B] (average 500 kBps) 00:38:36.092 00:38:36.092 12:20:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ oeasinnfkkc5ikpy7iytg5ve45lovw92vwc8ii7hxloi47pu0av5w5ucvdxwx1hwqw2a9jbv4ua0el0j6c9408qraycsa54eshuw0nxd9xufid0bin05boii64p5ivwcil539o0pqlhoz76wq0jgtownkkhm8mtgnux7zvv2d3zswme7ejssot4f13ra74as3h68sibviej8nvq0j8vqfrmoicaav3tte34qzx4as43sz2sd9oa012701e5cv4s4jxkxadtuke25hngefaexnc4472ibd5a6u8m6a9y23mb1un0qpj2ucpzwbd5sgtlzwf9kj3h7xct36pxjudhb4y6fy2du3znsatq4t9afsmgsawdmg2o4uquhjxcv1s6pkfs42y4e5tmn75pzlcl67mi9cw6jh1lc825zhuudq2z58lixaj3dht9leb9ttgp60tmaquecoesr6x4eunsywgk972gnjzdvmbiufvxzenyhvh3yfd4brh8dfxl9ddkr == \o\e\a\s\i\n\n\f\k\k\c\5\i\k\p\y\7\i\y\t\g\5\v\e\4\5\l\o\v\w\9\2\v\w\c\8\i\i\7\h\x\l\o\i\4\7\p\u\0\a\v\5\w\5\u\c\v\d\x\w\x\1\h\w\q\w\2\a\9\j\b\v\4\u\a\0\e\l\0\j\6\c\9\4\0\8\q\r\a\y\c\s\a\5\4\e\s\h\u\w\0\n\x\d\9\x\u\f\i\d\0\b\i\n\0\5\b\o\i\i\6\4\p\5\i\v\w\c\i\l\5\3\9\o\0\p\q\l\h\o\z\7\6\w\q\0\j\g\t\o\w\n\k\k\h\m\8\m\t\g\n\u\x\7\z\v\v\2\d\3\z\s\w\m\e\7\e\j\s\s\o\t\4\f\1\3\r\a\7\4\a\s\3\h\6\8\s\i\b\v\i\e\j\8\n\v\q\0\j\8\v\q\f\r\m\o\i\c\a\a\v\3\t\t\e\3\4\q\z\x\4\a\s\4\3\s\z\2\s\d\9\o\a\0\1\2\7\0\1\e\5\c\v\4\s\4\j\x\k\x\a\d\t\u\k\e\2\5\h\n\g\e\f\a\e\x\n\c\4\4\7\2\i\b\d\5\a\6\u\8\m\6\a\9\y\2\3\m\b\1\u\n\0\q\p\j\2\u\c\p\z\w\b\d\5\s\g\t\l\z\w\f\9\k\j\3\h\7\x\c\t\3\6\p\x\j\u\d\h\b\4\y\6\f\y\2\d\u\3\z\n\s\a\t\q\4\t\9\a\f\s\m\g\s\a\w\d\m\g\2\o\4\u\q\u\h\j\x\c\v\1\s\6\p\k\f\s\4\2\y\4\e\5\t\m\n\7\5\p\z\l\c\l\6\7\m\i\9\c\w\6\j\h\1\l\c\8\2\5\z\h\u\u\d\q\2\z\5\8\l\i\x\a\j\3\d\h\t\9\l\e\b\9\t\t\g\p\6\0\t\m\a\q\u\e\c\o\e\s\r\6\x\4\e\u\n\s\y\w\g\k\9\7\2\g\n\j\z\d\v\m\b\i\u\f\v\x\z\e\n\y\h\v\h\3\y\f\d\4\b\r\h\8\d\f\x\l\9\d\d\k\r ]] 00:38:36.092 12:20:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:36.092 12:20:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:36.092 [2024-07-21 12:20:34.729848] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:36.092 [2024-07-21 12:20:34.730116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175769 ] 00:38:36.092 [2024-07-21 12:20:34.898700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.349 [2024-07-21 12:20:34.980270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.607  Copying: 512/512 [B] (average 125 kBps) 00:38:36.607 00:38:36.607 12:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ oeasinnfkkc5ikpy7iytg5ve45lovw92vwc8ii7hxloi47pu0av5w5ucvdxwx1hwqw2a9jbv4ua0el0j6c9408qraycsa54eshuw0nxd9xufid0bin05boii64p5ivwcil539o0pqlhoz76wq0jgtownkkhm8mtgnux7zvv2d3zswme7ejssot4f13ra74as3h68sibviej8nvq0j8vqfrmoicaav3tte34qzx4as43sz2sd9oa012701e5cv4s4jxkxadtuke25hngefaexnc4472ibd5a6u8m6a9y23mb1un0qpj2ucpzwbd5sgtlzwf9kj3h7xct36pxjudhb4y6fy2du3znsatq4t9afsmgsawdmg2o4uquhjxcv1s6pkfs42y4e5tmn75pzlcl67mi9cw6jh1lc825zhuudq2z58lixaj3dht9leb9ttgp60tmaquecoesr6x4eunsywgk972gnjzdvmbiufvxzenyhvh3yfd4brh8dfxl9ddkr == \o\e\a\s\i\n\n\f\k\k\c\5\i\k\p\y\7\i\y\t\g\5\v\e\4\5\l\o\v\w\9\2\v\w\c\8\i\i\7\h\x\l\o\i\4\7\p\u\0\a\v\5\w\5\u\c\v\d\x\w\x\1\h\w\q\w\2\a\9\j\b\v\4\u\a\0\e\l\0\j\6\c\9\4\0\8\q\r\a\y\c\s\a\5\4\e\s\h\u\w\0\n\x\d\9\x\u\f\i\d\0\b\i\n\0\5\b\o\i\i\6\4\p\5\i\v\w\c\i\l\5\3\9\o\0\p\q\l\h\o\z\7\6\w\q\0\j\g\t\o\w\n\k\k\h\m\8\m\t\g\n\u\x\7\z\v\v\2\d\3\z\s\w\m\e\7\e\j\s\s\o\t\4\f\1\3\r\a\7\4\a\s\3\h\6\8\s\i\b\v\i\e\j\8\n\v\q\0\j\8\v\q\f\r\m\o\i\c\a\a\v\3\t\t\e\3\4\q\z\x\4\a\s\4\3\s\z\2\s\d\9\o\a\0\1\2\7\0\1\e\5\c\v\4\s\4\j\x\k\x\a\d\t\u\k\e\2\5\h\n\g\e\f\a\e\x\n\c\4\4\7\2\i\b\d\5\a\6\u\8\m\6\a\9\y\2\3\m\b\1\u\n\0\q\p\j\2\u\c\p\z\w\b\d\5\s\g\t\l\z\w\f\9\k\j\3\h\7\x\c\t\3\6\p\x\j\u\d\h\b\4\y\6\f\y\2\d\u\3\z\n\s\a\t\q\4\t\9\a\f\s\m\g\s\a\w\d\m\g\2\o\4\u\q\u\h\j\x\c\v\1\s\6\p\k\f\s\4\2\y\4\e\5\t\m\n\7\5\p\z\l\c\l\6\7\m\i\9\c\w\6\j\h\1\l\c\8\2\5\z\h\u\u\d\q\2\z\5\8\l\i\x\a\j\3\d\h\t\9\l\e\b\9\t\t\g\p\6\0\t\m\a\q\u\e\c\o\e\s\r\6\x\4\e\u\n\s\y\w\g\k\9\7\2\g\n\j\z\d\v\m\b\i\u\f\v\x\z\e\n\y\h\v\h\3\y\f\d\4\b\r\h\8\d\f\x\l\9\d\d\k\r ]] 00:38:36.607 12:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:36.607 12:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:36.607 [2024-07-21 12:20:35.381709] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:36.607 [2024-07-21 12:20:35.381897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175786 ] 00:38:36.864 [2024-07-21 12:20:35.528949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.864 [2024-07-21 12:20:35.582899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.122  Copying: 512/512 [B] (average 250 kBps) 00:38:37.122 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ oeasinnfkkc5ikpy7iytg5ve45lovw92vwc8ii7hxloi47pu0av5w5ucvdxwx1hwqw2a9jbv4ua0el0j6c9408qraycsa54eshuw0nxd9xufid0bin05boii64p5ivwcil539o0pqlhoz76wq0jgtownkkhm8mtgnux7zvv2d3zswme7ejssot4f13ra74as3h68sibviej8nvq0j8vqfrmoicaav3tte34qzx4as43sz2sd9oa012701e5cv4s4jxkxadtuke25hngefaexnc4472ibd5a6u8m6a9y23mb1un0qpj2ucpzwbd5sgtlzwf9kj3h7xct36pxjudhb4y6fy2du3znsatq4t9afsmgsawdmg2o4uquhjxcv1s6pkfs42y4e5tmn75pzlcl67mi9cw6jh1lc825zhuudq2z58lixaj3dht9leb9ttgp60tmaquecoesr6x4eunsywgk972gnjzdvmbiufvxzenyhvh3yfd4brh8dfxl9ddkr == \o\e\a\s\i\n\n\f\k\k\c\5\i\k\p\y\7\i\y\t\g\5\v\e\4\5\l\o\v\w\9\2\v\w\c\8\i\i\7\h\x\l\o\i\4\7\p\u\0\a\v\5\w\5\u\c\v\d\x\w\x\1\h\w\q\w\2\a\9\j\b\v\4\u\a\0\e\l\0\j\6\c\9\4\0\8\q\r\a\y\c\s\a\5\4\e\s\h\u\w\0\n\x\d\9\x\u\f\i\d\0\b\i\n\0\5\b\o\i\i\6\4\p\5\i\v\w\c\i\l\5\3\9\o\0\p\q\l\h\o\z\7\6\w\q\0\j\g\t\o\w\n\k\k\h\m\8\m\t\g\n\u\x\7\z\v\v\2\d\3\z\s\w\m\e\7\e\j\s\s\o\t\4\f\1\3\r\a\7\4\a\s\3\h\6\8\s\i\b\v\i\e\j\8\n\v\q\0\j\8\v\q\f\r\m\o\i\c\a\a\v\3\t\t\e\3\4\q\z\x\4\a\s\4\3\s\z\2\s\d\9\o\a\0\1\2\7\0\1\e\5\c\v\4\s\4\j\x\k\x\a\d\t\u\k\e\2\5\h\n\g\e\f\a\e\x\n\c\4\4\7\2\i\b\d\5\a\6\u\8\m\6\a\9\y\2\3\m\b\1\u\n\0\q\p\j\2\u\c\p\z\w\b\d\5\s\g\t\l\z\w\f\9\k\j\3\h\7\x\c\t\3\6\p\x\j\u\d\h\b\4\y\6\f\y\2\d\u\3\z\n\s\a\t\q\4\t\9\a\f\s\m\g\s\a\w\d\m\g\2\o\4\u\q\u\h\j\x\c\v\1\s\6\p\k\f\s\4\2\y\4\e\5\t\m\n\7\5\p\z\l\c\l\6\7\m\i\9\c\w\6\j\h\1\l\c\8\2\5\z\h\u\u\d\q\2\z\5\8\l\i\x\a\j\3\d\h\t\9\l\e\b\9\t\t\g\p\6\0\t\m\a\q\u\e\c\o\e\s\r\6\x\4\e\u\n\s\y\w\g\k\9\7\2\g\n\j\z\d\v\m\b\i\u\f\v\x\z\e\n\y\h\v\h\3\y\f\d\4\b\r\h\8\d\f\x\l\9\d\d\k\r ]] 00:38:37.122 00:38:37.122 real 0m5.259s 00:38:37.122 user 0m2.478s 00:38:37.122 sys 0m1.654s 00:38:37.122 ************************************ 00:38:37.122 END TEST dd_flags_misc 00:38:37.122 ************************************ 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:38:37.122 * Second test run, using AIO 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:37.122 12:20:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:37.380 ************************************ 00:38:37.380 START TEST dd_flag_append_forced_aio 00:38:37.380 ************************************ 00:38:37.380 12:20:35 
spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1121 -- # append 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=1trj54lq2z0hvp94mp8e3hj6sls0xqke 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=zrnqr9sa8sy4d4pyhrv6qqih7ztcdius 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 1trj54lq2z0hvp94mp8e3hj6sls0xqke 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s zrnqr9sa8sy4d4pyhrv6qqih7ztcdius 00:38:37.380 12:20:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:38:37.380 [2024-07-21 12:20:36.060302] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:37.380 [2024-07-21 12:20:36.060543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175812 ] 00:38:37.380 [2024-07-21 12:20:36.227457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.638 [2024-07-21 12:20:36.294852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:37.896  Copying: 32/32 [B] (average 31 kBps) 00:38:37.896 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ zrnqr9sa8sy4d4pyhrv6qqih7ztcdius1trj54lq2z0hvp94mp8e3hj6sls0xqke == \z\r\n\q\r\9\s\a\8\s\y\4\d\4\p\y\h\r\v\6\q\q\i\h\7\z\t\c\d\i\u\s\1\t\r\j\5\4\l\q\2\z\0\h\v\p\9\4\m\p\8\e\3\h\j\6\s\l\s\0\x\q\k\e ]] 00:38:37.896 00:38:37.896 real 0m0.665s 00:38:37.896 user 0m0.295s 00:38:37.896 sys 0m0.223s 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:37.896 ************************************ 00:38:37.896 END TEST dd_flag_append_forced_aio 00:38:37.896 ************************************ 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:37.896 ************************************ 00:38:37.896 START TEST dd_flag_directory_forced_aio 00:38:37.896 ************************************ 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1121 -- # directory 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.896 12:20:36 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:37.896 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:37.897 12:20:36 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:37.897 [2024-07-21 12:20:36.756587] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:37.897 [2024-07-21 12:20:36.756835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175854 ] 00:38:38.155 [2024-07-21 12:20:36.907072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.155 [2024-07-21 12:20:36.962592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:38.413 [2024-07-21 12:20:37.042216] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:38.413 [2024-07-21 12:20:37.042343] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:38.413 [2024-07-21 12:20:37.042418] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:38.413 [2024-07-21 12:20:37.162309] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:38.413 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:38.414 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:38.414 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:38:38.671 [2024-07-21 12:20:37.320335] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:38.671 [2024-07-21 12:20:37.320584] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175869 ] 00:38:38.671 [2024-07-21 12:20:37.486206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:38.929 [2024-07-21 12:20:37.549346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:38.929 [2024-07-21 12:20:37.631624] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:38.929 [2024-07-21 12:20:37.631750] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:38:38.929 [2024-07-21 12:20:37.631803] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:38.929 [2024-07-21 12:20:37.747993] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@651 -- # es=236 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@660 -- # es=108 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:39.187 00:38:39.187 real 0m1.154s 00:38:39.187 user 0m0.552s 00:38:39.187 sys 0m0.403s 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:39.187 ************************************ 00:38:39.187 END 
TEST dd_flag_directory_forced_aio 00:38:39.187 ************************************ 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:39.187 12:20:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:39.187 ************************************ 00:38:39.187 START TEST dd_flag_nofollow_forced_aio 00:38:39.187 ************************************ 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1121 -- # nofollow 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:39.188 12:20:37 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:39.188 [2024-07-21 12:20:37.985163] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:39.188 [2024-07-21 12:20:37.985447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175901 ] 00:38:39.446 [2024-07-21 12:20:38.148736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.446 [2024-07-21 12:20:38.207684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:39.446 [2024-07-21 12:20:38.287904] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:39.446 [2024-07-21 12:20:38.288017] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:38:39.446 [2024-07-21 12:20:38.288069] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:39.704 [2024-07-21 12:20:38.404318] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@648 -- # local es=0 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:38:39.704 12:20:38 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:38:39.704 [2024-07-21 12:20:38.552056] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:39.704 [2024-07-21 12:20:38.552275] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175916 ] 00:38:39.962 [2024-07-21 12:20:38.702847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.962 [2024-07-21 12:20:38.757562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:40.233 [2024-07-21 12:20:38.836895] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:40.233 [2024-07-21 12:20:38.837024] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:38:40.233 [2024-07-21 12:20:38.837086] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:40.233 [2024-07-21 12:20:38.953020] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@651 -- # es=216 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@660 -- # es=88 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # case "$es" in 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@668 -- # es=1 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:40.233 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:40.510 [2024-07-21 12:20:39.123659] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:40.511 [2024-07-21 12:20:39.123928] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175919 ] 00:38:40.511 [2024-07-21 12:20:39.291955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:40.511 [2024-07-21 12:20:39.354103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:41.035  Copying: 512/512 [B] (average 500 kBps) 00:38:41.035 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ rn661w4li64pczjz7wbw5clk7sawb23w80l57m9d8xi9yhhltmxmewe2r22ocydn96wyhzizde1z0xdsd0fs1t144e8ehjld0et7tk6csibmxjqbonz1hw3zi02989c8asstr82g9yuqh5hvf4uwb1yrmb27p6u7tkgr890hupbeppfw1mxpbx54xp1gru4un6vxtdn6lxnx76sjd72qwcrxywcj2th6aeqihtcf4bh3akegeitwwoew2ctvre0q2avuc298w8monlai2dcv4vpl0rgnc1kz5mdrdp5nxlzszjpc3gr5ze7b3edodflhfcumyt5yimnh68gltygfnb140umqqkgszvff5q1mvv72jvixek8gi8cbt98bv9t01s3uoomjzf7p915z1wq499sm4ok4kobwpt3xwc0lmt5fk7x4yhfxfpvdgbfjaatq7m9ltt9b6vli5zj16ukxozz44vh0gdvn6q49zuybthbni3ss40fpm3s5mxefjbjc == \r\n\6\6\1\w\4\l\i\6\4\p\c\z\j\z\7\w\b\w\5\c\l\k\7\s\a\w\b\2\3\w\8\0\l\5\7\m\9\d\8\x\i\9\y\h\h\l\t\m\x\m\e\w\e\2\r\2\2\o\c\y\d\n\9\6\w\y\h\z\i\z\d\e\1\z\0\x\d\s\d\0\f\s\1\t\1\4\4\e\8\e\h\j\l\d\0\e\t\7\t\k\6\c\s\i\b\m\x\j\q\b\o\n\z\1\h\w\3\z\i\0\2\9\8\9\c\8\a\s\s\t\r\8\2\g\9\y\u\q\h\5\h\v\f\4\u\w\b\1\y\r\m\b\2\7\p\6\u\7\t\k\g\r\8\9\0\h\u\p\b\e\p\p\f\w\1\m\x\p\b\x\5\4\x\p\1\g\r\u\4\u\n\6\v\x\t\d\n\6\l\x\n\x\7\6\s\j\d\7\2\q\w\c\r\x\y\w\c\j\2\t\h\6\a\e\q\i\h\t\c\f\4\b\h\3\a\k\e\g\e\i\t\w\w\o\e\w\2\c\t\v\r\e\0\q\2\a\v\u\c\2\9\8\w\8\m\o\n\l\a\i\2\d\c\v\4\v\p\l\0\r\g\n\c\1\k\z\5\m\d\r\d\p\5\n\x\l\z\s\z\j\p\c\3\g\r\5\z\e\7\b\3\e\d\o\d\f\l\h\f\c\u\m\y\t\5\y\i\m\n\h\6\8\g\l\t\y\g\f\n\b\1\4\0\u\m\q\q\k\g\s\z\v\f\f\5\q\1\m\v\v\7\2\j\v\i\x\e\k\8\g\i\8\c\b\t\9\8\b\v\9\t\0\1\s\3\u\o\o\m\j\z\f\7\p\9\1\5\z\1\w\q\4\9\9\s\m\4\o\k\4\k\o\b\w\p\t\3\x\w\c\0\l\m\t\5\f\k\7\x\4\y\h\f\x\f\p\v\d\g\b\f\j\a\a\t\q\7\m\9\l\t\t\9\b\6\v\l\i\5\z\j\1\6\u\k\x\o\z\z\4\4\v\h\0\g\d\v\n\6\q\4\9\z\u\y\b\t\h\b\n\i\3\s\s\4\0\f\p\m\3\s\5\m\x\e\f\j\b\j\c ]] 00:38:41.035 00:38:41.035 real 0m1.785s 00:38:41.035 user 0m0.858s 00:38:41.035 sys 0m0.593s 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:41.035 ************************************ 00:38:41.035 END TEST dd_flag_nofollow_forced_aio 00:38:41.035 ************************************ 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:41.035 ************************************ 00:38:41.035 START TEST dd_flag_noatime_forced_aio 00:38:41.035 ************************************ 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1121 -- # noatime 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- 
dd/posix.sh@54 -- # local atime_of 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1721564439 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1721564439 00:38:41.035 12:20:39 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:38:41.970 12:20:40 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:41.970 [2024-07-21 12:20:40.836815] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:41.970 [2024-07-21 12:20:40.837331] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175975 ] 00:38:42.228 [2024-07-21 12:20:41.006833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.228 [2024-07-21 12:20:41.065590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:42.745  Copying: 512/512 [B] (average 500 kBps) 00:38:42.745 00:38:42.745 12:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:42.745 12:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1721564439 )) 00:38:42.745 12:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:42.745 12:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1721564439 )) 00:38:42.745 12:20:41 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:42.745 [2024-07-21 12:20:41.499735] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:42.746 [2024-07-21 12:20:41.500187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid175990 ] 00:38:43.004 [2024-07-21 12:20:41.665661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.004 [2024-07-21 12:20:41.728920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.263  Copying: 512/512 [B] (average 500 kBps) 00:38:43.263 00:38:43.263 12:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:43.263 ************************************ 00:38:43.263 END TEST dd_flag_noatime_forced_aio 00:38:43.263 ************************************ 00:38:43.263 12:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1721564441 )) 00:38:43.263 00:38:43.263 real 0m2.331s 00:38:43.263 user 0m0.672s 00:38:43.263 sys 0m0.378s 00:38:43.263 12:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:43.263 12:20:42 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:43.263 12:20:42 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:38:43.263 12:20:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:43.263 12:20:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:43.263 12:20:42 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:43.522 ************************************ 00:38:43.522 START TEST dd_flags_misc_forced_aio 00:38:43.522 ************************************ 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1121 -- # io 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:43.522 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:43.522 [2024-07-21 12:20:42.195863] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:43.522 [2024-07-21 12:20:42.196136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176020 ] 00:38:43.522 [2024-07-21 12:20:42.345125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:43.781 [2024-07-21 12:20:42.399818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.040  Copying: 512/512 [B] (average 500 kBps) 00:38:44.040 00:38:44.040 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gv1ilr6felygw9hait9m3jise20mywg483nmffruytxvifb2v6tek9tbo8wh77yi6t9v9zrcwj7d1m4ku677subluxu6jfl3rykqx1kkk2vxbrha7g2fg4lgufhtlkfai47kdfvze7x8870igx1z25igbv9u91tqixz45vhnzlb6g7rcu5or56wfpeon94l536qkp4qiguyjmnuq5620f3g0ttvstqs5gs1obdxh6lfkw410emhftefvrmqqb7jhpvjoq8mbz1iyljvoe4olcs42cphrxxh3tckzar5ab90ue9imv4f8ggf4nvldf4wf3o4wkmo8t789pqr1blefag6io04hsk2cjbmq62fq1p9xwpj2cjsv2eyd8ah3lha10dpt4i5oa52ha561jt1v241spq4xsifcom7vjc7xl7mpe3zumr39p3weo6r9co4u060ms23gkixaagzsvfgmjf1soa6taszllvgwdh7g3pd4awia7b4b48feqrsdb2je == \g\v\1\i\l\r\6\f\e\l\y\g\w\9\h\a\i\t\9\m\3\j\i\s\e\2\0\m\y\w\g\4\8\3\n\m\f\f\r\u\y\t\x\v\i\f\b\2\v\6\t\e\k\9\t\b\o\8\w\h\7\7\y\i\6\t\9\v\9\z\r\c\w\j\7\d\1\m\4\k\u\6\7\7\s\u\b\l\u\x\u\6\j\f\l\3\r\y\k\q\x\1\k\k\k\2\v\x\b\r\h\a\7\g\2\f\g\4\l\g\u\f\h\t\l\k\f\a\i\4\7\k\d\f\v\z\e\7\x\8\8\7\0\i\g\x\1\z\2\5\i\g\b\v\9\u\9\1\t\q\i\x\z\4\5\v\h\n\z\l\b\6\g\7\r\c\u\5\o\r\5\6\w\f\p\e\o\n\9\4\l\5\3\6\q\k\p\4\q\i\g\u\y\j\m\n\u\q\5\6\2\0\f\3\g\0\t\t\v\s\t\q\s\5\g\s\1\o\b\d\x\h\6\l\f\k\w\4\1\0\e\m\h\f\t\e\f\v\r\m\q\q\b\7\j\h\p\v\j\o\q\8\m\b\z\1\i\y\l\j\v\o\e\4\o\l\c\s\4\2\c\p\h\r\x\x\h\3\t\c\k\z\a\r\5\a\b\9\0\u\e\9\i\m\v\4\f\8\g\g\f\4\n\v\l\d\f\4\w\f\3\o\4\w\k\m\o\8\t\7\8\9\p\q\r\1\b\l\e\f\a\g\6\i\o\0\4\h\s\k\2\c\j\b\m\q\6\2\f\q\1\p\9\x\w\p\j\2\c\j\s\v\2\e\y\d\8\a\h\3\l\h\a\1\0\d\p\t\4\i\5\o\a\5\2\h\a\5\6\1\j\t\1\v\2\4\1\s\p\q\4\x\s\i\f\c\o\m\7\v\j\c\7\x\l\7\m\p\e\3\z\u\m\r\3\9\p\3\w\e\o\6\r\9\c\o\4\u\0\6\0\m\s\2\3\g\k\i\x\a\a\g\z\s\v\f\g\m\j\f\1\s\o\a\6\t\a\s\z\l\l\v\g\w\d\h\7\g\3\p\d\4\a\w\i\a\7\b\4\b\4\8\f\e\q\r\s\d\b\2\j\e ]] 00:38:44.040 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:44.040 12:20:42 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:44.040 [2024-07-21 12:20:42.808659] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:44.040 [2024-07-21 12:20:42.808902] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176035 ] 00:38:44.305 [2024-07-21 12:20:42.974824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.305 [2024-07-21 12:20:43.033228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:44.566  Copying: 512/512 [B] (average 500 kBps) 00:38:44.566 00:38:44.566 12:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gv1ilr6felygw9hait9m3jise20mywg483nmffruytxvifb2v6tek9tbo8wh77yi6t9v9zrcwj7d1m4ku677subluxu6jfl3rykqx1kkk2vxbrha7g2fg4lgufhtlkfai47kdfvze7x8870igx1z25igbv9u91tqixz45vhnzlb6g7rcu5or56wfpeon94l536qkp4qiguyjmnuq5620f3g0ttvstqs5gs1obdxh6lfkw410emhftefvrmqqb7jhpvjoq8mbz1iyljvoe4olcs42cphrxxh3tckzar5ab90ue9imv4f8ggf4nvldf4wf3o4wkmo8t789pqr1blefag6io04hsk2cjbmq62fq1p9xwpj2cjsv2eyd8ah3lha10dpt4i5oa52ha561jt1v241spq4xsifcom7vjc7xl7mpe3zumr39p3weo6r9co4u060ms23gkixaagzsvfgmjf1soa6taszllvgwdh7g3pd4awia7b4b48feqrsdb2je == \g\v\1\i\l\r\6\f\e\l\y\g\w\9\h\a\i\t\9\m\3\j\i\s\e\2\0\m\y\w\g\4\8\3\n\m\f\f\r\u\y\t\x\v\i\f\b\2\v\6\t\e\k\9\t\b\o\8\w\h\7\7\y\i\6\t\9\v\9\z\r\c\w\j\7\d\1\m\4\k\u\6\7\7\s\u\b\l\u\x\u\6\j\f\l\3\r\y\k\q\x\1\k\k\k\2\v\x\b\r\h\a\7\g\2\f\g\4\l\g\u\f\h\t\l\k\f\a\i\4\7\k\d\f\v\z\e\7\x\8\8\7\0\i\g\x\1\z\2\5\i\g\b\v\9\u\9\1\t\q\i\x\z\4\5\v\h\n\z\l\b\6\g\7\r\c\u\5\o\r\5\6\w\f\p\e\o\n\9\4\l\5\3\6\q\k\p\4\q\i\g\u\y\j\m\n\u\q\5\6\2\0\f\3\g\0\t\t\v\s\t\q\s\5\g\s\1\o\b\d\x\h\6\l\f\k\w\4\1\0\e\m\h\f\t\e\f\v\r\m\q\q\b\7\j\h\p\v\j\o\q\8\m\b\z\1\i\y\l\j\v\o\e\4\o\l\c\s\4\2\c\p\h\r\x\x\h\3\t\c\k\z\a\r\5\a\b\9\0\u\e\9\i\m\v\4\f\8\g\g\f\4\n\v\l\d\f\4\w\f\3\o\4\w\k\m\o\8\t\7\8\9\p\q\r\1\b\l\e\f\a\g\6\i\o\0\4\h\s\k\2\c\j\b\m\q\6\2\f\q\1\p\9\x\w\p\j\2\c\j\s\v\2\e\y\d\8\a\h\3\l\h\a\1\0\d\p\t\4\i\5\o\a\5\2\h\a\5\6\1\j\t\1\v\2\4\1\s\p\q\4\x\s\i\f\c\o\m\7\v\j\c\7\x\l\7\m\p\e\3\z\u\m\r\3\9\p\3\w\e\o\6\r\9\c\o\4\u\0\6\0\m\s\2\3\g\k\i\x\a\a\g\z\s\v\f\g\m\j\f\1\s\o\a\6\t\a\s\z\l\l\v\g\w\d\h\7\g\3\p\d\4\a\w\i\a\7\b\4\b\4\8\f\e\q\r\s\d\b\2\j\e ]] 00:38:44.566 12:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:44.566 12:20:43 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:44.824 [2024-07-21 12:20:43.440507] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:44.824 [2024-07-21 12:20:43.440740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176049 ] 00:38:44.824 [2024-07-21 12:20:43.607066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:44.824 [2024-07-21 12:20:43.660159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.350  Copying: 512/512 [B] (average 100 kBps) 00:38:45.350 00:38:45.350 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gv1ilr6felygw9hait9m3jise20mywg483nmffruytxvifb2v6tek9tbo8wh77yi6t9v9zrcwj7d1m4ku677subluxu6jfl3rykqx1kkk2vxbrha7g2fg4lgufhtlkfai47kdfvze7x8870igx1z25igbv9u91tqixz45vhnzlb6g7rcu5or56wfpeon94l536qkp4qiguyjmnuq5620f3g0ttvstqs5gs1obdxh6lfkw410emhftefvrmqqb7jhpvjoq8mbz1iyljvoe4olcs42cphrxxh3tckzar5ab90ue9imv4f8ggf4nvldf4wf3o4wkmo8t789pqr1blefag6io04hsk2cjbmq62fq1p9xwpj2cjsv2eyd8ah3lha10dpt4i5oa52ha561jt1v241spq4xsifcom7vjc7xl7mpe3zumr39p3weo6r9co4u060ms23gkixaagzsvfgmjf1soa6taszllvgwdh7g3pd4awia7b4b48feqrsdb2je == \g\v\1\i\l\r\6\f\e\l\y\g\w\9\h\a\i\t\9\m\3\j\i\s\e\2\0\m\y\w\g\4\8\3\n\m\f\f\r\u\y\t\x\v\i\f\b\2\v\6\t\e\k\9\t\b\o\8\w\h\7\7\y\i\6\t\9\v\9\z\r\c\w\j\7\d\1\m\4\k\u\6\7\7\s\u\b\l\u\x\u\6\j\f\l\3\r\y\k\q\x\1\k\k\k\2\v\x\b\r\h\a\7\g\2\f\g\4\l\g\u\f\h\t\l\k\f\a\i\4\7\k\d\f\v\z\e\7\x\8\8\7\0\i\g\x\1\z\2\5\i\g\b\v\9\u\9\1\t\q\i\x\z\4\5\v\h\n\z\l\b\6\g\7\r\c\u\5\o\r\5\6\w\f\p\e\o\n\9\4\l\5\3\6\q\k\p\4\q\i\g\u\y\j\m\n\u\q\5\6\2\0\f\3\g\0\t\t\v\s\t\q\s\5\g\s\1\o\b\d\x\h\6\l\f\k\w\4\1\0\e\m\h\f\t\e\f\v\r\m\q\q\b\7\j\h\p\v\j\o\q\8\m\b\z\1\i\y\l\j\v\o\e\4\o\l\c\s\4\2\c\p\h\r\x\x\h\3\t\c\k\z\a\r\5\a\b\9\0\u\e\9\i\m\v\4\f\8\g\g\f\4\n\v\l\d\f\4\w\f\3\o\4\w\k\m\o\8\t\7\8\9\p\q\r\1\b\l\e\f\a\g\6\i\o\0\4\h\s\k\2\c\j\b\m\q\6\2\f\q\1\p\9\x\w\p\j\2\c\j\s\v\2\e\y\d\8\a\h\3\l\h\a\1\0\d\p\t\4\i\5\o\a\5\2\h\a\5\6\1\j\t\1\v\2\4\1\s\p\q\4\x\s\i\f\c\o\m\7\v\j\c\7\x\l\7\m\p\e\3\z\u\m\r\3\9\p\3\w\e\o\6\r\9\c\o\4\u\0\6\0\m\s\2\3\g\k\i\x\a\a\g\z\s\v\f\g\m\j\f\1\s\o\a\6\t\a\s\z\l\l\v\g\w\d\h\7\g\3\p\d\4\a\w\i\a\7\b\4\b\4\8\f\e\q\r\s\d\b\2\j\e ]] 00:38:45.350 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:45.350 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:45.350 [2024-07-21 12:20:44.080827] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:45.350 [2024-07-21 12:20:44.081075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176059 ] 00:38:45.607 [2024-07-21 12:20:44.247371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.607 [2024-07-21 12:20:44.314563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:45.866  Copying: 512/512 [B] (average 166 kBps) 00:38:45.866 00:38:45.866 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ gv1ilr6felygw9hait9m3jise20mywg483nmffruytxvifb2v6tek9tbo8wh77yi6t9v9zrcwj7d1m4ku677subluxu6jfl3rykqx1kkk2vxbrha7g2fg4lgufhtlkfai47kdfvze7x8870igx1z25igbv9u91tqixz45vhnzlb6g7rcu5or56wfpeon94l536qkp4qiguyjmnuq5620f3g0ttvstqs5gs1obdxh6lfkw410emhftefvrmqqb7jhpvjoq8mbz1iyljvoe4olcs42cphrxxh3tckzar5ab90ue9imv4f8ggf4nvldf4wf3o4wkmo8t789pqr1blefag6io04hsk2cjbmq62fq1p9xwpj2cjsv2eyd8ah3lha10dpt4i5oa52ha561jt1v241spq4xsifcom7vjc7xl7mpe3zumr39p3weo6r9co4u060ms23gkixaagzsvfgmjf1soa6taszllvgwdh7g3pd4awia7b4b48feqrsdb2je == \g\v\1\i\l\r\6\f\e\l\y\g\w\9\h\a\i\t\9\m\3\j\i\s\e\2\0\m\y\w\g\4\8\3\n\m\f\f\r\u\y\t\x\v\i\f\b\2\v\6\t\e\k\9\t\b\o\8\w\h\7\7\y\i\6\t\9\v\9\z\r\c\w\j\7\d\1\m\4\k\u\6\7\7\s\u\b\l\u\x\u\6\j\f\l\3\r\y\k\q\x\1\k\k\k\2\v\x\b\r\h\a\7\g\2\f\g\4\l\g\u\f\h\t\l\k\f\a\i\4\7\k\d\f\v\z\e\7\x\8\8\7\0\i\g\x\1\z\2\5\i\g\b\v\9\u\9\1\t\q\i\x\z\4\5\v\h\n\z\l\b\6\g\7\r\c\u\5\o\r\5\6\w\f\p\e\o\n\9\4\l\5\3\6\q\k\p\4\q\i\g\u\y\j\m\n\u\q\5\6\2\0\f\3\g\0\t\t\v\s\t\q\s\5\g\s\1\o\b\d\x\h\6\l\f\k\w\4\1\0\e\m\h\f\t\e\f\v\r\m\q\q\b\7\j\h\p\v\j\o\q\8\m\b\z\1\i\y\l\j\v\o\e\4\o\l\c\s\4\2\c\p\h\r\x\x\h\3\t\c\k\z\a\r\5\a\b\9\0\u\e\9\i\m\v\4\f\8\g\g\f\4\n\v\l\d\f\4\w\f\3\o\4\w\k\m\o\8\t\7\8\9\p\q\r\1\b\l\e\f\a\g\6\i\o\0\4\h\s\k\2\c\j\b\m\q\6\2\f\q\1\p\9\x\w\p\j\2\c\j\s\v\2\e\y\d\8\a\h\3\l\h\a\1\0\d\p\t\4\i\5\o\a\5\2\h\a\5\6\1\j\t\1\v\2\4\1\s\p\q\4\x\s\i\f\c\o\m\7\v\j\c\7\x\l\7\m\p\e\3\z\u\m\r\3\9\p\3\w\e\o\6\r\9\c\o\4\u\0\6\0\m\s\2\3\g\k\i\x\a\a\g\z\s\v\f\g\m\j\f\1\s\o\a\6\t\a\s\z\l\l\v\g\w\d\h\7\g\3\p\d\4\a\w\i\a\7\b\4\b\4\8\f\e\q\r\s\d\b\2\j\e ]] 00:38:45.866 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:38:45.866 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:38:45.866 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:38:45.866 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:45.866 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:45.866 12:20:44 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:38:46.124 [2024-07-21 12:20:44.735978] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:46.124 [2024-07-21 12:20:44.736195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176071 ] 00:38:46.124 [2024-07-21 12:20:44.884351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.124 [2024-07-21 12:20:44.938213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.640  Copying: 512/512 [B] (average 500 kBps) 00:38:46.640 00:38:46.640 12:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ uvpfmknled3bks3e365yl87t93ak9gl2i450vqzixgnh49bzbddxmnxryuw8p4w0to3fypo6o3c7ryar3pf1vr3u3h3gm1ejdmedh2j4yx1p25f4kc4eyjgjv16286uedec8h37ez0smgfahz7szysv74cm3vkrrvx8kosbsauddfpfx3nghy7yrt4xdkxp6thb7c0gkcez6d20dacqcnjmntxlo3q7uvzjv6hm9nn5hvm1sj0aaqtpvvj9ge3ibc4ilq9gy6ovaf7hpzmzhqzpk1oay6di3anz3kxfp5j0n3iowee99io1fqaq3diengpnoriidc5rthqq1veu7k7ld8cqi51kfgjk1q2t8fgxn1t09vignb2kogl18wv1d5mp3fgnj1pcmm3ao82844zjenmtq4z06gzlq42av7gpttzmsvg2id32gg902eepxdrywb8062p2t86t7jadqnqclby25zwv6kci5ovxdmu15eavkg9owee6umc8m4y5m == \u\v\p\f\m\k\n\l\e\d\3\b\k\s\3\e\3\6\5\y\l\8\7\t\9\3\a\k\9\g\l\2\i\4\5\0\v\q\z\i\x\g\n\h\4\9\b\z\b\d\d\x\m\n\x\r\y\u\w\8\p\4\w\0\t\o\3\f\y\p\o\6\o\3\c\7\r\y\a\r\3\p\f\1\v\r\3\u\3\h\3\g\m\1\e\j\d\m\e\d\h\2\j\4\y\x\1\p\2\5\f\4\k\c\4\e\y\j\g\j\v\1\6\2\8\6\u\e\d\e\c\8\h\3\7\e\z\0\s\m\g\f\a\h\z\7\s\z\y\s\v\7\4\c\m\3\v\k\r\r\v\x\8\k\o\s\b\s\a\u\d\d\f\p\f\x\3\n\g\h\y\7\y\r\t\4\x\d\k\x\p\6\t\h\b\7\c\0\g\k\c\e\z\6\d\2\0\d\a\c\q\c\n\j\m\n\t\x\l\o\3\q\7\u\v\z\j\v\6\h\m\9\n\n\5\h\v\m\1\s\j\0\a\a\q\t\p\v\v\j\9\g\e\3\i\b\c\4\i\l\q\9\g\y\6\o\v\a\f\7\h\p\z\m\z\h\q\z\p\k\1\o\a\y\6\d\i\3\a\n\z\3\k\x\f\p\5\j\0\n\3\i\o\w\e\e\9\9\i\o\1\f\q\a\q\3\d\i\e\n\g\p\n\o\r\i\i\d\c\5\r\t\h\q\q\1\v\e\u\7\k\7\l\d\8\c\q\i\5\1\k\f\g\j\k\1\q\2\t\8\f\g\x\n\1\t\0\9\v\i\g\n\b\2\k\o\g\l\1\8\w\v\1\d\5\m\p\3\f\g\n\j\1\p\c\m\m\3\a\o\8\2\8\4\4\z\j\e\n\m\t\q\4\z\0\6\g\z\l\q\4\2\a\v\7\g\p\t\t\z\m\s\v\g\2\i\d\3\2\g\g\9\0\2\e\e\p\x\d\r\y\w\b\8\0\6\2\p\2\t\8\6\t\7\j\a\d\q\n\q\c\l\b\y\2\5\z\w\v\6\k\c\i\5\o\v\x\d\m\u\1\5\e\a\v\k\g\9\o\w\e\e\6\u\m\c\8\m\4\y\5\m ]] 00:38:46.640 12:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:46.640 12:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:38:46.640 [2024-07-21 12:20:45.319217] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:46.640 [2024-07-21 12:20:45.319382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176088 ] 00:38:46.640 [2024-07-21 12:20:45.468159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.898 [2024-07-21 12:20:45.523193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.156  Copying: 512/512 [B] (average 500 kBps) 00:38:47.156 00:38:47.156 12:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ uvpfmknled3bks3e365yl87t93ak9gl2i450vqzixgnh49bzbddxmnxryuw8p4w0to3fypo6o3c7ryar3pf1vr3u3h3gm1ejdmedh2j4yx1p25f4kc4eyjgjv16286uedec8h37ez0smgfahz7szysv74cm3vkrrvx8kosbsauddfpfx3nghy7yrt4xdkxp6thb7c0gkcez6d20dacqcnjmntxlo3q7uvzjv6hm9nn5hvm1sj0aaqtpvvj9ge3ibc4ilq9gy6ovaf7hpzmzhqzpk1oay6di3anz3kxfp5j0n3iowee99io1fqaq3diengpnoriidc5rthqq1veu7k7ld8cqi51kfgjk1q2t8fgxn1t09vignb2kogl18wv1d5mp3fgnj1pcmm3ao82844zjenmtq4z06gzlq42av7gpttzmsvg2id32gg902eepxdrywb8062p2t86t7jadqnqclby25zwv6kci5ovxdmu15eavkg9owee6umc8m4y5m == \u\v\p\f\m\k\n\l\e\d\3\b\k\s\3\e\3\6\5\y\l\8\7\t\9\3\a\k\9\g\l\2\i\4\5\0\v\q\z\i\x\g\n\h\4\9\b\z\b\d\d\x\m\n\x\r\y\u\w\8\p\4\w\0\t\o\3\f\y\p\o\6\o\3\c\7\r\y\a\r\3\p\f\1\v\r\3\u\3\h\3\g\m\1\e\j\d\m\e\d\h\2\j\4\y\x\1\p\2\5\f\4\k\c\4\e\y\j\g\j\v\1\6\2\8\6\u\e\d\e\c\8\h\3\7\e\z\0\s\m\g\f\a\h\z\7\s\z\y\s\v\7\4\c\m\3\v\k\r\r\v\x\8\k\o\s\b\s\a\u\d\d\f\p\f\x\3\n\g\h\y\7\y\r\t\4\x\d\k\x\p\6\t\h\b\7\c\0\g\k\c\e\z\6\d\2\0\d\a\c\q\c\n\j\m\n\t\x\l\o\3\q\7\u\v\z\j\v\6\h\m\9\n\n\5\h\v\m\1\s\j\0\a\a\q\t\p\v\v\j\9\g\e\3\i\b\c\4\i\l\q\9\g\y\6\o\v\a\f\7\h\p\z\m\z\h\q\z\p\k\1\o\a\y\6\d\i\3\a\n\z\3\k\x\f\p\5\j\0\n\3\i\o\w\e\e\9\9\i\o\1\f\q\a\q\3\d\i\e\n\g\p\n\o\r\i\i\d\c\5\r\t\h\q\q\1\v\e\u\7\k\7\l\d\8\c\q\i\5\1\k\f\g\j\k\1\q\2\t\8\f\g\x\n\1\t\0\9\v\i\g\n\b\2\k\o\g\l\1\8\w\v\1\d\5\m\p\3\f\g\n\j\1\p\c\m\m\3\a\o\8\2\8\4\4\z\j\e\n\m\t\q\4\z\0\6\g\z\l\q\4\2\a\v\7\g\p\t\t\z\m\s\v\g\2\i\d\3\2\g\g\9\0\2\e\e\p\x\d\r\y\w\b\8\0\6\2\p\2\t\8\6\t\7\j\a\d\q\n\q\c\l\b\y\2\5\z\w\v\6\k\c\i\5\o\v\x\d\m\u\1\5\e\a\v\k\g\9\o\w\e\e\6\u\m\c\8\m\4\y\5\m ]] 00:38:47.156 12:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:47.156 12:20:45 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:38:47.156 [2024-07-21 12:20:45.916937] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:47.156 [2024-07-21 12:20:45.917102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176093 ] 00:38:47.414 [2024-07-21 12:20:46.065678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.414 [2024-07-21 12:20:46.120542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:47.686  Copying: 512/512 [B] (average 250 kBps) 00:38:47.686 00:38:47.687 12:20:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ uvpfmknled3bks3e365yl87t93ak9gl2i450vqzixgnh49bzbddxmnxryuw8p4w0to3fypo6o3c7ryar3pf1vr3u3h3gm1ejdmedh2j4yx1p25f4kc4eyjgjv16286uedec8h37ez0smgfahz7szysv74cm3vkrrvx8kosbsauddfpfx3nghy7yrt4xdkxp6thb7c0gkcez6d20dacqcnjmntxlo3q7uvzjv6hm9nn5hvm1sj0aaqtpvvj9ge3ibc4ilq9gy6ovaf7hpzmzhqzpk1oay6di3anz3kxfp5j0n3iowee99io1fqaq3diengpnoriidc5rthqq1veu7k7ld8cqi51kfgjk1q2t8fgxn1t09vignb2kogl18wv1d5mp3fgnj1pcmm3ao82844zjenmtq4z06gzlq42av7gpttzmsvg2id32gg902eepxdrywb8062p2t86t7jadqnqclby25zwv6kci5ovxdmu15eavkg9owee6umc8m4y5m == \u\v\p\f\m\k\n\l\e\d\3\b\k\s\3\e\3\6\5\y\l\8\7\t\9\3\a\k\9\g\l\2\i\4\5\0\v\q\z\i\x\g\n\h\4\9\b\z\b\d\d\x\m\n\x\r\y\u\w\8\p\4\w\0\t\o\3\f\y\p\o\6\o\3\c\7\r\y\a\r\3\p\f\1\v\r\3\u\3\h\3\g\m\1\e\j\d\m\e\d\h\2\j\4\y\x\1\p\2\5\f\4\k\c\4\e\y\j\g\j\v\1\6\2\8\6\u\e\d\e\c\8\h\3\7\e\z\0\s\m\g\f\a\h\z\7\s\z\y\s\v\7\4\c\m\3\v\k\r\r\v\x\8\k\o\s\b\s\a\u\d\d\f\p\f\x\3\n\g\h\y\7\y\r\t\4\x\d\k\x\p\6\t\h\b\7\c\0\g\k\c\e\z\6\d\2\0\d\a\c\q\c\n\j\m\n\t\x\l\o\3\q\7\u\v\z\j\v\6\h\m\9\n\n\5\h\v\m\1\s\j\0\a\a\q\t\p\v\v\j\9\g\e\3\i\b\c\4\i\l\q\9\g\y\6\o\v\a\f\7\h\p\z\m\z\h\q\z\p\k\1\o\a\y\6\d\i\3\a\n\z\3\k\x\f\p\5\j\0\n\3\i\o\w\e\e\9\9\i\o\1\f\q\a\q\3\d\i\e\n\g\p\n\o\r\i\i\d\c\5\r\t\h\q\q\1\v\e\u\7\k\7\l\d\8\c\q\i\5\1\k\f\g\j\k\1\q\2\t\8\f\g\x\n\1\t\0\9\v\i\g\n\b\2\k\o\g\l\1\8\w\v\1\d\5\m\p\3\f\g\n\j\1\p\c\m\m\3\a\o\8\2\8\4\4\z\j\e\n\m\t\q\4\z\0\6\g\z\l\q\4\2\a\v\7\g\p\t\t\z\m\s\v\g\2\i\d\3\2\g\g\9\0\2\e\e\p\x\d\r\y\w\b\8\0\6\2\p\2\t\8\6\t\7\j\a\d\q\n\q\c\l\b\y\2\5\z\w\v\6\k\c\i\5\o\v\x\d\m\u\1\5\e\a\v\k\g\9\o\w\e\e\6\u\m\c\8\m\4\y\5\m ]] 00:38:47.687 12:20:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:38:47.687 12:20:46 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:38:47.687 [2024-07-21 12:20:46.526884] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:47.687 [2024-07-21 12:20:46.527061] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176109 ] 00:38:47.944 [2024-07-21 12:20:46.676432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:47.944 [2024-07-21 12:20:46.731040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:48.460  Copying: 512/512 [B] (average 166 kBps) 00:38:48.460 00:38:48.460 ************************************ 00:38:48.460 END TEST dd_flags_misc_forced_aio 00:38:48.460 ************************************ 00:38:48.460 12:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ uvpfmknled3bks3e365yl87t93ak9gl2i450vqzixgnh49bzbddxmnxryuw8p4w0to3fypo6o3c7ryar3pf1vr3u3h3gm1ejdmedh2j4yx1p25f4kc4eyjgjv16286uedec8h37ez0smgfahz7szysv74cm3vkrrvx8kosbsauddfpfx3nghy7yrt4xdkxp6thb7c0gkcez6d20dacqcnjmntxlo3q7uvzjv6hm9nn5hvm1sj0aaqtpvvj9ge3ibc4ilq9gy6ovaf7hpzmzhqzpk1oay6di3anz3kxfp5j0n3iowee99io1fqaq3diengpnoriidc5rthqq1veu7k7ld8cqi51kfgjk1q2t8fgxn1t09vignb2kogl18wv1d5mp3fgnj1pcmm3ao82844zjenmtq4z06gzlq42av7gpttzmsvg2id32gg902eepxdrywb8062p2t86t7jadqnqclby25zwv6kci5ovxdmu15eavkg9owee6umc8m4y5m == \u\v\p\f\m\k\n\l\e\d\3\b\k\s\3\e\3\6\5\y\l\8\7\t\9\3\a\k\9\g\l\2\i\4\5\0\v\q\z\i\x\g\n\h\4\9\b\z\b\d\d\x\m\n\x\r\y\u\w\8\p\4\w\0\t\o\3\f\y\p\o\6\o\3\c\7\r\y\a\r\3\p\f\1\v\r\3\u\3\h\3\g\m\1\e\j\d\m\e\d\h\2\j\4\y\x\1\p\2\5\f\4\k\c\4\e\y\j\g\j\v\1\6\2\8\6\u\e\d\e\c\8\h\3\7\e\z\0\s\m\g\f\a\h\z\7\s\z\y\s\v\7\4\c\m\3\v\k\r\r\v\x\8\k\o\s\b\s\a\u\d\d\f\p\f\x\3\n\g\h\y\7\y\r\t\4\x\d\k\x\p\6\t\h\b\7\c\0\g\k\c\e\z\6\d\2\0\d\a\c\q\c\n\j\m\n\t\x\l\o\3\q\7\u\v\z\j\v\6\h\m\9\n\n\5\h\v\m\1\s\j\0\a\a\q\t\p\v\v\j\9\g\e\3\i\b\c\4\i\l\q\9\g\y\6\o\v\a\f\7\h\p\z\m\z\h\q\z\p\k\1\o\a\y\6\d\i\3\a\n\z\3\k\x\f\p\5\j\0\n\3\i\o\w\e\e\9\9\i\o\1\f\q\a\q\3\d\i\e\n\g\p\n\o\r\i\i\d\c\5\r\t\h\q\q\1\v\e\u\7\k\7\l\d\8\c\q\i\5\1\k\f\g\j\k\1\q\2\t\8\f\g\x\n\1\t\0\9\v\i\g\n\b\2\k\o\g\l\1\8\w\v\1\d\5\m\p\3\f\g\n\j\1\p\c\m\m\3\a\o\8\2\8\4\4\z\j\e\n\m\t\q\4\z\0\6\g\z\l\q\4\2\a\v\7\g\p\t\t\z\m\s\v\g\2\i\d\3\2\g\g\9\0\2\e\e\p\x\d\r\y\w\b\8\0\6\2\p\2\t\8\6\t\7\j\a\d\q\n\q\c\l\b\y\2\5\z\w\v\6\k\c\i\5\o\v\x\d\m\u\1\5\e\a\v\k\g\9\o\w\e\e\6\u\m\c\8\m\4\y\5\m ]] 00:38:48.460 00:38:48.460 real 0m4.961s 00:38:48.460 user 0m2.310s 00:38:48.460 sys 0m1.543s 00:38:48.460 12:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:48.460 12:20:47 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:38:48.460 12:20:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:38:48.460 12:20:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:38:48.460 12:20:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:38:48.460 00:38:48.460 real 0m23.655s 00:38:48.460 user 0m10.499s 00:38:48.460 sys 0m6.959s 00:38:48.460 12:20:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:48.460 12:20:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:38:48.460 ************************************ 00:38:48.460 END TEST spdk_dd_posix 00:38:48.460 ************************************ 00:38:48.460 12:20:47 spdk_dd -- 
dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:38:48.460 12:20:47 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:48.460 12:20:47 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:48.460 12:20:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:48.460 ************************************ 00:38:48.460 START TEST spdk_dd_malloc 00:38:48.460 ************************************ 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:38:48.460 * Looking for test storage... 00:38:48.460 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:38:48.460 ************************************ 00:38:48.460 START TEST dd_malloc_copy 00:38:48.460 ************************************ 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1121 -- # malloc_copy 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:38:48.460 12:20:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:38:48.718 [2024-07-21 12:20:47.352983] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:48.718 [2024-07-21 12:20:47.353227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176191 ] 00:38:48.718 { 00:38:48.718 "subsystems": [ 00:38:48.718 { 00:38:48.718 "subsystem": "bdev", 00:38:48.718 "config": [ 00:38:48.718 { 00:38:48.718 "params": { 00:38:48.718 "block_size": 512, 00:38:48.718 "num_blocks": 1048576, 00:38:48.718 "name": "malloc0" 00:38:48.718 }, 00:38:48.718 "method": "bdev_malloc_create" 00:38:48.718 }, 00:38:48.718 { 00:38:48.718 "params": { 00:38:48.718 "block_size": 512, 00:38:48.718 "num_blocks": 1048576, 00:38:48.718 "name": "malloc1" 00:38:48.718 }, 00:38:48.718 "method": "bdev_malloc_create" 00:38:48.718 }, 00:38:48.718 { 00:38:48.718 "method": "bdev_wait_for_examine" 00:38:48.718 } 00:38:48.718 ] 00:38:48.718 } 00:38:48.718 ] 00:38:48.718 } 00:38:48.718 [2024-07-21 12:20:47.518855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.718 [2024-07-21 12:20:47.575781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:52.403  Copying: 217/512 [MB] (217 MBps) Copying: 435/512 [MB] (217 MBps) Copying: 512/512 [MB] (average 217 MBps) 00:38:52.403 00:38:52.403 12:20:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:38:52.403 12:20:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:38:52.403 12:20:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:38:52.403 12:20:50 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:38:52.403 [2024-07-21 12:20:50.984866] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:52.403 [2024-07-21 12:20:50.985130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176241 ] 00:38:52.403 { 00:38:52.403 "subsystems": [ 00:38:52.403 { 00:38:52.403 "subsystem": "bdev", 00:38:52.403 "config": [ 00:38:52.403 { 00:38:52.403 "params": { 00:38:52.403 "block_size": 512, 00:38:52.403 "num_blocks": 1048576, 00:38:52.403 "name": "malloc0" 00:38:52.403 }, 00:38:52.403 "method": "bdev_malloc_create" 00:38:52.403 }, 00:38:52.403 { 00:38:52.403 "params": { 00:38:52.403 "block_size": 512, 00:38:52.403 "num_blocks": 1048576, 00:38:52.403 "name": "malloc1" 00:38:52.403 }, 00:38:52.403 "method": "bdev_malloc_create" 00:38:52.403 }, 00:38:52.403 { 00:38:52.403 "method": "bdev_wait_for_examine" 00:38:52.403 } 00:38:52.403 ] 00:38:52.403 } 00:38:52.403 ] 00:38:52.403 } 00:38:52.403 [2024-07-21 12:20:51.150696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:52.403 [2024-07-21 12:20:51.216268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.083  Copying: 214/512 [MB] (214 MBps) Copying: 426/512 [MB] (212 MBps) Copying: 512/512 [MB] (average 213 MBps) 00:38:56.083 00:38:56.083 00:38:56.083 real 0m7.344s 00:38:56.083 user 0m6.266s 00:38:56.083 sys 0m0.948s 00:38:56.083 12:20:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:56.083 12:20:54 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:38:56.083 ************************************ 00:38:56.083 END TEST dd_malloc_copy 00:38:56.083 ************************************ 00:38:56.083 00:38:56.083 real 0m7.481s 00:38:56.083 user 0m6.347s 00:38:56.083 sys 0m1.008s 00:38:56.083 12:20:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:56.083 12:20:54 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:38:56.083 ************************************ 00:38:56.083 END TEST spdk_dd_malloc 00:38:56.083 ************************************ 00:38:56.083 12:20:54 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:38:56.083 12:20:54 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:38:56.083 12:20:54 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:56.083 12:20:54 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:38:56.083 ************************************ 00:38:56.083 START TEST spdk_dd_bdev_to_bdev 00:38:56.083 ************************************ 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 00:38:56.083 * Looking for test storage... 
00:38:56.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 
-- # bdev0=Nvme0n1 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:38:56.083 12:20:54 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:38:56.083 [2024-07-21 12:20:54.874770] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:38:56.083 [2024-07-21 12:20:54.875508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176356 ] 00:38:56.340 [2024-07-21 12:20:55.041685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:56.340 [2024-07-21 12:20:55.098730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:56.855  Copying: 256/256 [MB] (average 1326 MBps) 00:38:56.855 00:38:56.855 12:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:38:56.855 12:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:38:56.856 12:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:38:56.856 12:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:38:56.856 12:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:38:56.856 12:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:38:56.856 12:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:56.856 12:20:55 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:56.856 ************************************ 00:38:56.856 START TEST dd_inflate_file 00:38:56.856 ************************************ 00:38:56.856 12:20:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:38:57.113 [2024-07-21 12:20:55.730401] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:57.113 [2024-07-21 12:20:55.730663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176373 ] 00:38:57.113 [2024-07-21 12:20:55.894946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.113 [2024-07-21 12:20:55.958342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:38:57.629  Copying: 64/64 [MB] (average 1207 MBps) 00:38:57.629 00:38:57.629 00:38:57.629 real 0m0.708s 00:38:57.629 user 0m0.312s 00:38:57.629 sys 0m0.257s 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:38:57.629 ************************************ 00:38:57.629 END TEST dd_inflate_file 00:38:57.629 ************************************ 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:38:57.629 ************************************ 00:38:57.629 START TEST dd_copy_to_out_bdev 00:38:57.629 ************************************ 00:38:57.629 12:20:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:38:57.887 [2024-07-21 12:20:56.498947] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:38:57.887 [2024-07-21 12:20:56.499179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176411 ] 00:38:57.887 { 00:38:57.887 "subsystems": [ 00:38:57.887 { 00:38:57.887 "subsystem": "bdev", 00:38:57.887 "config": [ 00:38:57.887 { 00:38:57.887 "params": { 00:38:57.887 "block_size": 4096, 00:38:57.887 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:38:57.887 "name": "aio1" 00:38:57.887 }, 00:38:57.887 "method": "bdev_aio_create" 00:38:57.887 }, 00:38:57.887 { 00:38:57.887 "params": { 00:38:57.887 "trtype": "pcie", 00:38:57.887 "traddr": "0000:00:10.0", 00:38:57.887 "name": "Nvme0" 00:38:57.887 }, 00:38:57.887 "method": "bdev_nvme_attach_controller" 00:38:57.887 }, 00:38:57.887 { 00:38:57.887 "method": "bdev_wait_for_examine" 00:38:57.887 } 00:38:57.887 ] 00:38:57.887 } 00:38:57.887 ] 00:38:57.887 } 00:38:57.887 [2024-07-21 12:20:56.662407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.887 [2024-07-21 12:20:56.718003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:00.088  Copying: 43/64 [MB] (43 MBps) Copying: 64/64 [MB] (average 43 MBps) 00:39:00.088 00:39:00.088 00:39:00.088 real 0m2.290s 00:39:00.088 user 0m1.884s 00:39:00.088 sys 0m0.298s 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:00.089 ************************************ 00:39:00.089 END TEST dd_copy_to_out_bdev 00:39:00.089 ************************************ 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:00.089 ************************************ 00:39:00.089 START TEST dd_offset_magic 00:39:00.089 ************************************ 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1121 -- # offset_magic 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:00.089 12:20:58 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- 
common/autotest_common.sh@10 -- # set +x 00:39:00.089 [2024-07-21 12:20:58.844852] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:00.089 [2024-07-21 12:20:58.845093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176464 ] 00:39:00.089 { 00:39:00.089 "subsystems": [ 00:39:00.089 { 00:39:00.089 "subsystem": "bdev", 00:39:00.089 "config": [ 00:39:00.089 { 00:39:00.089 "params": { 00:39:00.089 "block_size": 4096, 00:39:00.089 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:00.089 "name": "aio1" 00:39:00.089 }, 00:39:00.089 "method": "bdev_aio_create" 00:39:00.089 }, 00:39:00.089 { 00:39:00.089 "params": { 00:39:00.089 "trtype": "pcie", 00:39:00.089 "traddr": "0000:00:10.0", 00:39:00.089 "name": "Nvme0" 00:39:00.089 }, 00:39:00.089 "method": "bdev_nvme_attach_controller" 00:39:00.089 }, 00:39:00.089 { 00:39:00.089 "method": "bdev_wait_for_examine" 00:39:00.089 } 00:39:00.089 ] 00:39:00.089 } 00:39:00.089 ] 00:39:00.089 } 00:39:00.346 [2024-07-21 12:20:59.011010] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:00.346 [2024-07-21 12:20:59.098661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.540  Copying: 65/65 [MB] (average 117 MBps) 00:39:01.540 00:39:01.540 12:21:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:39:01.540 12:21:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:39:01.540 12:21:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:01.540 12:21:00 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:01.540 [2024-07-21 12:21:00.225222] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:39:01.540 [2024-07-21 12:21:00.225487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176490 ] 00:39:01.540 { 00:39:01.540 "subsystems": [ 00:39:01.540 { 00:39:01.540 "subsystem": "bdev", 00:39:01.540 "config": [ 00:39:01.540 { 00:39:01.540 "params": { 00:39:01.540 "block_size": 4096, 00:39:01.540 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:01.540 "name": "aio1" 00:39:01.540 }, 00:39:01.540 "method": "bdev_aio_create" 00:39:01.540 }, 00:39:01.540 { 00:39:01.540 "params": { 00:39:01.540 "trtype": "pcie", 00:39:01.540 "traddr": "0000:00:10.0", 00:39:01.540 "name": "Nvme0" 00:39:01.540 }, 00:39:01.540 "method": "bdev_nvme_attach_controller" 00:39:01.540 }, 00:39:01.540 { 00:39:01.540 "method": "bdev_wait_for_examine" 00:39:01.540 } 00:39:01.540 ] 00:39:01.540 } 00:39:01.540 ] 00:39:01.540 } 00:39:01.540 [2024-07-21 12:21:00.392442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.799 [2024-07-21 12:21:00.490253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:02.315  Copying: 1024/1024 [kB] (average 500 MBps) 00:39:02.316 00:39:02.316 12:21:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:39:02.316 12:21:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:39:02.316 12:21:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:39:02.316 12:21:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:39:02.316 12:21:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:39:02.316 12:21:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:02.316 12:21:01 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:02.316 [2024-07-21 12:21:01.106981] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:39:02.316 [2024-07-21 12:21:01.107702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176512 ] 00:39:02.316 { 00:39:02.316 "subsystems": [ 00:39:02.316 { 00:39:02.316 "subsystem": "bdev", 00:39:02.316 "config": [ 00:39:02.316 { 00:39:02.316 "params": { 00:39:02.316 "block_size": 4096, 00:39:02.316 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:02.316 "name": "aio1" 00:39:02.316 }, 00:39:02.316 "method": "bdev_aio_create" 00:39:02.316 }, 00:39:02.316 { 00:39:02.316 "params": { 00:39:02.316 "trtype": "pcie", 00:39:02.316 "traddr": "0000:00:10.0", 00:39:02.316 "name": "Nvme0" 00:39:02.316 }, 00:39:02.316 "method": "bdev_nvme_attach_controller" 00:39:02.316 }, 00:39:02.316 { 00:39:02.316 "method": "bdev_wait_for_examine" 00:39:02.316 } 00:39:02.316 ] 00:39:02.316 } 00:39:02.316 ] 00:39:02.316 } 00:39:02.574 [2024-07-21 12:21:01.275574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:02.574 [2024-07-21 12:21:01.349911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:03.731  Copying: 65/65 [MB] (average 169 MBps) 00:39:03.731 00:39:03.731 12:21:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:39:03.731 12:21:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:39:03.731 12:21:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:39:03.731 12:21:02 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:03.731 [2024-07-21 12:21:02.358744] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:39:03.731 [2024-07-21 12:21:02.358973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176533 ] 00:39:03.731 { 00:39:03.731 "subsystems": [ 00:39:03.731 { 00:39:03.731 "subsystem": "bdev", 00:39:03.731 "config": [ 00:39:03.731 { 00:39:03.731 "params": { 00:39:03.731 "block_size": 4096, 00:39:03.731 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:03.731 "name": "aio1" 00:39:03.731 }, 00:39:03.731 "method": "bdev_aio_create" 00:39:03.731 }, 00:39:03.731 { 00:39:03.731 "params": { 00:39:03.731 "trtype": "pcie", 00:39:03.731 "traddr": "0000:00:10.0", 00:39:03.731 "name": "Nvme0" 00:39:03.731 }, 00:39:03.731 "method": "bdev_nvme_attach_controller" 00:39:03.731 }, 00:39:03.731 { 00:39:03.731 "method": "bdev_wait_for_examine" 00:39:03.731 } 00:39:03.731 ] 00:39:03.731 } 00:39:03.731 ] 00:39:03.731 } 00:39:03.731 [2024-07-21 12:21:02.524476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.003 [2024-07-21 12:21:02.601102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:04.573  Copying: 1024/1024 [kB] (average 500 MBps) 00:39:04.573 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:39:04.573 00:39:04.573 real 0m4.404s 00:39:04.573 user 0m2.101s 00:39:04.573 sys 0m1.078s 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:39:04.573 ************************************ 00:39:04.573 END TEST dd_offset_magic 00:39:04.573 ************************************ 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:04.573 12:21:03 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:04.573 [2024-07-21 12:21:03.294298] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:39:04.573 [2024-07-21 12:21:03.294562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176566 ] 00:39:04.573 { 00:39:04.573 "subsystems": [ 00:39:04.573 { 00:39:04.573 "subsystem": "bdev", 00:39:04.573 "config": [ 00:39:04.573 { 00:39:04.573 "params": { 00:39:04.573 "block_size": 4096, 00:39:04.573 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:04.573 "name": "aio1" 00:39:04.573 }, 00:39:04.573 "method": "bdev_aio_create" 00:39:04.573 }, 00:39:04.573 { 00:39:04.573 "params": { 00:39:04.573 "trtype": "pcie", 00:39:04.573 "traddr": "0000:00:10.0", 00:39:04.573 "name": "Nvme0" 00:39:04.573 }, 00:39:04.573 "method": "bdev_nvme_attach_controller" 00:39:04.573 }, 00:39:04.573 { 00:39:04.573 "method": "bdev_wait_for_examine" 00:39:04.573 } 00:39:04.573 ] 00:39:04.573 } 00:39:04.573 ] 00:39:04.573 } 00:39:04.830 [2024-07-21 12:21:03.458209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.830 [2024-07-21 12:21:03.529887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.345  Copying: 5120/5120 [kB] (average 1250 MBps) 00:39:05.345 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:05.345 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:05.345 [2024-07-21 12:21:04.098389] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:39:05.345 [2024-07-21 12:21:04.099233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176590 ] 00:39:05.345 { 00:39:05.345 "subsystems": [ 00:39:05.345 { 00:39:05.345 "subsystem": "bdev", 00:39:05.345 "config": [ 00:39:05.345 { 00:39:05.345 "params": { 00:39:05.345 "block_size": 4096, 00:39:05.345 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:39:05.345 "name": "aio1" 00:39:05.345 }, 00:39:05.345 "method": "bdev_aio_create" 00:39:05.345 }, 00:39:05.345 { 00:39:05.345 "params": { 00:39:05.345 "trtype": "pcie", 00:39:05.345 "traddr": "0000:00:10.0", 00:39:05.345 "name": "Nvme0" 00:39:05.345 }, 00:39:05.345 "method": "bdev_nvme_attach_controller" 00:39:05.345 }, 00:39:05.345 { 00:39:05.345 "method": "bdev_wait_for_examine" 00:39:05.345 } 00:39:05.345 ] 00:39:05.345 } 00:39:05.345 ] 00:39:05.345 } 00:39:05.602 [2024-07-21 12:21:04.267879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.602 [2024-07-21 12:21:04.334714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:06.116  Copying: 5120/5120 [kB] (average 250 MBps) 00:39:06.116 00:39:06.116 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:39:06.116 00:39:06.116 real 0m10.195s 00:39:06.116 user 0m5.659s 00:39:06.116 sys 0m2.663s 00:39:06.116 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:06.116 12:21:04 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:06.116 ************************************ 00:39:06.116 END TEST spdk_dd_bdev_to_bdev 00:39:06.116 ************************************ 00:39:06.116 12:21:04 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:39:06.116 12:21:04 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:39:06.116 12:21:04 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:06.116 12:21:04 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:06.116 12:21:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:06.374 ************************************ 00:39:06.374 START TEST spdk_dd_sparse 00:39:06.374 ************************************ 00:39:06.374 12:21:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:39:06.374 * Looking for test storage... 
00:39:06.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:06.374 12:21:05 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:06.374 12:21:05 spdk_dd.spdk_dd_sparse -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.374 12:21:05 spdk_dd.spdk_dd_sparse -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.374 12:21:05 spdk_dd.spdk_dd_sparse -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.374 12:21:05 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- 
# lvol=dd_lvol 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:39:06.375 1+0 records in 00:39:06.375 1+0 records out 00:39:06.375 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00932106 s, 450 MB/s 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:39:06.375 1+0 records in 00:39:06.375 1+0 records out 00:39:06.375 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0107758 s, 389 MB/s 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:39:06.375 1+0 records in 00:39:06.375 1+0 records out 00:39:06.375 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00891793 s, 470 MB/s 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:06.375 ************************************ 00:39:06.375 START TEST dd_sparse_file_to_file 00:39:06.375 ************************************ 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1121 -- # file_to_file 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:39:06.375 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:06.375 { 00:39:06.375 "subsystems": [ 00:39:06.375 { 00:39:06.375 "subsystem": "bdev", 00:39:06.375 "config": [ 00:39:06.375 { 00:39:06.375 "params": { 00:39:06.375 "block_size": 4096, 00:39:06.375 "filename": "dd_sparse_aio_disk", 00:39:06.375 "name": "dd_aio" 00:39:06.375 }, 00:39:06.375 "method": "bdev_aio_create" 00:39:06.375 }, 00:39:06.375 { 00:39:06.375 "params": { 00:39:06.375 "lvs_name": "dd_lvstore", 00:39:06.375 "bdev_name": 
"dd_aio" 00:39:06.375 }, 00:39:06.375 "method": "bdev_lvol_create_lvstore" 00:39:06.375 }, 00:39:06.375 { 00:39:06.375 "method": "bdev_wait_for_examine" 00:39:06.375 } 00:39:06.375 ] 00:39:06.375 } 00:39:06.375 ] 00:39:06.375 } 00:39:06.375 [2024-07-21 12:21:05.174714] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:06.375 [2024-07-21 12:21:05.175601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176669 ] 00:39:06.633 [2024-07-21 12:21:05.342784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:06.633 [2024-07-21 12:21:05.396859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.149  Copying: 12/36 [MB] (average 1090 MBps) 00:39:07.149 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:39:07.149 00:39:07.149 real 0m0.790s 00:39:07.149 user 0m0.422s 00:39:07.149 sys 0m0.227s 00:39:07.149 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:07.150 ************************************ 00:39:07.150 END TEST dd_sparse_file_to_file 00:39:07.150 ************************************ 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:07.150 ************************************ 00:39:07.150 START TEST dd_sparse_file_to_bdev 00:39:07.150 ************************************ 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1121 -- # file_to_bdev 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:39:07.150 12:21:05 
spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:39:07.150 12:21:05 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:07.408 [2024-07-21 12:21:06.017002] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:07.408 [2024-07-21 12:21:06.017258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176716 ] 00:39:07.408 { 00:39:07.408 "subsystems": [ 00:39:07.408 { 00:39:07.408 "subsystem": "bdev", 00:39:07.408 "config": [ 00:39:07.408 { 00:39:07.408 "params": { 00:39:07.408 "block_size": 4096, 00:39:07.408 "filename": "dd_sparse_aio_disk", 00:39:07.408 "name": "dd_aio" 00:39:07.408 }, 00:39:07.408 "method": "bdev_aio_create" 00:39:07.408 }, 00:39:07.408 { 00:39:07.408 "params": { 00:39:07.408 "lvs_name": "dd_lvstore", 00:39:07.408 "lvol_name": "dd_lvol", 00:39:07.408 "size_in_mib": 36, 00:39:07.408 "thin_provision": true 00:39:07.408 }, 00:39:07.408 "method": "bdev_lvol_create" 00:39:07.408 }, 00:39:07.408 { 00:39:07.408 "method": "bdev_wait_for_examine" 00:39:07.408 } 00:39:07.408 ] 00:39:07.408 } 00:39:07.408 ] 00:39:07.408 } 00:39:07.408 [2024-07-21 12:21:06.183581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:07.408 [2024-07-21 12:21:06.250694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:07.924  Copying: 12/36 [MB] (average 500 MBps) 00:39:07.924 00:39:07.924 00:39:07.924 real 0m0.779s 00:39:07.924 user 0m0.425s 00:39:07.924 sys 0m0.245s 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:07.924 ************************************ 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:39:07.924 END TEST dd_sparse_file_to_bdev 00:39:07.924 ************************************ 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:07.924 ************************************ 00:39:07.924 START TEST dd_sparse_bdev_to_file 00:39:07.924 ************************************ 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1121 -- # bdev_to_file 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:39:07.924 
12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:39:07.924 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:39:08.182 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:39:08.182 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:39:08.182 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:39:08.182 12:21:06 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:08.182 [2024-07-21 12:21:06.841996] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:08.182 [2024-07-21 12:21:06.842209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176761 ] 00:39:08.182 { 00:39:08.182 "subsystems": [ 00:39:08.182 { 00:39:08.182 "subsystem": "bdev", 00:39:08.182 "config": [ 00:39:08.182 { 00:39:08.182 "params": { 00:39:08.182 "block_size": 4096, 00:39:08.182 "filename": "dd_sparse_aio_disk", 00:39:08.182 "name": "dd_aio" 00:39:08.182 }, 00:39:08.182 "method": "bdev_aio_create" 00:39:08.182 }, 00:39:08.182 { 00:39:08.182 "method": "bdev_wait_for_examine" 00:39:08.182 } 00:39:08.182 ] 00:39:08.182 } 00:39:08.182 ] 00:39:08.182 } 00:39:08.182 [2024-07-21 12:21:07.007896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.441 [2024-07-21 12:21:07.069785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.699  Copying: 12/36 [MB] (average 1090 MBps) 00:39:08.699 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:39:08.699 00:39:08.699 real 0m0.749s 00:39:08.699 user 0m0.444s 00:39:08.699 sys 0m0.202s 00:39:08.699 12:21:07 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:08.699 12:21:07 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:39:08.699 ************************************ 00:39:08.699 END TEST dd_sparse_bdev_to_file 00:39:08.699 ************************************ 00:39:08.959 12:21:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:39:08.959 12:21:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:39:08.959 12:21:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:39:08.959 12:21:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:39:08.959 12:21:07 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:39:08.959 00:39:08.959 real 0m2.616s 00:39:08.959 user 0m1.433s 00:39:08.959 sys 0m0.823s 00:39:08.959 12:21:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:08.959 12:21:07 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:39:08.959 ************************************ 00:39:08.959 END TEST spdk_dd_sparse 00:39:08.959 ************************************ 00:39:08.959 12:21:07 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:39:08.959 12:21:07 spdk_dd -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:08.959 12:21:07 spdk_dd -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:08.959 12:21:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:08.959 ************************************ 00:39:08.959 START TEST spdk_dd_negative 00:39:08.959 ************************************ 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:39:08.959 * Looking for test storage... 00:39:08.959 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:08.959 ************************************ 00:39:08.959 START TEST dd_invalid_arguments 00:39:08.959 ************************************ 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1121 -- # invalid_arguments 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@648 -- # local es=0 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:08.959 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:39:08.959 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:39:08.959 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:39:08.959 00:39:08.959 CPU options: 00:39:08.959 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:39:08.959 (like [0,1,10]) 00:39:08.959 --lcores lcore to CPU mapping list. The list is in the format: 00:39:08.959 [<,lcores[@CPUs]>...] 00:39:08.959 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:39:08.959 Within the group, '-' is used for range separator, 00:39:08.959 ',' is used for single number separator. 00:39:08.959 '( )' can be omitted for single element group, 00:39:08.959 '@' can be omitted if cpus and lcores have the same value 00:39:08.959 --disable-cpumask-locks Disable CPU core lock files. 00:39:08.959 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:39:08.959 pollers in the app support interrupt mode) 00:39:08.959 -p, --main-core main (primary) core for DPDK 00:39:08.959 00:39:08.960 Configuration options: 00:39:08.960 -c, --config, --json JSON config file 00:39:08.960 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:39:08.960 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:39:08.960 --wait-for-rpc wait for RPCs to initialize subsystems 00:39:08.960 --rpcs-allowed comma-separated list of permitted RPCS 00:39:08.960 --json-ignore-init-errors don't exit on invalid config entry 00:39:08.960 00:39:08.960 Memory options: 00:39:08.960 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:39:08.960 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:39:08.960 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:39:08.960 -R, --huge-unlink unlink huge files after initialization 00:39:08.960 -n, --mem-channels number of memory channels used for DPDK 00:39:08.960 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:39:08.960 --msg-mempool-size global message memory pool size in count (default: 262143) 00:39:08.960 --no-huge run without using hugepages 00:39:08.960 -i, --shm-id shared memory ID (optional) 00:39:08.960 -g, --single-file-segments force creating just one hugetlbfs file 00:39:08.960 00:39:08.960 PCI options: 00:39:08.960 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:39:08.960 -B, --pci-blocked pci addr to block (can be used more than once) 00:39:08.960 -u, --no-pci disable PCI access 00:39:08.960 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:39:08.960 00:39:08.960 Log options: 00:39:08.960 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:39:08.960 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:39:08.960 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, 00:39:08.960 bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, 00:39:08.960 blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:39:08.960 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:39:08.960 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:39:08.960 sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, 00:39:08.960 vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, 00:39:08.960 vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:39:08.960 virtio_vfio_user, vmd) 00:39:08.960 --silence-noticelog disable notice level logging to stderr 00:39:08.960 00:39:08.960 Trace options: 00:39:08.960 --num-trace-entries number of trace entries for each core, must be power of 2, 00:39:08.960 setting 0 to disable trace (default 32768) 00:39:08.960 Tracepoints vary in size and can use more than one trace entry. 00:39:08.960 -e, --tpoint-group [:] 00:39:08.960 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:39:08.960 [2024-07-21 12:21:07.819859] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:39:09.219 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:39:09.219 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:39:09.219 a tracepoint group. First tpoint inside a group can be enabled by 00:39:09.219 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:39:09.219 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:39:09.219 in /include/spdk_internal/trace_defs.h 00:39:09.219 00:39:09.219 Other options: 00:39:09.219 -h, --help show this usage 00:39:09.219 -v, --version print SPDK version 00:39:09.219 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:39:09.219 --env-context Opaque context for use of the env implementation 00:39:09.219 00:39:09.219 Application specific: 00:39:09.219 [--------- DD Options ---------] 00:39:09.219 --if Input file. Must specify either --if or --ib. 00:39:09.219 --ib Input bdev. Must specifier either --if or --ib 00:39:09.219 --of Output file. Must specify either --of or --ob. 00:39:09.219 --ob Output bdev. Must specify either --of or --ob. 00:39:09.219 --iflag Input file flags. 00:39:09.219 --oflag Output file flags. 00:39:09.219 --bs I/O unit size (default: 4096) 00:39:09.219 --qd Queue depth (default: 2) 00:39:09.219 --count I/O unit count. The number of I/O units to copy. (default: all) 00:39:09.219 --skip Skip this many I/O units at start of input. (default: 0) 00:39:09.219 --seek Skip this many I/O units at start of output. (default: 0) 00:39:09.219 --aio Force usage of AIO. (by default io_uring is used if available) 00:39:09.219 --sparse Enable hole skipping in input target 00:39:09.219 Available iflag and oflag values: 00:39:09.219 append - append mode 00:39:09.219 direct - use direct I/O for data 00:39:09.219 directory - fail unless a directory 00:39:09.219 dsync - use synchronized I/O for data 00:39:09.219 noatime - do not update access time 00:39:09.219 noctty - do not assign controlling terminal from file 00:39:09.219 nofollow - do not follow symlinks 00:39:09.219 nonblock - use non-blocking I/O 00:39:09.219 sync - use synchronized I/O for data and metadata 00:39:09.219 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@651 -- # es=2 00:39:09.219 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:09.219 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:09.219 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:09.219 00:39:09.219 real 0m0.100s 00:39:09.219 user 0m0.044s 00:39:09.219 sys 0m0.056s 00:39:09.219 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:09.219 12:21:07 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:39:09.220 ************************************ 00:39:09.220 END TEST dd_invalid_arguments 00:39:09.220 ************************************ 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:09.220 ************************************ 00:39:09.220 START TEST dd_double_input 00:39:09.220 ************************************ 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1121 -- # double_input 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@648 -- # local es=0 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:09.220 12:21:07 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:39:09.220 [2024-07-21 12:21:07.966252] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@651 -- # es=22 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:09.220 00:39:09.220 real 0m0.095s 00:39:09.220 user 0m0.057s 00:39:09.220 sys 0m0.039s 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:39:09.220 ************************************ 00:39:09.220 END TEST dd_double_input 00:39:09.220 ************************************ 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:09.220 ************************************ 00:39:09.220 START TEST dd_double_output 00:39:09.220 ************************************ 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1121 -- # double_output 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@648 -- # local es=0 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:09.220 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:39:09.479 [2024-07-21 12:21:08.114087] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@651 -- # es=22 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:09.479 00:39:09.479 real 0m0.103s 00:39:09.479 user 0m0.062s 00:39:09.479 sys 0m0.042s 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:39:09.479 ************************************ 00:39:09.479 END TEST dd_double_output 00:39:09.479 ************************************ 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:09.479 ************************************ 00:39:09.479 START TEST dd_no_input 00:39:09.479 ************************************ 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1121 -- # no_input 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@648 -- # local es=0 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:39:09.479 [2024-07-21 12:21:08.266618] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@651 -- # es=22 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:09.479 00:39:09.479 real 0m0.100s 00:39:09.479 user 0m0.052s 00:39:09.479 sys 0m0.048s 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:09.479 12:21:08 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:39:09.479 ************************************ 00:39:09.479 END TEST dd_no_input 00:39:09.479 ************************************ 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:09.738 ************************************ 00:39:09.738 START TEST dd_no_output 00:39:09.738 ************************************ 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1121 -- # no_output 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@648 -- # local es=0 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:39:09.738 [2024-07-21 12:21:08.423572] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:39:09.738 12:21:08 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@651 -- # es=22 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:09.738 00:39:09.738 real 0m0.101s 00:39:09.738 user 0m0.061s 00:39:09.738 sys 0m0.039s 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:39:09.738 ************************************ 00:39:09.738 END TEST dd_no_output 00:39:09.738 ************************************ 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:09.738 ************************************ 00:39:09.738 START TEST dd_wrong_blocksize 00:39:09.738 ************************************ 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1121 -- # wrong_blocksize 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.738 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.739 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.739 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.739 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.739 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.739 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.739 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:09.739 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:39:09.739 [2024-07-21 12:21:08.584076] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@651 -- # es=22 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:09.998 00:39:09.998 real 0m0.102s 00:39:09.998 user 0m0.051s 00:39:09.998 sys 0m0.053s 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:39:09.998 ************************************ 00:39:09.998 END TEST dd_wrong_blocksize 00:39:09.998 ************************************ 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:09.998 ************************************ 00:39:09.998 START TEST dd_smaller_blocksize 00:39:09.998 ************************************ 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1121 -- # smaller_blocksize 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@648 -- # local es=0 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:09.998 
12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:09.998 12:21:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:39:09.998 [2024-07-21 12:21:08.748300] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:09.998 [2024-07-21 12:21:08.748531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177008 ] 00:39:10.256 [2024-07-21 12:21:08.919450] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:10.256 [2024-07-21 12:21:08.993083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:10.515 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:39:10.515 [2024-07-21 12:21:09.171935] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:39:10.515 [2024-07-21 12:21:09.172046] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:10.515 [2024-07-21 12:21:09.295001] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:10.773 12:21:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@651 -- # es=244 00:39:10.773 12:21:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@660 -- # es=116 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # case "$es" in 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@668 -- # es=1 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:10.774 00:39:10.774 real 0m0.720s 00:39:10.774 user 0m0.330s 00:39:10.774 sys 0m0.287s 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:39:10.774 ************************************ 00:39:10.774 END TEST dd_smaller_blocksize 00:39:10.774 ************************************ 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:10.774 ************************************ 00:39:10.774 START TEST dd_invalid_count 00:39:10.774 ************************************ 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1121 -- # invalid_count 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
--count=-9 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@648 -- # local es=0 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:39:10.774 [2024-07-21 12:21:09.500351] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@651 -- # es=22 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:10.774 00:39:10.774 real 0m0.081s 00:39:10.774 user 0m0.039s 00:39:10.774 sys 0m0.043s 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:39:10.774 ************************************ 00:39:10.774 END TEST dd_invalid_count 00:39:10.774 ************************************ 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:10.774 ************************************ 00:39:10.774 START TEST dd_invalid_oflag 00:39:10.774 ************************************ 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1121 -- # invalid_oflag 00:39:10.774 12:21:09 
spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@648 -- # local es=0 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:10.774 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:39:11.033 [2024-07-21 12:21:09.643493] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@651 -- # es=22 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:11.033 00:39:11.033 real 0m0.097s 00:39:11.033 user 0m0.071s 00:39:11.033 sys 0m0.026s 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:39:11.033 ************************************ 00:39:11.033 END TEST dd_invalid_oflag 00:39:11.033 ************************************ 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:11.033 ************************************ 00:39:11.033 START TEST dd_invalid_iflag 00:39:11.033 ************************************ 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1121 -- # invalid_iflag 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- 
dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@648 -- # local es=0 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:39:11.033 [2024-07-21 12:21:09.797718] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@651 -- # es=22 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:11.033 00:39:11.033 real 0m0.107s 00:39:11.033 user 0m0.056s 00:39:11.033 sys 0m0.048s 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:39:11.033 ************************************ 00:39:11.033 END TEST dd_invalid_iflag 00:39:11.033 ************************************ 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:11.033 12:21:09 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:11.291 ************************************ 00:39:11.291 START TEST dd_unknown_flag 00:39:11.291 ************************************ 00:39:11.291 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1121 -- # unknown_flag 00:39:11.291 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@648 -- # local es=0 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:11.292 12:21:09 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:39:11.292 [2024-07-21 12:21:09.954316] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:39:11.292 [2024-07-21 12:21:09.954747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177133 ] 00:39:11.292 [2024-07-21 12:21:10.103945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:11.550 [2024-07-21 12:21:10.160120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:11.550 [2024-07-21 12:21:10.240109] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:39:11.550 [2024-07-21 12:21:10.240529] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:11.550  Copying: 0/0 [B] (average 0 Bps)[2024-07-21 12:21:10.240827] app.c:1040:app_stop: *NOTICE*: spdk_app_stop called twice 00:39:11.550 [2024-07-21 12:21:10.356704] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:11.808 00:39:11.808 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@651 -- # es=234 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@660 -- # es=106 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # case "$es" in 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@668 -- # es=1 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:11.808 00:39:11.808 real 0m0.610s 00:39:11.808 user 0m0.281s 00:39:11.808 sys 0m0.187s 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:39:11.808 ************************************ 00:39:11.808 END TEST dd_unknown_flag 00:39:11.808 ************************************ 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:11.808 ************************************ 00:39:11.808 START TEST dd_invalid_json 00:39:11.808 ************************************ 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1121 -- # invalid_json 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@648 -- # local es=0 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@95 -- # : 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:11.808 12:21:10 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.808 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.809 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.809 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:39:11.809 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:11.809 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:39:11.809 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:39:11.809 [2024-07-21 12:21:10.618624] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:11.809 [2024-07-21 12:21:10.619032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177159 ] 00:39:12.067 [2024-07-21 12:21:10.768303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.067 [2024-07-21 12:21:10.834213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:12.067 [2024-07-21 12:21:10.834623] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:39:12.067 [2024-07-21 12:21:10.834787] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:12.067 [2024-07-21 12:21:10.834933] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:12.067 [2024-07-21 12:21:10.835095] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:39:12.325 ************************************ 00:39:12.325 END TEST dd_invalid_json 00:39:12.325 ************************************ 00:39:12.325 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@651 -- # es=234 00:39:12.325 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:39:12.325 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@660 -- # es=106 00:39:12.325 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # case "$es" in 00:39:12.325 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@668 -- # es=1 00:39:12.325 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:39:12.325 00:39:12.325 real 0m0.375s 00:39:12.325 user 0m0.170s 00:39:12.325 sys 0m0.105s 00:39:12.325 12:21:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:12.325 12:21:10 
spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:39:12.325 ************************************ 00:39:12.325 END TEST spdk_dd_negative 00:39:12.325 ************************************ 00:39:12.325 00:39:12.325 real 0m3.322s 00:39:12.325 user 0m1.672s 00:39:12.325 sys 0m1.278s 00:39:12.325 12:21:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:12.325 12:21:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:39:12.325 ************************************ 00:39:12.325 END TEST spdk_dd 00:39:12.325 ************************************ 00:39:12.325 00:39:12.325 real 1m8.683s 00:39:12.325 user 0m38.583s 00:39:12.325 sys 0m19.368s 00:39:12.325 12:21:11 spdk_dd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:12.325 12:21:11 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:39:12.325 12:21:11 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:39:12.325 12:21:11 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:39:12.325 12:21:11 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:12.325 12:21:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:12.325 12:21:11 -- common/autotest_common.sh@10 -- # set +x 00:39:12.325 ************************************ 00:39:12.325 START TEST blockdev_nvme 00:39:12.325 ************************************ 00:39:12.325 12:21:11 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:39:12.325 * Looking for test storage... 00:39:12.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:39:12.325 12:21:11 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@687 -- # 
'[' -n '' ']' 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=177256 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:12.325 12:21:11 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 177256 00:39:12.325 12:21:11 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 177256 ']' 00:39:12.325 12:21:11 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:12.325 12:21:11 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:12.325 12:21:11 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:12.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:12.326 12:21:11 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:12.326 12:21:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:12.326 12:21:11 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:39:12.583 [2024-07-21 12:21:11.243547] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:12.583 [2024-07-21 12:21:11.244199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177256 ] 00:39:12.583 [2024-07-21 12:21:11.411460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:12.841 [2024-07-21 12:21:11.479630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:13.407 12:21:12 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:13.407 12:21:12 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:39:13.407 12:21:12 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:39:13.407 12:21:12 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:39:13.407 12:21:12 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:39:13.407 12:21:12 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:39:13.407 12:21:12 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.666 12:21:12 blockdev_nvme -- 
bdev/blockdev.sh@740 -- # cat 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:13.666 12:21:12 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "0c243316-f4c4-4815-b8fd-7ae3a573631a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0c243316-f4c4-4815-b8fd-7ae3a573631a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:39:13.666 12:21:12 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:39:13.926 12:21:12 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:39:13.926 12:21:12 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:39:13.926 12:21:12 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:39:13.926 12:21:12 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 177256 
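Note: the JSON dump above is the bdev_get_bdevs view of the QEMU NVMe namespace attached through bdev_nvme_attach_controller; blockdev.sh filters it with jq to pick an unclaimed bdev as the hello-world target. A minimal sketch of the same query, assuming a running spdk_tgt on the default /var/tmp/spdk.sock and the repository root as the working directory:

    # List unclaimed bdev names; blockdev.sh keeps the first entry as $hello_world_bdev (Nvme0n1 here)
    ./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name' | head -n 1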
00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 177256 ']' 00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 177256 00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 177256 00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 177256' 00:39:13.926 killing process with pid 177256 00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 177256 00:39:13.926 12:21:12 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 177256 00:39:14.185 12:21:13 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:14.185 12:21:13 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:39:14.185 12:21:13 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:39:14.185 12:21:13 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:14.185 12:21:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:14.443 ************************************ 00:39:14.443 START TEST bdev_hello_world 00:39:14.443 ************************************ 00:39:14.443 12:21:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:39:14.443 [2024-07-21 12:21:13.122840] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:14.443 [2024-07-21 12:21:13.123086] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177322 ] 00:39:14.443 [2024-07-21 12:21:13.293219] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:14.700 [2024-07-21 12:21:13.384030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:14.957 [2024-07-21 12:21:13.633956] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:14.957 [2024-07-21 12:21:13.634029] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:39:14.957 [2024-07-21 12:21:13.634083] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:14.957 [2024-07-21 12:21:13.636369] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:14.957 [2024-07-21 12:21:13.636937] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:14.957 [2024-07-21 12:21:13.637024] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:14.957 [2024-07-21 12:21:13.637321] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
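Note: the "Hello World!" round-trip above comes from the hello_bdev example, which opens the bdev named by -b, writes a string, reads it back, and stops the app. A sketch of running it by hand against an equivalent hand-written config; the harness generates its bdev.json with scripts/gen_nvme.sh, and this stand-in assumes the same single PCIe controller at 0000:00:10.0:

    # Minimal bdev subsystem config equivalent to the one used in this run
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF
    # Run the example against the Nvme0n1 namespace bdev
    ./build/examples/hello_bdev --json /tmp/bdev.json -b Nvme0n1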
00:39:14.957 00:39:14.957 [2024-07-21 12:21:13.637380] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:15.215 00:39:15.215 real 0m0.878s 00:39:15.215 user 0m0.502s 00:39:15.215 sys 0m0.277s 00:39:15.215 12:21:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:15.215 12:21:13 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:15.215 ************************************ 00:39:15.215 END TEST bdev_hello_world 00:39:15.215 ************************************ 00:39:15.215 12:21:13 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:39:15.215 12:21:13 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:15.215 12:21:13 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:15.215 12:21:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:15.215 ************************************ 00:39:15.215 START TEST bdev_bounds 00:39:15.215 ************************************ 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=177361 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 177361' 00:39:15.215 Process bdevio pid: 177361 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 177361 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 177361 ']' 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:15.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:15.215 12:21:13 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:15.215 [2024-07-21 12:21:14.056486] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:39:15.215 [2024-07-21 12:21:14.056756] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177361 ] 00:39:15.473 [2024-07-21 12:21:14.234370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:15.473 [2024-07-21 12:21:14.319357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:15.473 [2024-07-21 12:21:14.319518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:15.473 [2024-07-21 12:21:14.319524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.405 12:21:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:16.405 12:21:14 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:39:16.405 12:21:14 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:39:16.405 I/O targets: 00:39:16.405 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:39:16.405 00:39:16.405 00:39:16.405 CUnit - A unit testing framework for C - Version 2.1-3 00:39:16.405 http://cunit.sourceforge.net/ 00:39:16.405 00:39:16.405 00:39:16.405 Suite: bdevio tests on: Nvme0n1 00:39:16.405 Test: blockdev write read block ...passed 00:39:16.405 Test: blockdev write zeroes read block ...passed 00:39:16.405 Test: blockdev write zeroes read no split ...passed 00:39:16.405 Test: blockdev write zeroes read split ...passed 00:39:16.406 Test: blockdev write zeroes read split partial ...passed 00:39:16.406 Test: blockdev reset ...[2024-07-21 12:21:15.054310] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:39:16.406 [2024-07-21 12:21:15.056565] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
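Note: bdev_bounds drives the bdevio application, which runs the CUnit suite shown here (writes, zeroes, splits, reset, compare-and-write, NVMe passthru) across the bdev's boundaries. A sketch of the same invocation the harness uses; -w appears to keep bdevio idle until the companion script triggers the suite over RPC, and -s 0 mirrors blockdev.sh's PRE_RESERVED_MEM=0:

    # Start bdevio in wait mode, then kick off the suite via its helper script
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    # (the harness waits for the RPC socket to come up before this step)
    ./test/bdev/bdevio/tests.py perform_tests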
00:39:16.406 passed 00:39:16.406 Test: blockdev write read 8 blocks ...passed 00:39:16.406 Test: blockdev write read size > 128k ...passed 00:39:16.406 Test: blockdev write read invalid size ...passed 00:39:16.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:16.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:16.406 Test: blockdev write read max offset ...passed 00:39:16.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:16.406 Test: blockdev writev readv 8 blocks ...passed 00:39:16.406 Test: blockdev writev readv 30 x 1block ...passed 00:39:16.406 Test: blockdev writev readv block ...passed 00:39:16.406 Test: blockdev writev readv size > 128k ...passed 00:39:16.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:16.406 Test: blockdev comparev and writev ...[2024-07-21 12:21:15.063893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x9300d000 len:0x1000 00:39:16.406 [2024-07-21 12:21:15.064030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:39:16.406 passed 00:39:16.406 Test: blockdev nvme passthru rw ...passed 00:39:16.406 Test: blockdev nvme passthru vendor specific ...[2024-07-21 12:21:15.064866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:39:16.406 [2024-07-21 12:21:15.064950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:39:16.406 passed 00:39:16.406 Test: blockdev nvme admin passthru ...passed 00:39:16.406 Test: blockdev copy ...passed 00:39:16.406 00:39:16.406 Run Summary: Type Total Ran Passed Failed Inactive 00:39:16.406 suites 1 1 n/a 0 0 00:39:16.406 tests 23 23 23 0 0 00:39:16.406 asserts 152 152 152 0 n/a 00:39:16.406 00:39:16.406 Elapsed time = 0.079 seconds 00:39:16.406 0 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 177361 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 177361 ']' 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 177361 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 177361 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:16.406 killing process with pid 177361 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 177361' 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 177361 00:39:16.406 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 177361 00:39:16.664 12:21:15 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:39:16.664 00:39:16.664 real 0m1.388s 00:39:16.664 user 0m3.289s 00:39:16.664 sys 0m0.393s 00:39:16.664 ************************************ 00:39:16.664 END TEST bdev_bounds 00:39:16.664 
************************************ 00:39:16.664 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:16.664 12:21:15 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:16.664 12:21:15 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:39:16.664 12:21:15 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:39:16.664 12:21:15 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:16.664 12:21:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:16.664 ************************************ 00:39:16.664 START TEST bdev_nbd 00:39:16.664 ************************************ 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1') 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1') 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=177407 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 177407 /var/tmp/spdk-nbd.sock 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 177407 ']' 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:39:16.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:16.664 12:21:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:16.664 [2024-07-21 12:21:15.498265] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:16.664 [2024-07-21 12:21:15.498975] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:16.922 [2024-07-21 12:21:15.646139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:16.922 [2024-07-21 12:21:15.713973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:17.855 
12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:17.855 1+0 records in 00:39:17.855 1+0 records out 00:39:17.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000820948 s, 5.0 MB/s 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:39:17.855 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:18.112 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:39:18.112 { 00:39:18.112 "nbd_device": "/dev/nbd0", 00:39:18.112 "bdev_name": "Nvme0n1" 00:39:18.112 } 00:39:18.112 ]' 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:39:18.113 { 00:39:18.113 "nbd_device": "/dev/nbd0", 00:39:18.113 "bdev_name": "Nvme0n1" 00:39:18.113 } 00:39:18.113 ]' 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:18.113 12:21:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:18.370 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:18.627 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:18.627 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:18.627 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1') 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:39:18.884 /dev/nbd0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 
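Note: waitfornbd, traced above, is how the harness decides that /dev/nbd0 is actually usable after nbd_start_disk: it polls /proc/partitions for the device name and then requires a 1-block direct read to return data. A rough reconstruction from the xtrace; details such as the polling delay and exact retry behaviour are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait for the kernel to publish the device
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                  # delay is an assumption
        done
        # Require a 1-block O_DIRECT read to come back non-empty
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }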
00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:18.884 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:18.885 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:18.885 1+0 records in 00:39:18.885 1+0 records out 00:39:18.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465328 s, 8.8 MB/s 00:39:19.141 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:19.141 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:19.141 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:19.141 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:19.141 12:21:17 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:19.142 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:19.142 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:39:19.142 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:19.142 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:19.142 12:21:17 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:19.400 { 00:39:19.400 "nbd_device": "/dev/nbd0", 00:39:19.400 "bdev_name": "Nvme0n1" 00:39:19.400 } 00:39:19.400 ]' 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:19.400 { 00:39:19.400 "nbd_device": "/dev/nbd0", 00:39:19.400 "bdev_name": "Nvme0n1" 00:39:19.400 } 00:39:19.400 ]' 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:39:19.400 256+0 records in 00:39:19.400 256+0 records out 00:39:19.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00419392 s, 250 MB/s 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:19.400 256+0 records in 00:39:19.400 256+0 records out 00:39:19.400 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0561721 s, 18.7 MB/s 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:19.400 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:19.659 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:39:19.916 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:39:20.173 malloc_lvol_verify 00:39:20.173 12:21:18 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:39:20.431 eda7c22f-44ae-4ae4-9aca-80ffdea6aef3 00:39:20.431 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:39:20.688 c1ef6913-4cd8-480f-98aa-579fa62bd203 00:39:20.688 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:39:20.946 /dev/nbd0 00:39:20.946 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:39:20.946 mke2fs 1.46.5 (30-Dec-2021) 00:39:20.946 00:39:20.946 Filesystem too small for a journal 00:39:20.946 Discarding device blocks: 0/1024 done 00:39:20.946 Creating filesystem with 1024 4k blocks and 1024 inodes 00:39:20.946 00:39:20.946 Allocating group tables: 0/1 done 00:39:20.946 Writing inode tables: 0/1 done 00:39:20.946 Writing superblocks and filesystem accounting information: 0/1 done 00:39:20.946 00:39:20.946 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:39:20.946 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
00:39:20.946 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:20.946 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:20.946 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:20.946 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:20.946 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:20.946 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:21.204 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 177407 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 177407 ']' 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 177407 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 177407 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:21.205 killing process with pid 177407 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 177407' 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 177407 00:39:21.205 12:21:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 177407 00:39:21.463 12:21:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:39:21.463 00:39:21.463 real 0m4.851s 00:39:21.463 user 0m7.415s 00:39:21.463 sys 0m1.091s 00:39:21.463 12:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:21.463 12:21:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:21.463 ************************************ 00:39:21.463 END TEST bdev_nbd 00:39:21.463 ************************************ 00:39:21.721 12:21:20 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:39:21.721 12:21:20 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:39:21.721 skipping fio tests on NVMe due to multi-ns failures. 
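Note: the two nbd sub-checks that just finished exercise the kernel NBD export end-to-end: first 1 MiB of random data is pushed through /dev/nbd0 with direct I/O and compared back, then a logical volume is built on a malloc bdev, exported, and formatted with ext4. A condensed sketch of both phases against the bdev_svc RPC socket used here:

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Phase 1: assumes Nvme0n1 is already exported as /dev/nbd0 via nbd_start_disk;
    # write 1 MiB of random data through the export and verify it round-trips
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0
    rm /tmp/nbdrandtest

    # Phase 2: lvol on a malloc bdev, exported and formatted
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB bdev, 512-byte blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB volume inside the store
    $RPC nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd0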
00:39:21.721 12:21:20 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:39:21.721 12:21:20 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:21.721 12:21:20 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:21.721 12:21:20 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:39:21.721 12:21:20 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:21.721 12:21:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:21.721 ************************************ 00:39:21.721 START TEST bdev_verify 00:39:21.721 ************************************ 00:39:21.721 12:21:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:21.721 [2024-07-21 12:21:20.400727] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:21.721 [2024-07-21 12:21:20.400939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177594 ] 00:39:21.721 [2024-07-21 12:21:20.557406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:21.979 [2024-07-21 12:21:20.632217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:21.979 [2024-07-21 12:21:20.632231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:22.236 Running I/O for 5 seconds... 
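Note: bdev_verify here, and bdev_verify_big_io and bdev_write_zeroes further down, all reuse the bdevperf example with different workloads: 5 s of verified 4 KiB I/O at queue depth 128 on two cores (-m 0x3), the same with 64 KiB I/O, and a 1 s single-core write_zeroes pass. The three command lines as blockdev.sh issues them, with repo-relative paths:

    BDEVPERF=./build/examples/bdevperf
    CONF=./test/bdev/bdev.json
    $BDEVPERF --json $CONF -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify
    $BDEVPERF --json $CONF -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
    $BDEVPERF --json $CONF -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes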
00:39:27.538 00:39:27.538 Latency(us) 00:39:27.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:27.538 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:27.538 Verification LBA range: start 0x0 length 0xa0000 00:39:27.538 Nvme0n1 : 5.01 9007.99 35.19 0.00 0.00 14146.18 912.29 16681.89 00:39:27.538 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:27.538 Verification LBA range: start 0xa0000 length 0xa0000 00:39:27.538 Nvme0n1 : 5.01 9055.47 35.37 0.00 0.00 14070.91 305.34 15371.17 00:39:27.538 =================================================================================================================== 00:39:27.538 Total : 18063.46 70.56 0.00 0.00 14108.46 305.34 16681.89 00:39:27.538 00:39:27.538 real 0m6.006s 00:39:27.538 user 0m11.205s 00:39:27.538 sys 0m0.267s 00:39:27.538 12:21:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:27.538 ************************************ 00:39:27.538 12:21:26 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:39:27.538 END TEST bdev_verify 00:39:27.538 ************************************ 00:39:27.538 12:21:26 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:27.538 12:21:26 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:39:27.538 12:21:26 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:27.538 12:21:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:27.814 ************************************ 00:39:27.814 START TEST bdev_verify_big_io 00:39:27.814 ************************************ 00:39:27.814 12:21:26 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:27.814 [2024-07-21 12:21:26.461063] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:27.814 [2024-07-21 12:21:26.461317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177685 ] 00:39:27.814 [2024-07-21 12:21:26.631800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:28.073 [2024-07-21 12:21:26.710508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:28.073 [2024-07-21 12:21:26.710523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.331 Running I/O for 5 seconds... 
00:39:33.606 00:39:33.606 Latency(us) 00:39:33.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:33.606 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:39:33.606 Verification LBA range: start 0x0 length 0xa000 00:39:33.606 Nvme0n1 : 5.05 1191.65 74.48 0.00 0.00 105649.07 214.11 131548.63 00:39:33.606 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:39:33.606 Verification LBA range: start 0xa000 length 0xa000 00:39:33.606 Nvme0n1 : 5.06 936.74 58.55 0.00 0.00 133638.08 238.31 140127.88 00:39:33.606 =================================================================================================================== 00:39:33.606 Total : 2128.40 133.02 0.00 0.00 117977.56 214.11 140127.88 00:39:33.863 00:39:33.863 real 0m6.251s 00:39:33.863 user 0m11.642s 00:39:33.863 sys 0m0.291s 00:39:33.863 12:21:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:33.863 12:21:32 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:39:33.863 ************************************ 00:39:33.863 END TEST bdev_verify_big_io 00:39:33.863 ************************************ 00:39:33.863 12:21:32 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:33.863 12:21:32 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:39:33.863 12:21:32 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:33.863 12:21:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:33.863 ************************************ 00:39:33.863 START TEST bdev_write_zeroes 00:39:33.863 ************************************ 00:39:33.863 12:21:32 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:34.121 [2024-07-21 12:21:32.766356] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:34.121 [2024-07-21 12:21:32.766608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177774 ] 00:39:34.121 [2024-07-21 12:21:32.934955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.378 [2024-07-21 12:21:33.015796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:34.634 Running I/O for 1 seconds... 
00:39:35.564 00:39:35.564 Latency(us) 00:39:35.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:35.564 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:39:35.564 Nvme0n1 : 1.00 65670.75 256.53 0.00 0.00 1943.92 573.44 13941.29 00:39:35.564 =================================================================================================================== 00:39:35.564 Total : 65670.75 256.53 0.00 0.00 1943.92 573.44 13941.29 00:39:35.822 00:39:35.822 real 0m1.865s 00:39:35.822 user 0m1.504s 00:39:35.822 sys 0m0.261s 00:39:35.822 12:21:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:35.822 ************************************ 00:39:35.822 END TEST bdev_write_zeroes 00:39:35.822 ************************************ 00:39:35.822 12:21:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:39:35.822 12:21:34 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:35.822 12:21:34 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:39:35.822 12:21:34 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:35.822 12:21:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:35.822 ************************************ 00:39:35.822 START TEST bdev_json_nonenclosed 00:39:35.822 ************************************ 00:39:35.822 12:21:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:35.822 [2024-07-21 12:21:34.660547] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:35.822 [2024-07-21 12:21:34.660756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177826 ] 00:39:36.080 [2024-07-21 12:21:34.810359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.080 [2024-07-21 12:21:34.878496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.080 [2024-07-21 12:21:34.878662] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:39:36.080 [2024-07-21 12:21:34.878706] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:36.080 [2024-07-21 12:21:34.878735] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:36.337 00:39:36.337 real 0m0.372s 00:39:36.337 user 0m0.154s 00:39:36.337 sys 0m0.118s 00:39:36.337 12:21:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:36.337 ************************************ 00:39:36.337 END TEST bdev_json_nonenclosed 00:39:36.337 ************************************ 00:39:36.337 12:21:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:39:36.337 12:21:35 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:36.337 12:21:35 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:39:36.337 12:21:35 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:36.337 12:21:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:36.337 ************************************ 00:39:36.337 START TEST bdev_json_nonarray 00:39:36.337 ************************************ 00:39:36.337 12:21:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:39:36.337 [2024-07-21 12:21:35.080578] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:36.337 [2024-07-21 12:21:35.080764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177850 ] 00:39:36.594 [2024-07-21 12:21:35.231657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.594 [2024-07-21 12:21:35.299516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.594 [2024-07-21 12:21:35.299674] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:39:36.594 [2024-07-21 12:21:35.299726] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:39:36.594 [2024-07-21 12:21:35.299748] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:39:36.594 00:39:36.594 real 0m0.381s 00:39:36.595 user 0m0.165s 00:39:36.595 sys 0m0.116s 00:39:36.595 12:21:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:36.595 12:21:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:39:36.595 ************************************ 00:39:36.595 END TEST bdev_json_nonarray 00:39:36.595 ************************************ 00:39:36.595 12:21:35 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:39:36.595 12:21:35 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:39:36.595 12:21:35 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:39:36.595 12:21:35 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:39:36.595 12:21:35 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:39:36.595 12:21:35 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:39:36.595 12:21:35 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:36.852 12:21:35 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:39:36.852 12:21:35 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:39:36.852 12:21:35 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:39:36.852 12:21:35 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:39:36.852 00:39:36.852 real 0m24.401s 00:39:36.852 user 0m38.243s 00:39:36.852 sys 0m3.533s 00:39:36.852 12:21:35 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:36.852 12:21:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:39:36.852 ************************************ 00:39:36.852 END TEST blockdev_nvme 00:39:36.852 ************************************ 00:39:36.852 12:21:35 -- spdk/autotest.sh@213 -- # uname -s 00:39:36.852 12:21:35 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:39:36.852 12:21:35 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:39:36.852 12:21:35 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:36.852 12:21:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:36.852 12:21:35 -- common/autotest_common.sh@10 -- # set +x 00:39:36.852 ************************************ 00:39:36.852 START TEST blockdev_nvme_gpt 00:39:36.852 ************************************ 00:39:36.852 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:39:36.852 * Looking for test storage... 
00:39:36.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=177928 00:39:36.852 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:39:36.853 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:39:36.853 12:21:35 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 177928 00:39:36.853 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 177928 ']' 00:39:36.853 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:36.853 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:36.853 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:36.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:36.853 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:36.853 12:21:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:36.853 [2024-07-21 12:21:35.654871] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:36.853 [2024-07-21 12:21:35.655090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid177928 ] 00:39:37.111 [2024-07-21 12:21:35.802439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.111 [2024-07-21 12:21:35.872156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:38.045 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:38.045 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:39:38.045 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:39:38.045 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:39:38.045 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:39:38.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:39:38.045 Waiting for block devices as requested 00:39:38.045 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:39:38.303 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:39:38.303 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:39:38.303 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:39:38.304 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:39:38.304 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:39:38.304 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:39:38.304 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:39:38.304 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:38.304 12:21:36 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme0/nvme0n1') 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme0n1 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme0n1 -ms print 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:39:38.304 BYT; 00:39:38.304 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:39:38.304 BYT; 00:39:38.304 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ 
\d\i\s\k\ \l\a\b\e\l* ]] 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme0n1 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme0n1 ]] 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:39:38.304 12:21:36 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:39:38.562 12:21:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:39:38.562 12:21:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:39:38.562 12:21:37 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:39:38.562 12:21:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:39:38.562 12:21:37 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:39:38.562 12:21:37 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:39:39.496 The operation has completed successfully. 
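The GPT preparation that setup_gpt_conf is driving here can be read out of the xtrace noise as a short command sequence. A minimal stand-alone sketch, assuming a scratch namespace at /dev/nvme0n1 whose contents may be destroyed; the partition-type GUIDs are the SPDK_GPT_PART_TYPE_GUID / SPDK_GPT_PART_TYPE_GUID_OLD values grepped out of module/bdev/gpt/gpt.h above, and the unique partition GUIDs are the fixed values the test uses:

# Create a GPT label with two halves; the partition names are what the test expects
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# Retag partition 1 with the current SPDK GPT partition-type GUID and a fixed unique GUID
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
# Retag partition 2 with the legacy SPDK GPT partition-type GUID (SPDK_GPT_PART_TYPE_GUID_OLD)
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1

After the device is rebound to uio_pci_generic for SPDK (the setup.sh call that follows), the GPT bdev module exposes the two partitions as Nvme0n1p1 and Nvme0n1p2, which is what the bdev_get_bdevs output below reports.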
00:39:39.496 12:21:38 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:39:40.870 The operation has completed successfully. 00:39:40.870 12:21:39 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:39:40.870 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:39:41.127 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:42.058 [] 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:39:42.058 
12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:39:42.058 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:39:42.058 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:39:42.317 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:39:42.317 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:39:42.317 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:39:42.317 12:21:40 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 177928 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 177928 ']' 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 177928 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 177928 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo 
']' 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 177928' 00:39:42.317 killing process with pid 177928 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 177928 00:39:42.317 12:21:40 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 177928 00:39:42.883 12:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:42.883 12:21:41 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:39:42.883 12:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:39:42.883 12:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:42.883 12:21:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:42.883 ************************************ 00:39:42.883 START TEST bdev_hello_world 00:39:42.883 ************************************ 00:39:42.883 12:21:41 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:39:42.883 [2024-07-21 12:21:41.587504] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:42.883 [2024-07-21 12:21:41.587659] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178339 ] 00:39:42.883 [2024-07-21 12:21:41.736225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:43.141 [2024-07-21 12:21:41.817697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:43.398 [2024-07-21 12:21:42.054772] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:39:43.398 [2024-07-21 12:21:42.054845] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:39:43.398 [2024-07-21 12:21:42.054896] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:39:43.398 [2024-07-21 12:21:42.057133] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:39:43.398 [2024-07-21 12:21:42.057688] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:39:43.398 [2024-07-21 12:21:42.057744] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:39:43.398 [2024-07-21 12:21:42.057965] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
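Stripped of the harness wrapping, the hello-world pass above is just the packaged example run against the first GPT partition. A sketch of the bare invocation, reusing the same config file and bdev name as this run:

# Run SPDK's hello_bdev example against the Nvme0n1p1 GPT partition bdev,
# pointing it at the bdev JSON config the test harness generated for this run
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b Nvme0n1p1
# On success the app logs the write completion, then
# "Read string from bdev : Hello World!" followed by "Stopping app"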
00:39:43.398 00:39:43.398 [2024-07-21 12:21:42.058015] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:39:43.656 ************************************ 00:39:43.656 END TEST bdev_hello_world 00:39:43.656 ************************************ 00:39:43.656 00:39:43.656 real 0m0.820s 00:39:43.656 user 0m0.504s 00:39:43.656 sys 0m0.217s 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:39:43.656 12:21:42 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:39:43.656 12:21:42 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:39:43.656 12:21:42 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:43.656 12:21:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:43.656 ************************************ 00:39:43.656 START TEST bdev_bounds 00:39:43.656 ************************************ 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=178368 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 178368' 00:39:43.656 Process bdevio pid: 178368 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 178368 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 178368 ']' 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:43.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:43.656 12:21:42 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:43.656 [2024-07-21 12:21:42.480656] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:39:43.656 [2024-07-21 12:21:42.480878] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178368 ] 00:39:43.914 [2024-07-21 12:21:42.654867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:43.914 [2024-07-21 12:21:42.724359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:43.914 [2024-07-21 12:21:42.724508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:39:43.914 [2024-07-21 12:21:42.724528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:44.479 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:44.479 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:39:44.479 12:21:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:39:44.737 I/O targets: 00:39:44.737 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:39:44.737 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:39:44.737 00:39:44.737 00:39:44.737 CUnit - A unit testing framework for C - Version 2.1-3 00:39:44.737 http://cunit.sourceforge.net/ 00:39:44.737 00:39:44.737 00:39:44.737 Suite: bdevio tests on: Nvme0n1p2 00:39:44.737 Test: blockdev write read block ...passed 00:39:44.737 Test: blockdev write zeroes read block ...passed 00:39:44.737 Test: blockdev write zeroes read no split ...passed 00:39:44.737 Test: blockdev write zeroes read split ...passed 00:39:44.737 Test: blockdev write zeroes read split partial ...passed 00:39:44.737 Test: blockdev reset ...[2024-07-21 12:21:43.463416] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:39:44.737 [2024-07-21 12:21:43.466039] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:44.737 passed 00:39:44.738 Test: blockdev write read 8 blocks ...passed 00:39:44.738 Test: blockdev write read size > 128k ...passed 00:39:44.738 Test: blockdev write read invalid size ...passed 00:39:44.738 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:44.738 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:44.738 Test: blockdev write read max offset ...passed 00:39:44.738 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:44.738 Test: blockdev writev readv 8 blocks ...passed 00:39:44.738 Test: blockdev writev readv 30 x 1block ...passed 00:39:44.738 Test: blockdev writev readv block ...passed 00:39:44.738 Test: blockdev writev readv size > 128k ...passed 00:39:44.738 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:44.738 Test: blockdev comparev and writev ...[2024-07-21 12:21:43.472957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0xa4a0b000 len:0x1000 00:39:44.738 [2024-07-21 12:21:43.473091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:39:44.738 passed 00:39:44.738 Test: blockdev nvme passthru rw ...passed 00:39:44.738 Test: blockdev nvme passthru vendor specific ...passed 00:39:44.738 Test: blockdev nvme admin passthru ...passed 00:39:44.738 Test: blockdev copy ...passed 00:39:44.738 Suite: bdevio tests on: Nvme0n1p1 00:39:44.738 Test: blockdev write read block ...passed 00:39:44.738 Test: blockdev write zeroes read block ...passed 00:39:44.738 Test: blockdev write zeroes read no split ...passed 00:39:44.738 Test: blockdev write zeroes read split ...passed 00:39:44.738 Test: blockdev write zeroes read split partial ...passed 00:39:44.738 Test: blockdev reset ...[2024-07-21 12:21:43.487559] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:39:44.738 passed 00:39:44.738 Test: blockdev write read 8 blocks ...[2024-07-21 12:21:43.489641] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:39:44.738 passed 00:39:44.738 Test: blockdev write read size > 128k ...passed 00:39:44.738 Test: blockdev write read invalid size ...passed 00:39:44.738 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:44.738 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:44.738 Test: blockdev write read max offset ...passed 00:39:44.738 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:44.738 Test: blockdev writev readv 8 blocks ...passed 00:39:44.738 Test: blockdev writev readv 30 x 1block ...passed 00:39:44.738 Test: blockdev writev readv block ...passed 00:39:44.738 Test: blockdev writev readv size > 128k ...passed 00:39:44.738 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:44.738 Test: blockdev comparev and writev ...[2024-07-21 12:21:43.496215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0xa4a0d000 len:0x1000 00:39:44.738 [2024-07-21 12:21:43.496308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:39:44.738 passed 00:39:44.738 Test: blockdev nvme passthru rw ...passed 00:39:44.738 Test: blockdev nvme passthru vendor specific ...passed 00:39:44.738 Test: blockdev nvme admin passthru ...passed 00:39:44.738 Test: blockdev copy ...passed 00:39:44.738 00:39:44.738 Run Summary: Type Total Ran Passed Failed Inactive 00:39:44.738 suites 2 2 n/a 0 0 00:39:44.738 tests 46 46 46 0 0 00:39:44.738 asserts 284 284 284 0 n/a 00:39:44.738 00:39:44.738 Elapsed time = 0.112 seconds 00:39:44.738 0 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 178368 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 178368 ']' 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 178368 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 178368 00:39:44.738 killing process with pid 178368 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 178368' 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 178368 00:39:44.738 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 178368 00:39:44.996 ************************************ 00:39:44.996 END TEST bdev_bounds 00:39:44.996 ************************************ 00:39:44.996 12:21:43 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:39:44.996 00:39:44.996 real 0m1.415s 00:39:44.996 user 0m3.385s 00:39:44.996 sys 0m0.316s 00:39:44.996 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:44.996 12:21:43 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:39:45.254 12:21:43 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:39:45.254 12:21:43 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:39:45.254 12:21:43 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:45.254 12:21:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:45.254 ************************************ 00:39:45.254 START TEST bdev_nbd 00:39:45.254 ************************************ 00:39:45.254 12:21:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:39:45.254 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:39:45.254 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:39:45.254 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:45.254 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:39:45.254 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=2 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=2 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=178425 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 178425 /var/tmp/spdk-nbd.sock 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 178425 ']' 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:39:45.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:39:45.255 12:21:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:45.255 [2024-07-21 12:21:43.950768] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:39:45.255 [2024-07-21 12:21:43.951003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:45.255 [2024-07-21 12:21:44.098152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.513 [2024-07-21 12:21:44.170561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:46.078 12:21:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:39:46.078 12:21:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:39:46.078 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:39:46.079 12:21:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:46.337 12:21:45 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:46.337 1+0 records in 00:39:46.337 1+0 records out 00:39:46.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583254 s, 7.0 MB/s 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:39:46.337 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:46.595 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:46.595 1+0 records in 00:39:46.595 1+0 records out 00:39:46.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596183 s, 6.9 MB/s 00:39:46.596 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:46.596 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:46.596 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:46.596 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:46.596 12:21:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:46.596 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:39:46.596 12:21:45 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:39:46.596 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:46.854 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:39:46.854 { 00:39:46.854 "nbd_device": "/dev/nbd0", 00:39:46.854 "bdev_name": "Nvme0n1p1" 00:39:46.854 }, 00:39:46.854 { 00:39:46.854 "nbd_device": "/dev/nbd1", 00:39:46.854 "bdev_name": "Nvme0n1p2" 00:39:46.854 } 00:39:46.854 ]' 00:39:46.854 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:39:46.854 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:39:46.854 { 00:39:46.854 "nbd_device": "/dev/nbd0", 00:39:46.854 "bdev_name": "Nvme0n1p1" 00:39:46.854 }, 00:39:46.854 { 00:39:46.854 "nbd_device": "/dev/nbd1", 00:39:46.854 "bdev_name": "Nvme0n1p2" 00:39:46.854 } 00:39:46.854 ]' 00:39:46.854 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:39:47.113 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:47.113 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:47.113 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:47.113 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:47.113 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:47.113 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:47.113 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:47.371 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:47.372 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:47.372 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:47.372 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:47.372 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:47.372 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:47.372 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:47.372 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:47.372 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:47.372 12:21:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:47.372 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:47.630 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1p1 /dev/nbd0 00:39:47.888 /dev/nbd0 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:39:47.888 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:47.889 1+0 records in 00:39:47.889 1+0 records out 00:39:47.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505761 s, 8.1 MB/s 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:47.889 12:21:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:39:48.147 /dev/nbd1 00:39:48.147 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:39:48.147 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:39:48.147 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:39:48.147 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:39:48.147 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:39:48.147 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:39:48.147 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:39:48.406 
1+0 records in 00:39:48.406 1+0 records out 00:39:48.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081335 s, 5.0 MB/s 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:39:48.406 { 00:39:48.406 "nbd_device": "/dev/nbd0", 00:39:48.406 "bdev_name": "Nvme0n1p1" 00:39:48.406 }, 00:39:48.406 { 00:39:48.406 "nbd_device": "/dev/nbd1", 00:39:48.406 "bdev_name": "Nvme0n1p2" 00:39:48.406 } 00:39:48.406 ]' 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:39:48.406 { 00:39:48.406 "nbd_device": "/dev/nbd0", 00:39:48.406 "bdev_name": "Nvme0n1p1" 00:39:48.406 }, 00:39:48.406 { 00:39:48.406 "nbd_device": "/dev/nbd1", 00:39:48.406 "bdev_name": "Nvme0n1p2" 00:39:48.406 } 00:39:48.406 ]' 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:39:48.406 /dev/nbd1' 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:39:48.406 /dev/nbd1' 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:39:48.406 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:39:48.665 256+0 records in 00:39:48.665 256+0 records out 00:39:48.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00900025 s, 117 MB/s 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:39:48.665 256+0 records in 00:39:48.665 256+0 records out 00:39:48.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0746543 s, 14.0 MB/s 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:39:48.665 256+0 records in 00:39:48.665 256+0 records out 00:39:48.665 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0800993 s, 13.1 MB/s 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:48.665 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:48.666 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:48.666 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:48.666 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:48.666 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:48.666 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:48.924 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:48.924 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:48.924 12:21:47 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:48.924 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:48.924 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:48.924 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:48.924 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:48.924 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:48.924 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:48.924 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:39:49.181 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:39:49.181 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:39:49.181 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:39:49.181 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:49.182 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:49.182 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:39:49.182 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:49.182 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:49.182 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:39:49.182 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:49.182 12:21:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local 
nbd_list 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:39:49.452 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:39:49.713 malloc_lvol_verify 00:39:49.713 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:39:49.970 6848e502-2994-4053-9602-d9f7743c67d4 00:39:49.970 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:39:50.226 8986c36b-8e3d-4a50-94eb-4e9361fee9e4 00:39:50.227 12:21:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:39:50.483 /dev/nbd0 00:39:50.483 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:39:50.483 mke2fs 1.46.5 (30-Dec-2021) 00:39:50.483 00:39:50.483 Filesystem too small for a journal 00:39:50.483 Discarding device blocks: 0/1024 done 00:39:50.483 Creating filesystem with 1024 4k blocks and 1024 inodes 00:39:50.483 00:39:50.483 Allocating group tables: 0/1 done 00:39:50.483 Writing inode tables: 0/1 done 00:39:50.483 Writing superblocks and filesystem accounting information: 0/1 done 00:39:50.483 00:39:50.483 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:39:50.483 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:39:50.483 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:39:50.483 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:39:50.483 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:39:50.483 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:39:50.483 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:39:50.483 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 178425 00:39:50.739 12:21:49 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@946 -- # '[' -z 178425 ']' 00:39:50.740 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 178425 00:39:50.740 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:39:50.740 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:39:50.740 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 178425 00:39:50.740 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:39:50.740 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:39:50.740 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 178425' 00:39:50.740 killing process with pid 178425 00:39:50.740 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 178425 00:39:50.740 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 178425 00:39:50.997 12:21:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:39:50.997 00:39:50.997 real 0m5.886s 00:39:50.997 user 0m8.891s 00:39:50.997 sys 0m1.527s 00:39:50.997 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:50.997 12:21:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:39:50.997 ************************************ 00:39:50.997 END TEST bdev_nbd 00:39:50.997 ************************************ 00:39:50.997 12:21:49 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:39:50.997 12:21:49 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:39:50.997 12:21:49 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:39:50.997 skipping fio tests on NVMe due to multi-ns failures. 00:39:50.997 12:21:49 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:39:50.997 12:21:49 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:39:50.997 12:21:49 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:50.997 12:21:49 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:39:50.997 12:21:49 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:50.997 12:21:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:50.997 ************************************ 00:39:50.997 START TEST bdev_verify 00:39:50.997 ************************************ 00:39:50.997 12:21:49 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:39:51.255 [2024-07-21 12:21:49.894520] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
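The bdev_verify stage above hands the same generated bdev.json to the bdevperf example app in verify (write, read back, compare) mode. A minimal sketch of that invocation, annotated only with what the surrounding log itself confirms (queue depth 128, 4096-byte I/Os, a 5-second run, core mask 0x3 matching the two reactors reported on cores 0 and 1 just below); the -C flag is passed through exactly as the harness does, without restating its semantics here:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$BDEVPERF" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3   # two jobs per GPT partition, one per core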
00:39:51.255 [2024-07-21 12:21:49.894787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178667 ] 00:39:51.255 [2024-07-21 12:21:50.065100] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:51.512 [2024-07-21 12:21:50.142337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:51.512 [2024-07-21 12:21:50.142349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:51.770 Running I/O for 5 seconds... 00:39:57.056 00:39:57.056 Latency(us) 00:39:57.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:57.056 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:57.056 Verification LBA range: start 0x0 length 0x4ff80 00:39:57.056 Nvme0n1p1 : 5.02 4088.72 15.97 0.00 0.00 31197.62 3336.38 25499.46 00:39:57.056 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:57.056 Verification LBA range: start 0x4ff80 length 0x4ff80 00:39:57.056 Nvme0n1p1 : 5.02 4024.83 15.72 0.00 0.00 31721.56 6791.91 31695.59 00:39:57.056 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:39:57.056 Verification LBA range: start 0x0 length 0x4ff7f 00:39:57.056 Nvme0n1p2 : 5.03 4096.45 16.00 0.00 0.00 31132.31 1042.62 27405.96 00:39:57.056 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:39:57.056 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:39:57.056 Nvme0n1p2 : 5.03 4021.68 15.71 0.00 0.00 31662.57 5421.61 32172.22 00:39:57.056 =================================================================================================================== 00:39:57.056 Total : 16231.68 63.41 0.00 0.00 31426.21 1042.62 32172.22 00:39:57.314 00:39:57.314 real 0m6.128s 00:39:57.314 user 0m11.408s 00:39:57.314 sys 0m0.281s 00:39:57.314 12:21:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:39:57.314 12:21:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:39:57.314 ************************************ 00:39:57.314 END TEST bdev_verify 00:39:57.314 ************************************ 00:39:57.314 12:21:56 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:57.314 12:21:56 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:39:57.314 12:21:56 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:39:57.314 12:21:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:39:57.314 ************************************ 00:39:57.314 START TEST bdev_verify_big_io 00:39:57.314 ************************************ 00:39:57.314 12:21:56 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:39:57.314 [2024-07-21 12:21:56.071366] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
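A quick sanity check on the verify table above: the Total row is simply the sum of the four per-job IOPS figures and can be reproduced from the logged numbers:

    echo 4088.72 4024.83 4096.45 4021.68 | awk '{printf "%.2f\n", $1+$2+$3+$4}'
    # -> 16231.68, matching the Total row

bdev_verify_big_io then repeats the same verify workload, only with 64 KiB I/Os (-o 65536) in place of the 4 KiB used above.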
00:39:57.314 [2024-07-21 12:21:56.071720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178758 ] 00:39:57.572 [2024-07-21 12:21:56.227366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:39:57.572 [2024-07-21 12:21:56.299150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:39:57.572 [2024-07-21 12:21:56.299166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.830 Running I/O for 5 seconds... 00:40:03.090 00:40:03.090 Latency(us) 00:40:03.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:03.090 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:03.090 Verification LBA range: start 0x0 length 0x4ff8 00:40:03.090 Nvme0n1p1 : 5.11 726.97 45.44 0.00 0.00 173584.52 3842.79 170631.91 00:40:03.090 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:03.090 Verification LBA range: start 0x4ff8 length 0x4ff8 00:40:03.090 Nvme0n1p1 : 5.15 472.20 29.51 0.00 0.00 265803.80 16801.05 293601.28 00:40:03.090 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:40:03.090 Verification LBA range: start 0x0 length 0x4ff7 00:40:03.090 Nvme0n1p2 : 5.13 735.72 45.98 0.00 0.00 168983.28 1161.77 158239.65 00:40:03.090 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:40:03.090 Verification LBA range: start 0x4ff7 length 0x4ff7 00:40:03.090 Nvme0n1p2 : 5.18 493.89 30.87 0.00 0.00 246965.57 189.91 215434.71 00:40:03.090 =================================================================================================================== 00:40:03.090 Total : 2428.77 151.80 0.00 0.00 205221.61 189.91 293601.28 00:40:03.656 00:40:03.656 real 0m6.300s 00:40:03.656 user 0m11.815s 00:40:03.656 sys 0m0.269s 00:40:03.656 12:22:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:03.656 12:22:02 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:40:03.656 ************************************ 00:40:03.656 END TEST bdev_verify_big_io 00:40:03.656 ************************************ 00:40:03.656 12:22:02 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:03.656 12:22:02 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:40:03.656 12:22:02 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:03.656 12:22:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:03.656 ************************************ 00:40:03.656 START TEST bdev_write_zeroes 00:40:03.656 ************************************ 00:40:03.656 12:22:02 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:03.656 [2024-07-21 12:22:02.431987] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
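bdev_write_zeroes swaps the workload once more: a one-second write_zeroes pass on a single core (-c 0x1 in the EAL parameters that follow), still at queue depth 128 with 4 KiB commands. Roughly, relative to the verify runs only the workload, duration and core count change:

    "$BDEVPERF" --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1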
00:40:03.656 [2024-07-21 12:22:02.432231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178853 ] 00:40:03.914 [2024-07-21 12:22:02.599620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:03.914 [2024-07-21 12:22:02.666652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:04.172 Running I/O for 1 seconds... 00:40:05.111 00:40:05.111 Latency(us) 00:40:05.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:05.111 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:05.111 Nvme0n1p1 : 1.01 26091.62 101.92 0.00 0.00 4896.02 2353.34 15609.48 00:40:05.111 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:40:05.111 Nvme0n1p2 : 1.01 25996.18 101.55 0.00 0.00 4906.71 2293.76 15728.64 00:40:05.111 =================================================================================================================== 00:40:05.111 Total : 52087.80 203.47 0.00 0.00 4901.36 2293.76 15728.64 00:40:05.368 00:40:05.368 real 0m1.856s 00:40:05.368 user 0m1.516s 00:40:05.368 sys 0m0.240s 00:40:05.368 12:22:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:05.368 12:22:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:40:05.368 ************************************ 00:40:05.368 END TEST bdev_write_zeroes 00:40:05.368 ************************************ 00:40:05.625 12:22:04 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:05.625 12:22:04 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:40:05.625 12:22:04 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:05.625 12:22:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:05.625 ************************************ 00:40:05.625 START TEST bdev_json_nonenclosed 00:40:05.625 ************************************ 00:40:05.625 12:22:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:05.625 [2024-07-21 12:22:04.335697] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:40:05.625 [2024-07-21 12:22:04.335914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178898 ] 00:40:05.882 [2024-07-21 12:22:04.499839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:05.882 [2024-07-21 12:22:04.573878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.882 [2024-07-21 12:22:04.574333] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:40:05.882 [2024-07-21 12:22:04.574492] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:05.883 [2024-07-21 12:22:04.574560] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:05.883 00:40:05.883 real 0m0.404s 00:40:05.883 user 0m0.178s 00:40:05.883 sys 0m0.123s 00:40:05.883 12:22:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:05.883 12:22:04 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:40:05.883 ************************************ 00:40:05.883 END TEST bdev_json_nonenclosed 00:40:05.883 ************************************ 00:40:05.883 12:22:04 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:05.883 12:22:04 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:40:05.883 12:22:04 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:05.883 12:22:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:05.883 ************************************ 00:40:05.883 START TEST bdev_json_nonarray 00:40:05.883 ************************************ 00:40:05.883 12:22:04 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:40:06.140 [2024-07-21 12:22:04.790614] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:40:06.140 [2024-07-21 12:22:04.790862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178921 ] 00:40:06.140 [2024-07-21 12:22:04.956460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.399 [2024-07-21 12:22:05.033650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.399 [2024-07-21 12:22:05.034081] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
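bdev_json_nonenclosed and bdev_json_nonarray above are negative tests: each feeds bdevperf a deliberately malformed --json configuration and relies on the app rejecting it (the "not enclosed in {}" and "'subsystems' should be an array" errors, followed by the non-zero spdk_app_stop). The exact contents of nonenclosed.json and nonarray.json are not shown in the log; the sketch below only assumes the general shape of a valid SPDK JSON config for contrast:

    # valid shape:    { "subsystems": [ { "subsystem": "bdev", "config": [ ... ] } ] }
    # nonenclosed.json (assumed): outer { } missing         -> "not enclosed in {}"
    # nonarray.json    (assumed): "subsystems" not an array -> "'subsystems' should be an array"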
00:40:06.399 [2024-07-21 12:22:05.034248] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:40:06.399 [2024-07-21 12:22:05.034370] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:40:06.399 00:40:06.399 real 0m0.406s 00:40:06.399 user 0m0.186s 00:40:06.399 sys 0m0.120s 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:40:06.399 ************************************ 00:40:06.399 END TEST bdev_json_nonarray 00:40:06.399 ************************************ 00:40:06.399 12:22:05 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:40:06.399 12:22:05 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:40:06.399 12:22:05 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:40:06.399 12:22:05 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:06.399 12:22:05 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:06.399 12:22:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:06.399 ************************************ 00:40:06.399 START TEST bdev_gpt_uuid 00:40:06.399 ************************************ 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=178954 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 178954 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 178954 ']' 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:06.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:06.399 12:22:05 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:06.399 [2024-07-21 12:22:05.251028] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
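bdev_gpt_uuid runs against a standalone spdk_tgt (pid 178954 in this run) rather than bdevperf: it loads the saved bdev.json, waits for bdev examination to finish, then looks each GPT partition up by its unique partition GUID to confirm the aliases and partition names survive. The trace drives this through the rpc_cmd helper; a rough equivalent with scripts/rpc.py against the default /var/tmp/spdk.sock socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    "$rpc" bdev_wait_for_examine
    "$rpc" bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030   # Nvme0n1p1 / SPDK_TEST_first
    "$rpc" bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df   # Nvme0n1p2 / SPDK_TEST_second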
00:40:06.399 [2024-07-21 12:22:05.251217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid178954 ] 00:40:06.657 [2024-07-21 12:22:05.394690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.657 [2024-07-21 12:22:05.462760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:07.589 Some configs were skipped because the RPC state that can call them passed over. 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:40:07.589 { 00:40:07.589 "name": "Nvme0n1p1", 00:40:07.589 "aliases": [ 00:40:07.589 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:40:07.589 ], 00:40:07.589 "product_name": "GPT Disk", 00:40:07.589 "block_size": 4096, 00:40:07.589 "num_blocks": 655104, 00:40:07.589 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:40:07.589 "assigned_rate_limits": { 00:40:07.589 "rw_ios_per_sec": 0, 00:40:07.589 "rw_mbytes_per_sec": 0, 00:40:07.589 "r_mbytes_per_sec": 0, 00:40:07.589 "w_mbytes_per_sec": 0 00:40:07.589 }, 00:40:07.589 "claimed": false, 00:40:07.589 "zoned": false, 00:40:07.589 "supported_io_types": { 00:40:07.589 "read": true, 00:40:07.589 "write": true, 00:40:07.589 "unmap": true, 00:40:07.589 "write_zeroes": true, 00:40:07.589 "flush": true, 00:40:07.589 "reset": true, 00:40:07.589 "compare": true, 00:40:07.589 "compare_and_write": false, 00:40:07.589 "abort": true, 00:40:07.589 "nvme_admin": false, 00:40:07.589 "nvme_io": false 00:40:07.589 }, 00:40:07.589 "driver_specific": { 00:40:07.589 "gpt": { 00:40:07.589 "base_bdev": "Nvme0n1", 00:40:07.589 "offset_blocks": 256, 00:40:07.589 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:40:07.589 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:40:07.589 "partition_name": "SPDK_TEST_first" 00:40:07.589 } 00:40:07.589 } 
00:40:07.589 } 00:40:07.589 ]' 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:40:07.589 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:40:07.847 { 00:40:07.847 "name": "Nvme0n1p2", 00:40:07.847 "aliases": [ 00:40:07.847 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:40:07.847 ], 00:40:07.847 "product_name": "GPT Disk", 00:40:07.847 "block_size": 4096, 00:40:07.847 "num_blocks": 655103, 00:40:07.847 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:40:07.847 "assigned_rate_limits": { 00:40:07.847 "rw_ios_per_sec": 0, 00:40:07.847 "rw_mbytes_per_sec": 0, 00:40:07.847 "r_mbytes_per_sec": 0, 00:40:07.847 "w_mbytes_per_sec": 0 00:40:07.847 }, 00:40:07.847 "claimed": false, 00:40:07.847 "zoned": false, 00:40:07.847 "supported_io_types": { 00:40:07.847 "read": true, 00:40:07.847 "write": true, 00:40:07.847 "unmap": true, 00:40:07.847 "write_zeroes": true, 00:40:07.847 "flush": true, 00:40:07.847 "reset": true, 00:40:07.847 "compare": true, 00:40:07.847 "compare_and_write": false, 00:40:07.847 "abort": true, 00:40:07.847 "nvme_admin": false, 00:40:07.847 "nvme_io": false 00:40:07.847 }, 00:40:07.847 "driver_specific": { 00:40:07.847 "gpt": { 00:40:07.847 "base_bdev": "Nvme0n1", 00:40:07.847 "offset_blocks": 655360, 00:40:07.847 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:40:07.847 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:40:07.847 "partition_name": "SPDK_TEST_second" 00:40:07.847 } 00:40:07.847 } 00:40:07.847 } 00:40:07.847 ]' 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == 
\a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 178954 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 178954 ']' 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 178954 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 178954 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 178954' 00:40:07.847 killing process with pid 178954 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 178954 00:40:07.847 12:22:06 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 178954 00:40:08.415 00:40:08.415 real 0m2.055s 00:40:08.415 user 0m2.313s 00:40:08.415 sys 0m0.465s 00:40:08.415 12:22:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:08.415 12:22:07 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:40:08.415 ************************************ 00:40:08.415 END TEST bdev_gpt_uuid 00:40:08.415 ************************************ 00:40:08.674 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:40:08.674 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:40:08.674 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:40:08.674 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:40:08.674 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:40:08.674 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:40:08.674 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:40:08.674 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:40:08.674 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:08.933 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:08.933 Waiting for block devices as requested 00:40:08.933 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:08.933 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:40:08.933 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:40:08.933 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:40:08.933 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:40:08.933 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:40:08.933 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:40:08.933 12:22:07 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:40:08.933 00:40:08.933 real 
0m32.273s 00:40:08.933 user 0m47.323s 00:40:08.933 sys 0m5.741s 00:40:08.933 12:22:07 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:08.933 12:22:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:40:08.933 ************************************ 00:40:08.933 END TEST blockdev_nvme_gpt 00:40:08.933 ************************************ 00:40:09.192 12:22:07 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:40:09.192 12:22:07 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:09.192 12:22:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:09.192 12:22:07 -- common/autotest_common.sh@10 -- # set +x 00:40:09.192 ************************************ 00:40:09.192 START TEST nvme 00:40:09.192 ************************************ 00:40:09.192 12:22:07 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:40:09.192 * Looking for test storage... 00:40:09.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:09.192 12:22:07 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:09.450 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:09.709 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:10.645 12:22:09 nvme -- nvme/nvme.sh@79 -- # uname 00:40:10.645 12:22:09 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:40:10.645 12:22:09 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:40:10.645 12:22:09 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:40:10.645 12:22:09 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:40:10.645 12:22:09 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:40:10.645 12:22:09 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:40:10.645 12:22:09 nvme -- common/autotest_common.sh@1067 -- # stubpid=179345 00:40:10.645 12:22:09 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:40:10.645 Waiting for stub to ready for secondary processes... 00:40:10.645 12:22:09 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 00:40:10.645 12:22:09 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:40:10.645 12:22:09 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/179345 ]] 00:40:10.645 12:22:09 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:40:10.904 [2024-07-21 12:22:09.516697] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
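The nvme suite that starts here does not go through bdevperf at all; nvme.sh first re-runs setup.sh to rebind the controller, then launches the stub app as a primary DPDK process (-s 4096 -i 0 -m 0xE) so the individual tests can attach as secondary processes, and polls for the /var/run/spdk_stub0 marker before proceeding. Roughly, simplified from the wait loop the trace shows (stub pid 179345 in this run):

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    while [ ! -e /var/run/spdk_stub0 ]; do
        [ -e "/proc/$stubpid" ] || exit 1   # give up if the stub died
        sleep 1s
    done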
00:40:10.905 [2024-07-21 12:22:09.516977] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:40:11.842 12:22:10 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:40:11.842 12:22:10 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/179345 ]] 00:40:11.842 12:22:10 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:40:12.409 [2024-07-21 12:22:11.169788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:12.409 [2024-07-21 12:22:11.237284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:12.409 [2024-07-21 12:22:11.237418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:12.409 [2024-07-21 12:22:11.237422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:12.409 [2024-07-21 12:22:11.245035] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:40:12.409 [2024-07-21 12:22:11.245139] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:40:12.409 [2024-07-21 12:22:11.256028] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:40:12.409 [2024-07-21 12:22:11.256278] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:40:12.668 12:22:11 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:40:12.668 12:22:11 nvme -- common/autotest_common.sh@1074 -- # echo done. 00:40:12.668 done. 00:40:12.668 12:22:11 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:40:12.668 12:22:11 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:40:12.668 12:22:11 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:12.668 12:22:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:12.668 ************************************ 00:40:12.668 START TEST nvme_reset 00:40:12.668 ************************************ 00:40:12.668 12:22:11 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:40:12.926 Initializing NVMe Controllers 00:40:12.926 Skipping QEMU NVMe SSD at 0000:00:10.0 00:40:12.926 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:40:12.926 00:40:12.926 real 0m0.290s 00:40:12.926 user 0m0.081s 00:40:12.926 sys 0m0.124s 00:40:12.926 12:22:11 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:12.926 12:22:11 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:40:12.926 ************************************ 00:40:12.927 END TEST nvme_reset 00:40:12.927 ************************************ 00:40:13.185 12:22:11 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:40:13.185 12:22:11 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:13.185 12:22:11 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:13.185 12:22:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:13.185 ************************************ 00:40:13.185 START TEST nvme_identify 00:40:13.185 ************************************ 00:40:13.185 12:22:11 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:40:13.185 12:22:11 
nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:40:13.185 12:22:11 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:40:13.185 12:22:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:40:13.185 12:22:11 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:40:13.185 12:22:11 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:13.185 12:22:11 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:40:13.185 12:22:11 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:13.185 12:22:11 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:13.185 12:22:11 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:40:13.185 12:22:11 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:40:13.185 12:22:11 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:40:13.185 12:22:11 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:40:13.443 [2024-07-21 12:22:12.104236] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 179384 terminated unexpected 00:40:13.443 ===================================================== 00:40:13.443 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:13.443 ===================================================== 00:40:13.443 Controller Capabilities/Features 00:40:13.443 ================================ 00:40:13.443 Vendor ID: 1b36 00:40:13.443 Subsystem Vendor ID: 1af4 00:40:13.443 Serial Number: 12340 00:40:13.443 Model Number: QEMU NVMe Ctrl 00:40:13.443 Firmware Version: 8.0.0 00:40:13.443 Recommended Arb Burst: 6 00:40:13.443 IEEE OUI Identifier: 00 54 52 00:40:13.443 Multi-path I/O 00:40:13.443 May have multiple subsystem ports: No 00:40:13.443 May have multiple controllers: No 00:40:13.443 Associated with SR-IOV VF: No 00:40:13.443 Max Data Transfer Size: 524288 00:40:13.443 Max Number of Namespaces: 256 00:40:13.443 Max Number of I/O Queues: 64 00:40:13.443 NVMe Specification Version (VS): 1.4 00:40:13.443 NVMe Specification Version (Identify): 1.4 00:40:13.443 Maximum Queue Entries: 2048 00:40:13.443 Contiguous Queues Required: Yes 00:40:13.443 Arbitration Mechanisms Supported 00:40:13.443 Weighted Round Robin: Not Supported 00:40:13.443 Vendor Specific: Not Supported 00:40:13.443 Reset Timeout: 7500 ms 00:40:13.443 Doorbell Stride: 4 bytes 00:40:13.443 NVM Subsystem Reset: Not Supported 00:40:13.443 Command Sets Supported 00:40:13.443 NVM Command Set: Supported 00:40:13.443 Boot Partition: Not Supported 00:40:13.443 Memory Page Size Minimum: 4096 bytes 00:40:13.443 Memory Page Size Maximum: 65536 bytes 00:40:13.443 Persistent Memory Region: Not Supported 00:40:13.443 Optional Asynchronous Events Supported 00:40:13.443 Namespace Attribute Notices: Supported 00:40:13.443 Firmware Activation Notices: Not Supported 00:40:13.443 ANA Change Notices: Not Supported 00:40:13.443 PLE Aggregate Log Change Notices: Not Supported 00:40:13.443 LBA Status Info Alert Notices: Not Supported 00:40:13.443 EGE Aggregate Log Change Notices: Not Supported 00:40:13.443 Normal NVM Subsystem Shutdown event: Not Supported 00:40:13.443 Zone Descriptor Change Notices: Not Supported 00:40:13.443 Discovery Log Change Notices: Not Supported 00:40:13.443 Controller Attributes 00:40:13.443 128-bit Host 
Identifier: Not Supported 00:40:13.443 Non-Operational Permissive Mode: Not Supported 00:40:13.443 NVM Sets: Not Supported 00:40:13.443 Read Recovery Levels: Not Supported 00:40:13.443 Endurance Groups: Not Supported 00:40:13.443 Predictable Latency Mode: Not Supported 00:40:13.443 Traffic Based Keep ALive: Not Supported 00:40:13.443 Namespace Granularity: Not Supported 00:40:13.443 SQ Associations: Not Supported 00:40:13.443 UUID List: Not Supported 00:40:13.443 Multi-Domain Subsystem: Not Supported 00:40:13.443 Fixed Capacity Management: Not Supported 00:40:13.443 Variable Capacity Management: Not Supported 00:40:13.443 Delete Endurance Group: Not Supported 00:40:13.443 Delete NVM Set: Not Supported 00:40:13.443 Extended LBA Formats Supported: Supported 00:40:13.443 Flexible Data Placement Supported: Not Supported 00:40:13.443 00:40:13.443 Controller Memory Buffer Support 00:40:13.443 ================================ 00:40:13.443 Supported: No 00:40:13.443 00:40:13.443 Persistent Memory Region Support 00:40:13.443 ================================ 00:40:13.443 Supported: No 00:40:13.443 00:40:13.443 Admin Command Set Attributes 00:40:13.443 ============================ 00:40:13.443 Security Send/Receive: Not Supported 00:40:13.443 Format NVM: Supported 00:40:13.443 Firmware Activate/Download: Not Supported 00:40:13.443 Namespace Management: Supported 00:40:13.443 Device Self-Test: Not Supported 00:40:13.443 Directives: Supported 00:40:13.443 NVMe-MI: Not Supported 00:40:13.443 Virtualization Management: Not Supported 00:40:13.443 Doorbell Buffer Config: Supported 00:40:13.443 Get LBA Status Capability: Not Supported 00:40:13.443 Command & Feature Lockdown Capability: Not Supported 00:40:13.443 Abort Command Limit: 4 00:40:13.443 Async Event Request Limit: 4 00:40:13.443 Number of Firmware Slots: N/A 00:40:13.443 Firmware Slot 1 Read-Only: N/A 00:40:13.443 Firmware Activation Without Reset: N/A 00:40:13.443 Multiple Update Detection Support: N/A 00:40:13.443 Firmware Update Granularity: No Information Provided 00:40:13.443 Per-Namespace SMART Log: Yes 00:40:13.443 Asymmetric Namespace Access Log Page: Not Supported 00:40:13.443 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:40:13.443 Command Effects Log Page: Supported 00:40:13.443 Get Log Page Extended Data: Supported 00:40:13.443 Telemetry Log Pages: Not Supported 00:40:13.443 Persistent Event Log Pages: Not Supported 00:40:13.443 Supported Log Pages Log Page: May Support 00:40:13.443 Commands Supported & Effects Log Page: Not Supported 00:40:13.443 Feature Identifiers & Effects Log Page:May Support 00:40:13.443 NVMe-MI Commands & Effects Log Page: May Support 00:40:13.443 Data Area 4 for Telemetry Log: Not Supported 00:40:13.443 Error Log Page Entries Supported: 1 00:40:13.443 Keep Alive: Not Supported 00:40:13.443 00:40:13.443 NVM Command Set Attributes 00:40:13.443 ========================== 00:40:13.443 Submission Queue Entry Size 00:40:13.443 Max: 64 00:40:13.443 Min: 64 00:40:13.443 Completion Queue Entry Size 00:40:13.443 Max: 16 00:40:13.443 Min: 16 00:40:13.443 Number of Namespaces: 256 00:40:13.443 Compare Command: Supported 00:40:13.443 Write Uncorrectable Command: Not Supported 00:40:13.443 Dataset Management Command: Supported 00:40:13.443 Write Zeroes Command: Supported 00:40:13.443 Set Features Save Field: Supported 00:40:13.443 Reservations: Not Supported 00:40:13.443 Timestamp: Supported 00:40:13.443 Copy: Supported 00:40:13.443 Volatile Write Cache: Present 00:40:13.444 Atomic Write Unit (Normal): 1 00:40:13.444 Atomic 
Write Unit (PFail): 1 00:40:13.444 Atomic Compare & Write Unit: 1 00:40:13.444 Fused Compare & Write: Not Supported 00:40:13.444 Scatter-Gather List 00:40:13.444 SGL Command Set: Supported 00:40:13.444 SGL Keyed: Not Supported 00:40:13.444 SGL Bit Bucket Descriptor: Not Supported 00:40:13.444 SGL Metadata Pointer: Not Supported 00:40:13.444 Oversized SGL: Not Supported 00:40:13.444 SGL Metadata Address: Not Supported 00:40:13.444 SGL Offset: Not Supported 00:40:13.444 Transport SGL Data Block: Not Supported 00:40:13.444 Replay Protected Memory Block: Not Supported 00:40:13.444 00:40:13.444 Firmware Slot Information 00:40:13.444 ========================= 00:40:13.444 Active slot: 1 00:40:13.444 Slot 1 Firmware Revision: 1.0 00:40:13.444 00:40:13.444 00:40:13.444 Commands Supported and Effects 00:40:13.444 ============================== 00:40:13.444 Admin Commands 00:40:13.444 -------------- 00:40:13.444 Delete I/O Submission Queue (00h): Supported 00:40:13.444 Create I/O Submission Queue (01h): Supported 00:40:13.444 Get Log Page (02h): Supported 00:40:13.444 Delete I/O Completion Queue (04h): Supported 00:40:13.444 Create I/O Completion Queue (05h): Supported 00:40:13.444 Identify (06h): Supported 00:40:13.444 Abort (08h): Supported 00:40:13.444 Set Features (09h): Supported 00:40:13.444 Get Features (0Ah): Supported 00:40:13.444 Asynchronous Event Request (0Ch): Supported 00:40:13.444 Namespace Attachment (15h): Supported NS-Inventory-Change 00:40:13.444 Directive Send (19h): Supported 00:40:13.444 Directive Receive (1Ah): Supported 00:40:13.444 Virtualization Management (1Ch): Supported 00:40:13.444 Doorbell Buffer Config (7Ch): Supported 00:40:13.444 Format NVM (80h): Supported LBA-Change 00:40:13.444 I/O Commands 00:40:13.444 ------------ 00:40:13.444 Flush (00h): Supported LBA-Change 00:40:13.444 Write (01h): Supported LBA-Change 00:40:13.444 Read (02h): Supported 00:40:13.444 Compare (05h): Supported 00:40:13.444 Write Zeroes (08h): Supported LBA-Change 00:40:13.444 Dataset Management (09h): Supported LBA-Change 00:40:13.444 Unknown (0Ch): Supported 00:40:13.444 Unknown (12h): Supported 00:40:13.444 Copy (19h): Supported LBA-Change 00:40:13.444 Unknown (1Dh): Supported LBA-Change 00:40:13.444 00:40:13.444 Error Log 00:40:13.444 ========= 00:40:13.444 00:40:13.444 Arbitration 00:40:13.444 =========== 00:40:13.444 Arbitration Burst: no limit 00:40:13.444 00:40:13.444 Power Management 00:40:13.444 ================ 00:40:13.444 Number of Power States: 1 00:40:13.444 Current Power State: Power State #0 00:40:13.444 Power State #0: 00:40:13.444 Max Power: 25.00 W 00:40:13.444 Non-Operational State: Operational 00:40:13.444 Entry Latency: 16 microseconds 00:40:13.444 Exit Latency: 4 microseconds 00:40:13.444 Relative Read Throughput: 0 00:40:13.444 Relative Read Latency: 0 00:40:13.444 Relative Write Throughput: 0 00:40:13.444 Relative Write Latency: 0 00:40:13.444 Idle Power: Not Reported 00:40:13.444 Active Power: Not Reported 00:40:13.444 Non-Operational Permissive Mode: Not Supported 00:40:13.444 00:40:13.444 Health Information 00:40:13.444 ================== 00:40:13.444 Critical Warnings: 00:40:13.444 Available Spare Space: OK 00:40:13.444 Temperature: OK 00:40:13.444 Device Reliability: OK 00:40:13.444 Read Only: No 00:40:13.444 Volatile Memory Backup: OK 00:40:13.444 Current Temperature: 323 Kelvin (50 Celsius) 00:40:13.444 Temperature Threshold: 343 Kelvin (70 Celsius) 00:40:13.444 Available Spare: 0% 00:40:13.444 Available Spare Threshold: 0% 00:40:13.444 Life Percentage Used: 0% 
00:40:13.444 Data Units Read: 4856 00:40:13.444 Data Units Written: 4524 00:40:13.444 Host Read Commands: 199557 00:40:13.444 Host Write Commands: 212676 00:40:13.444 Controller Busy Time: 0 minutes 00:40:13.444 Power Cycles: 0 00:40:13.444 Power On Hours: 0 hours 00:40:13.444 Unsafe Shutdowns: 0 00:40:13.444 Unrecoverable Media Errors: 0 00:40:13.444 Lifetime Error Log Entries: 0 00:40:13.444 Warning Temperature Time: 0 minutes 00:40:13.444 Critical Temperature Time: 0 minutes 00:40:13.444 00:40:13.444 Number of Queues 00:40:13.444 ================ 00:40:13.444 Number of I/O Submission Queues: 64 00:40:13.444 Number of I/O Completion Queues: 64 00:40:13.444 00:40:13.444 ZNS Specific Controller Data 00:40:13.444 ============================ 00:40:13.444 Zone Append Size Limit: 0 00:40:13.444 00:40:13.444 00:40:13.444 Active Namespaces 00:40:13.444 ================= 00:40:13.444 Namespace ID:1 00:40:13.444 Error Recovery Timeout: Unlimited 00:40:13.444 Command Set Identifier: NVM (00h) 00:40:13.444 Deallocate: Supported 00:40:13.444 Deallocated/Unwritten Error: Supported 00:40:13.444 Deallocated Read Value: All 0x00 00:40:13.444 Deallocate in Write Zeroes: Not Supported 00:40:13.444 Deallocated Guard Field: 0xFFFF 00:40:13.444 Flush: Supported 00:40:13.444 Reservation: Not Supported 00:40:13.444 Namespace Sharing Capabilities: Private 00:40:13.444 Size (in LBAs): 1310720 (5GiB) 00:40:13.444 Capacity (in LBAs): 1310720 (5GiB) 00:40:13.444 Utilization (in LBAs): 1310720 (5GiB) 00:40:13.444 Thin Provisioning: Not Supported 00:40:13.444 Per-NS Atomic Units: No 00:40:13.444 Maximum Single Source Range Length: 128 00:40:13.444 Maximum Copy Length: 128 00:40:13.444 Maximum Source Range Count: 128 00:40:13.444 NGUID/EUI64 Never Reused: No 00:40:13.444 Namespace Write Protected: No 00:40:13.444 Number of LBA Formats: 8 00:40:13.444 Current LBA Format: LBA Format #04 00:40:13.444 LBA Format #00: Data Size: 512 Metadata Size: 0 00:40:13.444 LBA Format #01: Data Size: 512 Metadata Size: 8 00:40:13.444 LBA Format #02: Data Size: 512 Metadata Size: 16 00:40:13.444 LBA Format #03: Data Size: 512 Metadata Size: 64 00:40:13.444 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:40:13.444 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:40:13.444 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:40:13.444 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:40:13.444 00:40:13.444 12:22:12 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:40:13.444 12:22:12 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:40:13.703 ===================================================== 00:40:13.703 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:13.703 ===================================================== 00:40:13.703 Controller Capabilities/Features 00:40:13.703 ================================ 00:40:13.703 Vendor ID: 1b36 00:40:13.703 Subsystem Vendor ID: 1af4 00:40:13.703 Serial Number: 12340 00:40:13.703 Model Number: QEMU NVMe Ctrl 00:40:13.703 Firmware Version: 8.0.0 00:40:13.703 Recommended Arb Burst: 6 00:40:13.703 IEEE OUI Identifier: 00 54 52 00:40:13.703 Multi-path I/O 00:40:13.703 May have multiple subsystem ports: No 00:40:13.703 May have multiple controllers: No 00:40:13.703 Associated with SR-IOV VF: No 00:40:13.703 Max Data Transfer Size: 524288 00:40:13.703 Max Number of Namespaces: 256 00:40:13.703 Max Number of I/O Queues: 64 00:40:13.703 NVMe Specification Version (VS): 1.4 
00:40:13.703 NVMe Specification Version (Identify): 1.4 00:40:13.703 Maximum Queue Entries: 2048 00:40:13.703 Contiguous Queues Required: Yes 00:40:13.703 Arbitration Mechanisms Supported 00:40:13.703 Weighted Round Robin: Not Supported 00:40:13.703 Vendor Specific: Not Supported 00:40:13.703 Reset Timeout: 7500 ms 00:40:13.703 Doorbell Stride: 4 bytes 00:40:13.703 NVM Subsystem Reset: Not Supported 00:40:13.703 Command Sets Supported 00:40:13.703 NVM Command Set: Supported 00:40:13.703 Boot Partition: Not Supported 00:40:13.703 Memory Page Size Minimum: 4096 bytes 00:40:13.703 Memory Page Size Maximum: 65536 bytes 00:40:13.703 Persistent Memory Region: Not Supported 00:40:13.703 Optional Asynchronous Events Supported 00:40:13.703 Namespace Attribute Notices: Supported 00:40:13.703 Firmware Activation Notices: Not Supported 00:40:13.703 ANA Change Notices: Not Supported 00:40:13.703 PLE Aggregate Log Change Notices: Not Supported 00:40:13.703 LBA Status Info Alert Notices: Not Supported 00:40:13.703 EGE Aggregate Log Change Notices: Not Supported 00:40:13.703 Normal NVM Subsystem Shutdown event: Not Supported 00:40:13.703 Zone Descriptor Change Notices: Not Supported 00:40:13.703 Discovery Log Change Notices: Not Supported 00:40:13.703 Controller Attributes 00:40:13.703 128-bit Host Identifier: Not Supported 00:40:13.703 Non-Operational Permissive Mode: Not Supported 00:40:13.703 NVM Sets: Not Supported 00:40:13.703 Read Recovery Levels: Not Supported 00:40:13.703 Endurance Groups: Not Supported 00:40:13.703 Predictable Latency Mode: Not Supported 00:40:13.703 Traffic Based Keep ALive: Not Supported 00:40:13.703 Namespace Granularity: Not Supported 00:40:13.703 SQ Associations: Not Supported 00:40:13.703 UUID List: Not Supported 00:40:13.703 Multi-Domain Subsystem: Not Supported 00:40:13.703 Fixed Capacity Management: Not Supported 00:40:13.703 Variable Capacity Management: Not Supported 00:40:13.703 Delete Endurance Group: Not Supported 00:40:13.703 Delete NVM Set: Not Supported 00:40:13.703 Extended LBA Formats Supported: Supported 00:40:13.703 Flexible Data Placement Supported: Not Supported 00:40:13.703 00:40:13.703 Controller Memory Buffer Support 00:40:13.703 ================================ 00:40:13.703 Supported: No 00:40:13.703 00:40:13.703 Persistent Memory Region Support 00:40:13.703 ================================ 00:40:13.703 Supported: No 00:40:13.703 00:40:13.703 Admin Command Set Attributes 00:40:13.703 ============================ 00:40:13.703 Security Send/Receive: Not Supported 00:40:13.703 Format NVM: Supported 00:40:13.703 Firmware Activate/Download: Not Supported 00:40:13.703 Namespace Management: Supported 00:40:13.703 Device Self-Test: Not Supported 00:40:13.703 Directives: Supported 00:40:13.703 NVMe-MI: Not Supported 00:40:13.703 Virtualization Management: Not Supported 00:40:13.703 Doorbell Buffer Config: Supported 00:40:13.703 Get LBA Status Capability: Not Supported 00:40:13.703 Command & Feature Lockdown Capability: Not Supported 00:40:13.703 Abort Command Limit: 4 00:40:13.703 Async Event Request Limit: 4 00:40:13.703 Number of Firmware Slots: N/A 00:40:13.703 Firmware Slot 1 Read-Only: N/A 00:40:13.703 Firmware Activation Without Reset: N/A 00:40:13.703 Multiple Update Detection Support: N/A 00:40:13.703 Firmware Update Granularity: No Information Provided 00:40:13.703 Per-Namespace SMART Log: Yes 00:40:13.703 Asymmetric Namespace Access Log Page: Not Supported 00:40:13.703 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:40:13.703 Command Effects Log Page: 
Supported 00:40:13.703 Get Log Page Extended Data: Supported 00:40:13.703 Telemetry Log Pages: Not Supported 00:40:13.703 Persistent Event Log Pages: Not Supported 00:40:13.703 Supported Log Pages Log Page: May Support 00:40:13.703 Commands Supported & Effects Log Page: Not Supported 00:40:13.703 Feature Identifiers & Effects Log Page:May Support 00:40:13.703 NVMe-MI Commands & Effects Log Page: May Support 00:40:13.703 Data Area 4 for Telemetry Log: Not Supported 00:40:13.703 Error Log Page Entries Supported: 1 00:40:13.703 Keep Alive: Not Supported 00:40:13.703 00:40:13.703 NVM Command Set Attributes 00:40:13.703 ========================== 00:40:13.703 Submission Queue Entry Size 00:40:13.703 Max: 64 00:40:13.703 Min: 64 00:40:13.703 Completion Queue Entry Size 00:40:13.703 Max: 16 00:40:13.703 Min: 16 00:40:13.703 Number of Namespaces: 256 00:40:13.703 Compare Command: Supported 00:40:13.703 Write Uncorrectable Command: Not Supported 00:40:13.704 Dataset Management Command: Supported 00:40:13.704 Write Zeroes Command: Supported 00:40:13.704 Set Features Save Field: Supported 00:40:13.704 Reservations: Not Supported 00:40:13.704 Timestamp: Supported 00:40:13.704 Copy: Supported 00:40:13.704 Volatile Write Cache: Present 00:40:13.704 Atomic Write Unit (Normal): 1 00:40:13.704 Atomic Write Unit (PFail): 1 00:40:13.704 Atomic Compare & Write Unit: 1 00:40:13.704 Fused Compare & Write: Not Supported 00:40:13.704 Scatter-Gather List 00:40:13.704 SGL Command Set: Supported 00:40:13.704 SGL Keyed: Not Supported 00:40:13.704 SGL Bit Bucket Descriptor: Not Supported 00:40:13.704 SGL Metadata Pointer: Not Supported 00:40:13.704 Oversized SGL: Not Supported 00:40:13.704 SGL Metadata Address: Not Supported 00:40:13.704 SGL Offset: Not Supported 00:40:13.704 Transport SGL Data Block: Not Supported 00:40:13.704 Replay Protected Memory Block: Not Supported 00:40:13.704 00:40:13.704 Firmware Slot Information 00:40:13.704 ========================= 00:40:13.704 Active slot: 1 00:40:13.704 Slot 1 Firmware Revision: 1.0 00:40:13.704 00:40:13.704 00:40:13.704 Commands Supported and Effects 00:40:13.704 ============================== 00:40:13.704 Admin Commands 00:40:13.704 -------------- 00:40:13.704 Delete I/O Submission Queue (00h): Supported 00:40:13.704 Create I/O Submission Queue (01h): Supported 00:40:13.704 Get Log Page (02h): Supported 00:40:13.704 Delete I/O Completion Queue (04h): Supported 00:40:13.704 Create I/O Completion Queue (05h): Supported 00:40:13.704 Identify (06h): Supported 00:40:13.704 Abort (08h): Supported 00:40:13.704 Set Features (09h): Supported 00:40:13.704 Get Features (0Ah): Supported 00:40:13.704 Asynchronous Event Request (0Ch): Supported 00:40:13.704 Namespace Attachment (15h): Supported NS-Inventory-Change 00:40:13.704 Directive Send (19h): Supported 00:40:13.704 Directive Receive (1Ah): Supported 00:40:13.704 Virtualization Management (1Ch): Supported 00:40:13.704 Doorbell Buffer Config (7Ch): Supported 00:40:13.704 Format NVM (80h): Supported LBA-Change 00:40:13.704 I/O Commands 00:40:13.704 ------------ 00:40:13.704 Flush (00h): Supported LBA-Change 00:40:13.704 Write (01h): Supported LBA-Change 00:40:13.704 Read (02h): Supported 00:40:13.704 Compare (05h): Supported 00:40:13.704 Write Zeroes (08h): Supported LBA-Change 00:40:13.704 Dataset Management (09h): Supported LBA-Change 00:40:13.704 Unknown (0Ch): Supported 00:40:13.704 Unknown (12h): Supported 00:40:13.704 Copy (19h): Supported LBA-Change 00:40:13.704 Unknown (1Dh): Supported LBA-Change 00:40:13.704 
00:40:13.704 Error Log 00:40:13.704 ========= 00:40:13.704 00:40:13.704 Arbitration 00:40:13.704 =========== 00:40:13.704 Arbitration Burst: no limit 00:40:13.704 00:40:13.704 Power Management 00:40:13.704 ================ 00:40:13.704 Number of Power States: 1 00:40:13.704 Current Power State: Power State #0 00:40:13.704 Power State #0: 00:40:13.704 Max Power: 25.00 W 00:40:13.704 Non-Operational State: Operational 00:40:13.704 Entry Latency: 16 microseconds 00:40:13.704 Exit Latency: 4 microseconds 00:40:13.704 Relative Read Throughput: 0 00:40:13.704 Relative Read Latency: 0 00:40:13.704 Relative Write Throughput: 0 00:40:13.704 Relative Write Latency: 0 00:40:13.704 Idle Power: Not Reported 00:40:13.704 Active Power: Not Reported 00:40:13.704 Non-Operational Permissive Mode: Not Supported 00:40:13.704 00:40:13.704 Health Information 00:40:13.704 ================== 00:40:13.704 Critical Warnings: 00:40:13.704 Available Spare Space: OK 00:40:13.704 Temperature: OK 00:40:13.704 Device Reliability: OK 00:40:13.704 Read Only: No 00:40:13.704 Volatile Memory Backup: OK 00:40:13.704 Current Temperature: 323 Kelvin (50 Celsius) 00:40:13.704 Temperature Threshold: 343 Kelvin (70 Celsius) 00:40:13.704 Available Spare: 0% 00:40:13.704 Available Spare Threshold: 0% 00:40:13.704 Life Percentage Used: 0% 00:40:13.704 Data Units Read: 4856 00:40:13.704 Data Units Written: 4524 00:40:13.704 Host Read Commands: 199557 00:40:13.704 Host Write Commands: 212676 00:40:13.704 Controller Busy Time: 0 minutes 00:40:13.704 Power Cycles: 0 00:40:13.704 Power On Hours: 0 hours 00:40:13.704 Unsafe Shutdowns: 0 00:40:13.704 Unrecoverable Media Errors: 0 00:40:13.704 Lifetime Error Log Entries: 0 00:40:13.704 Warning Temperature Time: 0 minutes 00:40:13.704 Critical Temperature Time: 0 minutes 00:40:13.704 00:40:13.704 Number of Queues 00:40:13.704 ================ 00:40:13.704 Number of I/O Submission Queues: 64 00:40:13.704 Number of I/O Completion Queues: 64 00:40:13.704 00:40:13.704 ZNS Specific Controller Data 00:40:13.704 ============================ 00:40:13.704 Zone Append Size Limit: 0 00:40:13.704 00:40:13.704 00:40:13.704 Active Namespaces 00:40:13.704 ================= 00:40:13.704 Namespace ID:1 00:40:13.704 Error Recovery Timeout: Unlimited 00:40:13.704 Command Set Identifier: NVM (00h) 00:40:13.704 Deallocate: Supported 00:40:13.704 Deallocated/Unwritten Error: Supported 00:40:13.704 Deallocated Read Value: All 0x00 00:40:13.704 Deallocate in Write Zeroes: Not Supported 00:40:13.704 Deallocated Guard Field: 0xFFFF 00:40:13.704 Flush: Supported 00:40:13.704 Reservation: Not Supported 00:40:13.704 Namespace Sharing Capabilities: Private 00:40:13.704 Size (in LBAs): 1310720 (5GiB) 00:40:13.704 Capacity (in LBAs): 1310720 (5GiB) 00:40:13.704 Utilization (in LBAs): 1310720 (5GiB) 00:40:13.704 Thin Provisioning: Not Supported 00:40:13.704 Per-NS Atomic Units: No 00:40:13.704 Maximum Single Source Range Length: 128 00:40:13.704 Maximum Copy Length: 128 00:40:13.704 Maximum Source Range Count: 128 00:40:13.704 NGUID/EUI64 Never Reused: No 00:40:13.704 Namespace Write Protected: No 00:40:13.704 Number of LBA Formats: 8 00:40:13.704 Current LBA Format: LBA Format #04 00:40:13.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:40:13.704 LBA Format #01: Data Size: 512 Metadata Size: 8 00:40:13.704 LBA Format #02: Data Size: 512 Metadata Size: 16 00:40:13.704 LBA Format #03: Data Size: 512 Metadata Size: 64 00:40:13.704 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:40:13.704 LBA Format #05: Data Size: 
4096 Metadata Size: 8 00:40:13.704 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:40:13.704 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:40:13.704 00:40:13.704 00:40:13.704 real 0m0.601s 00:40:13.704 user 0m0.229s 00:40:13.704 sys 0m0.271s 00:40:13.704 12:22:12 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:13.704 12:22:12 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:40:13.704 ************************************ 00:40:13.704 END TEST nvme_identify 00:40:13.704 ************************************ 00:40:13.704 12:22:12 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:40:13.704 12:22:12 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:13.704 12:22:12 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:13.704 12:22:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:13.704 ************************************ 00:40:13.704 START TEST nvme_perf 00:40:13.704 ************************************ 00:40:13.704 12:22:12 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:40:13.704 12:22:12 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:40:15.085 Initializing NVMe Controllers 00:40:15.085 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:15.085 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:40:15.085 Initialization complete. Launching workers. 00:40:15.085 ======================================================== 00:40:15.085 Latency(us) 00:40:15.085 Device Information : IOPS MiB/s Average min max 00:40:15.085 PCIE (0000:00:10.0) NSID 1 from core 0: 75564.73 885.52 1693.06 720.60 19435.17 00:40:15.085 ======================================================== 00:40:15.085 Total : 75564.73 885.52 1693.06 720.60 19435.17 00:40:15.085 00:40:15.085 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:40:15.085 ================================================================================= 00:40:15.085 1.00000% : 901.120us 00:40:15.085 10.00000% : 1124.538us 00:40:15.085 25.00000% : 1333.062us 00:40:15.085 50.00000% : 1630.953us 00:40:15.085 75.00000% : 1936.291us 00:40:15.085 90.00000% : 2204.393us 00:40:15.085 95.00000% : 2457.600us 00:40:15.085 98.00000% : 2949.120us 00:40:15.085 99.00000% : 3261.905us 00:40:15.085 99.50000% : 3813.004us 00:40:15.085 99.90000% : 18111.767us 00:40:15.085 99.99000% : 19303.331us 00:40:15.085 99.99900% : 19541.644us 00:40:15.085 99.99990% : 19541.644us 00:40:15.085 99.99999% : 19541.644us 00:40:15.085 00:40:15.085 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:40:15.085 ============================================================================== 00:40:15.085 Range in us Cumulative IO count 00:40:15.085 718.662 - 722.385: 0.0013% ( 1) 00:40:15.085 726.109 - 729.833: 0.0040% ( 2) 00:40:15.085 729.833 - 733.556: 0.0066% ( 2) 00:40:15.085 733.556 - 737.280: 0.0093% ( 2) 00:40:15.085 741.004 - 744.727: 0.0119% ( 2) 00:40:15.085 748.451 - 752.175: 0.0172% ( 4) 00:40:15.085 755.898 - 759.622: 0.0185% ( 1) 00:40:15.085 763.345 - 767.069: 0.0212% ( 2) 00:40:15.085 767.069 - 770.793: 0.0225% ( 1) 00:40:15.085 774.516 - 778.240: 0.0251% ( 2) 00:40:15.085 778.240 - 781.964: 0.0291% ( 3) 00:40:15.085 781.964 - 785.687: 0.0304% ( 1) 00:40:15.085 789.411 - 793.135: 0.0318% ( 1) 00:40:15.085 793.135 - 796.858: 0.0344% ( 2) 00:40:15.085 796.858 - 800.582: 0.0397% ( 4) 00:40:15.085 800.582 - 804.305: 0.0450% ( 4) 
00:40:15.085 804.305 - 808.029: 0.0556% ( 8) 00:40:15.085 808.029 - 811.753: 0.0675% ( 9) 00:40:15.085 811.753 - 815.476: 0.0767% ( 7) 00:40:15.085 815.476 - 819.200: 0.0873% ( 8) 00:40:15.085 819.200 - 822.924: 0.0926% ( 4) 00:40:15.085 822.924 - 826.647: 0.1058% ( 10) 00:40:15.085 826.647 - 830.371: 0.1244% ( 14) 00:40:15.085 830.371 - 834.095: 0.1469% ( 17) 00:40:15.085 834.095 - 837.818: 0.1693% ( 17) 00:40:15.085 837.818 - 841.542: 0.1945% ( 19) 00:40:15.085 841.542 - 845.265: 0.2381% ( 33) 00:40:15.085 845.265 - 848.989: 0.2805% ( 32) 00:40:15.085 848.989 - 852.713: 0.3030% ( 17) 00:40:15.085 852.713 - 856.436: 0.3374% ( 26) 00:40:15.085 856.436 - 860.160: 0.3652% ( 21) 00:40:15.085 860.160 - 863.884: 0.4234% ( 44) 00:40:15.085 863.884 - 867.607: 0.4789% ( 42) 00:40:15.085 867.607 - 871.331: 0.5120% ( 25) 00:40:15.085 871.331 - 875.055: 0.5504% ( 29) 00:40:15.085 875.055 - 878.778: 0.6258% ( 57) 00:40:15.085 878.778 - 882.502: 0.6853% ( 45) 00:40:15.085 882.502 - 886.225: 0.7475% ( 47) 00:40:15.085 886.225 - 889.949: 0.7991% ( 39) 00:40:15.085 889.949 - 893.673: 0.8692% ( 53) 00:40:15.085 893.673 - 897.396: 0.9367% ( 51) 00:40:15.085 897.396 - 901.120: 1.0201% ( 63) 00:40:15.085 901.120 - 904.844: 1.1113% ( 69) 00:40:15.085 904.844 - 908.567: 1.1841% ( 55) 00:40:15.085 908.567 - 912.291: 1.2608% ( 58) 00:40:15.085 912.291 - 916.015: 1.3455% ( 64) 00:40:15.085 916.015 - 919.738: 1.4408% ( 72) 00:40:15.085 919.738 - 923.462: 1.5241% ( 63) 00:40:15.085 923.462 - 927.185: 1.6287% ( 79) 00:40:15.085 927.185 - 930.909: 1.7358% ( 81) 00:40:15.085 930.909 - 934.633: 1.8456% ( 83) 00:40:15.085 934.633 - 938.356: 1.9369% ( 69) 00:40:15.085 938.356 - 942.080: 2.0428% ( 80) 00:40:15.085 942.080 - 945.804: 2.1671% ( 94) 00:40:15.085 945.804 - 949.527: 2.2716% ( 79) 00:40:15.085 949.527 - 953.251: 2.3854% ( 86) 00:40:15.085 953.251 - 960.698: 2.6130% ( 172) 00:40:15.085 960.698 - 968.145: 2.8855% ( 206) 00:40:15.085 968.145 - 975.593: 3.1263% ( 182) 00:40:15.085 975.593 - 983.040: 3.4015% ( 208) 00:40:15.085 983.040 - 990.487: 3.6741% ( 206) 00:40:15.085 990.487 - 997.935: 3.9545% ( 212) 00:40:15.086 997.935 - 1005.382: 4.2403% ( 216) 00:40:15.086 1005.382 - 1012.829: 4.5420% ( 228) 00:40:15.086 1012.829 - 1020.276: 4.8396% ( 225) 00:40:15.086 1020.276 - 1027.724: 5.1387% ( 226) 00:40:15.086 1027.724 - 1035.171: 5.4813% ( 259) 00:40:15.086 1035.171 - 1042.618: 5.8002% ( 241) 00:40:15.086 1042.618 - 1050.065: 6.1455% ( 261) 00:40:15.086 1050.065 - 1057.513: 6.4948% ( 264) 00:40:15.086 1057.513 - 1064.960: 6.8705% ( 284) 00:40:15.086 1064.960 - 1072.407: 7.2251% ( 268) 00:40:15.086 1072.407 - 1079.855: 7.6260% ( 303) 00:40:15.086 1079.855 - 1087.302: 8.0083% ( 289) 00:40:15.086 1087.302 - 1094.749: 8.4039% ( 299) 00:40:15.086 1094.749 - 1102.196: 8.8061% ( 304) 00:40:15.086 1102.196 - 1109.644: 9.2625% ( 345) 00:40:15.086 1109.644 - 1117.091: 9.7005% ( 331) 00:40:15.086 1117.091 - 1124.538: 10.1318% ( 326) 00:40:15.086 1124.538 - 1131.985: 10.5737% ( 334) 00:40:15.086 1131.985 - 1139.433: 11.0209% ( 338) 00:40:15.086 1139.433 - 1146.880: 11.4919% ( 356) 00:40:15.086 1146.880 - 1154.327: 11.9615% ( 355) 00:40:15.086 1154.327 - 1161.775: 12.4391% ( 361) 00:40:15.086 1161.775 - 1169.222: 12.9260% ( 368) 00:40:15.086 1169.222 - 1176.669: 13.4499% ( 396) 00:40:15.086 1176.669 - 1184.116: 13.9183% ( 354) 00:40:15.086 1184.116 - 1191.564: 14.4237% ( 382) 00:40:15.086 1191.564 - 1199.011: 14.9516% ( 399) 00:40:15.086 1199.011 - 1206.458: 15.4583% ( 383) 00:40:15.086 1206.458 - 1213.905: 15.9902% ( 402) 
00:40:15.086 1213.905 - 1221.353: 16.5498% ( 423) 00:40:15.086 1221.353 - 1228.800: 17.0896% ( 408) 00:40:15.086 1228.800 - 1236.247: 17.6506% ( 424) 00:40:15.086 1236.247 - 1243.695: 18.2314% ( 439) 00:40:15.086 1243.695 - 1251.142: 18.7870% ( 420) 00:40:15.086 1251.142 - 1258.589: 19.3903% ( 456) 00:40:15.086 1258.589 - 1266.036: 19.9315% ( 409) 00:40:15.086 1266.036 - 1273.484: 20.5348% ( 456) 00:40:15.086 1273.484 - 1280.931: 21.1023% ( 429) 00:40:15.086 1280.931 - 1288.378: 21.7281% ( 473) 00:40:15.086 1288.378 - 1295.825: 22.2851% ( 421) 00:40:15.086 1295.825 - 1303.273: 22.9083% ( 471) 00:40:15.086 1303.273 - 1310.720: 23.5076% ( 453) 00:40:15.086 1310.720 - 1318.167: 24.1215% ( 464) 00:40:15.086 1318.167 - 1325.615: 24.7486% ( 474) 00:40:15.086 1325.615 - 1333.062: 25.3757% ( 474) 00:40:15.086 1333.062 - 1340.509: 26.0161% ( 484) 00:40:15.086 1340.509 - 1347.956: 26.6247% ( 460) 00:40:15.086 1347.956 - 1355.404: 27.2346% ( 461) 00:40:15.086 1355.404 - 1362.851: 27.8551% ( 469) 00:40:15.086 1362.851 - 1370.298: 28.4994% ( 487) 00:40:15.086 1370.298 - 1377.745: 29.1093% ( 461) 00:40:15.086 1377.745 - 1385.193: 29.7285% ( 468) 00:40:15.086 1385.193 - 1392.640: 30.3424% ( 464) 00:40:15.086 1392.640 - 1400.087: 30.9775% ( 480) 00:40:15.086 1400.087 - 1407.535: 31.6350% ( 497) 00:40:15.086 1407.535 - 1414.982: 32.2515% ( 466) 00:40:15.086 1414.982 - 1422.429: 32.9236% ( 508) 00:40:15.086 1422.429 - 1429.876: 33.5587% ( 480) 00:40:15.086 1429.876 - 1437.324: 34.1898% ( 477) 00:40:15.086 1437.324 - 1444.771: 34.8473% ( 497) 00:40:15.086 1444.771 - 1452.218: 35.4797% ( 478) 00:40:15.086 1452.218 - 1459.665: 36.1135% ( 479) 00:40:15.086 1459.665 - 1467.113: 36.7816% ( 505) 00:40:15.086 1467.113 - 1474.560: 37.4233% ( 485) 00:40:15.086 1474.560 - 1482.007: 38.0424% ( 468) 00:40:15.086 1482.007 - 1489.455: 38.6669% ( 472) 00:40:15.086 1489.455 - 1496.902: 39.3099% ( 486) 00:40:15.086 1496.902 - 1504.349: 39.9476% ( 482) 00:40:15.086 1504.349 - 1511.796: 40.5734% ( 473) 00:40:15.086 1511.796 - 1519.244: 41.2005% ( 474) 00:40:15.086 1519.244 - 1526.691: 41.8528% ( 493) 00:40:15.086 1526.691 - 1534.138: 42.4772% ( 472) 00:40:15.086 1534.138 - 1541.585: 43.1004% ( 471) 00:40:15.086 1541.585 - 1549.033: 43.7222% ( 470) 00:40:15.086 1549.033 - 1556.480: 44.3692% ( 489) 00:40:15.086 1556.480 - 1563.927: 44.9870% ( 467) 00:40:15.086 1563.927 - 1571.375: 45.6022% ( 465) 00:40:15.086 1571.375 - 1578.822: 46.2373% ( 480) 00:40:15.086 1578.822 - 1586.269: 46.8657% ( 475) 00:40:15.086 1586.269 - 1593.716: 47.4677% ( 455) 00:40:15.086 1593.716 - 1601.164: 48.1266% ( 498) 00:40:15.086 1601.164 - 1608.611: 48.7114% ( 442) 00:40:15.086 1608.611 - 1616.058: 49.3292% ( 467) 00:40:15.086 1616.058 - 1623.505: 49.9312% ( 455) 00:40:15.086 1623.505 - 1630.953: 50.5623% ( 477) 00:40:15.086 1630.953 - 1638.400: 51.2013% ( 483) 00:40:15.086 1638.400 - 1645.847: 51.7954% ( 449) 00:40:15.086 1645.847 - 1653.295: 52.4476% ( 493) 00:40:15.086 1653.295 - 1660.742: 53.0271% ( 438) 00:40:15.086 1660.742 - 1668.189: 53.6833% ( 496) 00:40:15.086 1668.189 - 1675.636: 54.3038% ( 469) 00:40:15.086 1675.636 - 1683.084: 54.8952% ( 447) 00:40:15.086 1683.084 - 1690.531: 55.5289% ( 479) 00:40:15.086 1690.531 - 1697.978: 56.1786% ( 491) 00:40:15.086 1697.978 - 1705.425: 56.8202% ( 485) 00:40:15.086 1705.425 - 1712.873: 57.4368% ( 466) 00:40:15.086 1712.873 - 1720.320: 58.0639% ( 474) 00:40:15.086 1720.320 - 1727.767: 58.6778% ( 464) 00:40:15.086 1727.767 - 1735.215: 59.3102% ( 478) 00:40:15.086 1735.215 - 1742.662: 59.8950% ( 442) 
00:40:15.086 1742.662 - 1750.109: 60.5155% ( 469) 00:40:15.086 1750.109 - 1757.556: 61.1730% ( 497) 00:40:15.086 1757.556 - 1765.004: 61.7670% ( 449) 00:40:15.086 1765.004 - 1772.451: 62.3942% ( 474) 00:40:15.086 1772.451 - 1779.898: 63.0054% ( 462) 00:40:15.086 1779.898 - 1787.345: 63.6352% ( 476) 00:40:15.086 1787.345 - 1794.793: 64.2226% ( 444) 00:40:15.086 1794.793 - 1802.240: 64.8643% ( 485) 00:40:15.086 1802.240 - 1809.687: 65.4676% ( 456) 00:40:15.086 1809.687 - 1817.135: 66.0656% ( 452) 00:40:15.086 1817.135 - 1824.582: 66.6847% ( 468) 00:40:15.086 1824.582 - 1832.029: 67.2682% ( 441) 00:40:15.086 1832.029 - 1839.476: 67.8874% ( 468) 00:40:15.086 1839.476 - 1846.924: 68.5145% ( 474) 00:40:15.086 1846.924 - 1854.371: 69.1165% ( 455) 00:40:15.086 1854.371 - 1861.818: 69.6774% ( 424) 00:40:15.086 1861.818 - 1869.265: 70.2927% ( 465) 00:40:15.086 1869.265 - 1876.713: 70.8602% ( 429) 00:40:15.086 1876.713 - 1884.160: 71.4543% ( 449) 00:40:15.086 1884.160 - 1891.607: 72.0285% ( 434) 00:40:15.086 1891.607 - 1899.055: 72.6185% ( 446) 00:40:15.086 1899.055 - 1906.502: 73.2033% ( 442) 00:40:15.086 1906.502 - 1921.396: 74.3411% ( 860) 00:40:15.086 1921.396 - 1936.291: 75.4736% ( 856) 00:40:15.086 1936.291 - 1951.185: 76.5559% ( 818) 00:40:15.086 1951.185 - 1966.080: 77.6315% ( 813) 00:40:15.086 1966.080 - 1980.975: 78.6926% ( 802) 00:40:15.086 1980.975 - 1995.869: 79.7232% ( 779) 00:40:15.086 1995.869 - 2010.764: 80.7036% ( 741) 00:40:15.086 2010.764 - 2025.658: 81.6747% ( 734) 00:40:15.086 2025.658 - 2040.553: 82.5929% ( 694) 00:40:15.086 2040.553 - 2055.447: 83.4753% ( 667) 00:40:15.086 2055.447 - 2070.342: 84.3697% ( 676) 00:40:15.086 2070.342 - 2085.236: 85.1887% ( 619) 00:40:15.086 2085.236 - 2100.131: 85.9904% ( 606) 00:40:15.086 2100.131 - 2115.025: 86.7326% ( 561) 00:40:15.086 2115.025 - 2129.920: 87.4524% ( 544) 00:40:15.086 2129.920 - 2144.815: 88.1046% ( 493) 00:40:15.086 2144.815 - 2159.709: 88.7556% ( 492) 00:40:15.086 2159.709 - 2174.604: 89.3575% ( 455) 00:40:15.086 2174.604 - 2189.498: 89.9225% ( 427) 00:40:15.086 2189.498 - 2204.393: 90.4662% ( 411) 00:40:15.086 2204.393 - 2219.287: 90.9359% ( 355) 00:40:15.086 2219.287 - 2234.182: 91.3910% ( 344) 00:40:15.086 2234.182 - 2249.076: 91.8343% ( 335) 00:40:15.086 2249.076 - 2263.971: 92.2206% ( 292) 00:40:15.086 2263.971 - 2278.865: 92.5857% ( 276) 00:40:15.086 2278.865 - 2293.760: 92.9231% ( 255) 00:40:15.086 2293.760 - 2308.655: 93.2221% ( 226) 00:40:15.086 2308.655 - 2323.549: 93.4788% ( 194) 00:40:15.086 2323.549 - 2338.444: 93.7050% ( 171) 00:40:15.086 2338.444 - 2353.338: 93.9313% ( 171) 00:40:15.086 2353.338 - 2368.233: 94.1231% ( 145) 00:40:15.086 2368.233 - 2383.127: 94.3163% ( 146) 00:40:15.086 2383.127 - 2398.022: 94.4935% ( 134) 00:40:15.086 2398.022 - 2412.916: 94.6523% ( 120) 00:40:15.086 2412.916 - 2427.811: 94.7912% ( 105) 00:40:15.086 2427.811 - 2442.705: 94.9235% ( 100) 00:40:15.086 2442.705 - 2457.600: 95.0413% ( 89) 00:40:15.086 2457.600 - 2472.495: 95.1537% ( 85) 00:40:15.086 2472.495 - 2487.389: 95.2609% ( 81) 00:40:15.086 2487.389 - 2502.284: 95.3654% ( 79) 00:40:15.086 2502.284 - 2517.178: 95.4646% ( 75) 00:40:15.086 2517.178 - 2532.073: 95.5665% ( 77) 00:40:15.086 2532.073 - 2546.967: 95.6591% ( 70) 00:40:15.086 2546.967 - 2561.862: 95.7544% ( 72) 00:40:15.086 2561.862 - 2576.756: 95.8510% ( 73) 00:40:15.086 2576.756 - 2591.651: 95.9370% ( 65) 00:40:15.086 2591.651 - 2606.545: 96.0269% ( 68) 00:40:15.086 2606.545 - 2621.440: 96.1222% ( 72) 00:40:15.086 2621.440 - 2636.335: 96.2122% ( 68) 00:40:15.086 
2636.335 - 2651.229: 96.3114% ( 75) 00:40:15.086 2651.229 - 2666.124: 96.4053% ( 71) 00:40:15.086 2666.124 - 2681.018: 96.5006% ( 72) 00:40:15.086 2681.018 - 2695.913: 96.5905% ( 68) 00:40:15.086 2695.913 - 2710.807: 96.6765% ( 65) 00:40:15.086 2710.807 - 2725.702: 96.7758% ( 75) 00:40:15.086 2725.702 - 2740.596: 96.8710% ( 72) 00:40:15.086 2740.596 - 2755.491: 96.9544% ( 63) 00:40:15.086 2755.491 - 2770.385: 97.0470% ( 70) 00:40:15.086 2770.385 - 2785.280: 97.1343% ( 66) 00:40:15.086 2785.280 - 2800.175: 97.2269% ( 70) 00:40:15.086 2800.175 - 2815.069: 97.3169% ( 68) 00:40:15.086 2815.069 - 2829.964: 97.4095% ( 70) 00:40:15.086 2829.964 - 2844.858: 97.4968% ( 66) 00:40:15.086 2844.858 - 2859.753: 97.5841% ( 66) 00:40:15.086 2859.753 - 2874.647: 97.6728% ( 67) 00:40:15.086 2874.647 - 2889.542: 97.7601% ( 66) 00:40:15.086 2889.542 - 2904.436: 97.8448% ( 64) 00:40:15.086 2904.436 - 2919.331: 97.9228% ( 59) 00:40:15.086 2919.331 - 2934.225: 97.9996% ( 58) 00:40:15.086 2934.225 - 2949.120: 98.0776% ( 59) 00:40:15.086 2949.120 - 2964.015: 98.1570% ( 60) 00:40:15.086 2964.015 - 2978.909: 98.2205% ( 48) 00:40:15.086 2978.909 - 2993.804: 98.2906% ( 53) 00:40:15.086 2993.804 - 3008.698: 98.3568% ( 50) 00:40:15.086 3008.698 - 3023.593: 98.4190% ( 47) 00:40:15.086 3023.593 - 3038.487: 98.4732% ( 41) 00:40:15.086 3038.487 - 3053.382: 98.5288% ( 42) 00:40:15.086 3053.382 - 3068.276: 98.5817% ( 40) 00:40:15.086 3068.276 - 3083.171: 98.6399% ( 44) 00:40:15.086 3083.171 - 3098.065: 98.6862% ( 35) 00:40:15.086 3098.065 - 3112.960: 98.7233% ( 28) 00:40:15.086 3112.960 - 3127.855: 98.7656% ( 32) 00:40:15.086 3127.855 - 3142.749: 98.8040% ( 29) 00:40:15.086 3142.749 - 3157.644: 98.8384% ( 26) 00:40:15.086 3157.644 - 3172.538: 98.8675% ( 22) 00:40:15.086 3172.538 - 3187.433: 98.8913% ( 18) 00:40:15.086 3187.433 - 3202.327: 98.9164% ( 19) 00:40:15.086 3202.327 - 3217.222: 98.9376% ( 16) 00:40:15.086 3217.222 - 3232.116: 98.9654% ( 21) 00:40:15.086 3232.116 - 3247.011: 98.9879% ( 17) 00:40:15.086 3247.011 - 3261.905: 99.0077% ( 15) 00:40:15.086 3261.905 - 3276.800: 99.0302% ( 17) 00:40:15.086 3276.800 - 3291.695: 99.0487% ( 14) 00:40:15.086 3291.695 - 3306.589: 99.0620% ( 10) 00:40:15.086 3306.589 - 3321.484: 99.0778% ( 12) 00:40:15.086 3321.484 - 3336.378: 99.0977% ( 15) 00:40:15.086 3336.378 - 3351.273: 99.1149% ( 13) 00:40:15.086 3351.273 - 3366.167: 99.1294% ( 11) 00:40:15.086 3366.167 - 3381.062: 99.1414% ( 9) 00:40:15.086 3381.062 - 3395.956: 99.1572% ( 12) 00:40:15.086 3395.956 - 3410.851: 99.1691% ( 9) 00:40:15.086 3410.851 - 3425.745: 99.1863% ( 13) 00:40:15.086 3425.745 - 3440.640: 99.2009% ( 11) 00:40:15.086 3440.640 - 3455.535: 99.2128% ( 9) 00:40:15.086 3455.535 - 3470.429: 99.2260% ( 10) 00:40:15.086 3470.429 - 3485.324: 99.2353% ( 7) 00:40:15.086 3485.324 - 3500.218: 99.2485% ( 10) 00:40:15.086 3500.218 - 3515.113: 99.2631% ( 11) 00:40:15.086 3515.113 - 3530.007: 99.2723% ( 7) 00:40:15.086 3530.007 - 3544.902: 99.2842% ( 9) 00:40:15.086 3544.902 - 3559.796: 99.2961% ( 9) 00:40:15.086 3559.796 - 3574.691: 99.3067% ( 8) 00:40:15.086 3574.691 - 3589.585: 99.3200% ( 10) 00:40:15.086 3589.585 - 3604.480: 99.3319% ( 9) 00:40:15.086 3604.480 - 3619.375: 99.3425% ( 8) 00:40:15.086 3619.375 - 3634.269: 99.3557% ( 10) 00:40:15.086 3634.269 - 3649.164: 99.3676% ( 9) 00:40:15.086 3649.164 - 3664.058: 99.3821% ( 11) 00:40:15.086 3664.058 - 3678.953: 99.3954% ( 10) 00:40:15.086 3678.953 - 3693.847: 99.4086% ( 10) 00:40:15.086 3693.847 - 3708.742: 99.4192% ( 8) 00:40:15.086 3708.742 - 3723.636: 99.4324% ( 10) 
00:40:15.086 3723.636 - 3738.531: 99.4470% ( 11) 00:40:15.086 3738.531 - 3753.425: 99.4549% ( 6) 00:40:15.086 3753.425 - 3768.320: 99.4655% ( 8) 00:40:15.086 3768.320 - 3783.215: 99.4761% ( 8) 00:40:15.086 3783.215 - 3798.109: 99.4880% ( 9) 00:40:15.086 3798.109 - 3813.004: 99.5039% ( 12) 00:40:15.086 3813.004 - 3842.793: 99.5290% ( 19) 00:40:15.086 3842.793 - 3872.582: 99.5528% ( 18) 00:40:15.086 3872.582 - 3902.371: 99.5753% ( 17) 00:40:15.086 3902.371 - 3932.160: 99.5938% ( 14) 00:40:15.086 3932.160 - 3961.949: 99.6124% ( 14) 00:40:15.086 3961.949 - 3991.738: 99.6322% ( 15) 00:40:15.086 3991.738 - 4021.527: 99.6494% ( 13) 00:40:15.086 4021.527 - 4051.316: 99.6653% ( 12) 00:40:15.086 4051.316 - 4081.105: 99.6825% ( 13) 00:40:15.086 4081.105 - 4110.895: 99.6997% ( 13) 00:40:15.086 4110.895 - 4140.684: 99.7142% ( 11) 00:40:15.086 4140.684 - 4170.473: 99.7275% ( 10) 00:40:15.086 4170.473 - 4200.262: 99.7380% ( 8) 00:40:15.086 4200.262 - 4230.051: 99.7473% ( 7) 00:40:15.086 4230.051 - 4259.840: 99.7566% ( 7) 00:40:15.086 4259.840 - 4289.629: 99.7632% ( 5) 00:40:15.086 4289.629 - 4319.418: 99.7671% ( 3) 00:40:15.086 4319.418 - 4349.207: 99.7711% ( 3) 00:40:15.086 4349.207 - 4378.996: 99.7777% ( 5) 00:40:15.086 4378.996 - 4408.785: 99.7830% ( 4) 00:40:15.086 4408.785 - 4438.575: 99.7910% ( 6) 00:40:15.086 4438.575 - 4468.364: 99.7963% ( 4) 00:40:15.086 4468.364 - 4498.153: 99.8015% ( 4) 00:40:15.086 4498.153 - 4527.942: 99.8055% ( 3) 00:40:15.086 4527.942 - 4557.731: 99.8082% ( 2) 00:40:15.086 4557.731 - 4587.520: 99.8108% ( 2) 00:40:15.086 4587.520 - 4617.309: 99.8161% ( 4) 00:40:15.086 4617.309 - 4647.098: 99.8214% ( 4) 00:40:15.086 4647.098 - 4676.887: 99.8240% ( 2) 00:40:15.086 4676.887 - 4706.676: 99.8267% ( 2) 00:40:15.086 4706.676 - 4736.465: 99.8280% ( 1) 00:40:15.086 4736.465 - 4766.255: 99.8293% ( 1) 00:40:15.086 4766.255 - 4796.044: 99.8307% ( 1) 00:40:15.086 17158.516 - 17277.673: 99.8373% ( 5) 00:40:15.086 17277.673 - 17396.829: 99.8492% ( 9) 00:40:15.086 17396.829 - 17515.985: 99.8598% ( 8) 00:40:15.086 17515.985 - 17635.142: 99.8664% ( 5) 00:40:15.086 17635.142 - 17754.298: 99.8783% ( 9) 00:40:15.086 17754.298 - 17873.455: 99.8875% ( 7) 00:40:15.086 17873.455 - 17992.611: 99.8981% ( 8) 00:40:15.086 17992.611 - 18111.767: 99.9074% ( 7) 00:40:15.086 18111.767 - 18230.924: 99.9153% ( 6) 00:40:15.086 18230.924 - 18350.080: 99.9246% ( 7) 00:40:15.086 18350.080 - 18469.236: 99.9352% ( 8) 00:40:15.086 18469.236 - 18588.393: 99.9458% ( 8) 00:40:15.086 18588.393 - 18707.549: 99.9524% ( 5) 00:40:15.086 18707.549 - 18826.705: 99.9590% ( 5) 00:40:15.086 18826.705 - 18945.862: 99.9696% ( 8) 00:40:15.086 18945.862 - 19065.018: 99.9788% ( 7) 00:40:15.086 19065.018 - 19184.175: 99.9881% ( 7) 00:40:15.086 19184.175 - 19303.331: 99.9960% ( 6) 00:40:15.086 19303.331 - 19422.487: 99.9987% ( 2) 00:40:15.086 19422.487 - 19541.644: 100.0000% ( 1) 00:40:15.086 00:40:15.086 12:22:13 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:40:16.454 Initializing NVMe Controllers 00:40:16.454 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:16.454 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:40:16.454 Initialization complete. Launching workers. 
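The write pass launched above (nvme.sh@23) mirrors the read pass from nvme.sh@22: queue depth 128 (-q), 12288-byte I/Os (-o), a 1-second run (-t), and -LL, which is what produces the latency summary and per-range histogram blocks seen in this log; only the -w workload differs (plus -N on the read pass). A minimal sketch for reproducing both passes by hand against the same emulated controller, using the binary path and flags exactly as this job passes them:

  # read pass: QD 128, 12 KiB I/Os, 1 s, latency tracking/histogram output
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
  # write pass: same geometry, write workload
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0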
00:40:16.454 ======================================================== 00:40:16.454 Latency(us) 00:40:16.454 Device Information : IOPS MiB/s Average min max 00:40:16.454 PCIE (0000:00:10.0) NSID 1 from core 0: 83108.18 973.92 1539.72 495.48 10714.86 00:40:16.454 ======================================================== 00:40:16.454 Total : 83108.18 973.92 1539.72 495.48 10714.86 00:40:16.454 00:40:16.454 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:40:16.454 ================================================================================= 00:40:16.454 1.00000% : 983.040us 00:40:16.454 10.00000% : 1199.011us 00:40:16.454 25.00000% : 1333.062us 00:40:16.454 50.00000% : 1496.902us 00:40:16.454 75.00000% : 1690.531us 00:40:16.454 90.00000% : 1921.396us 00:40:16.454 95.00000% : 2055.447us 00:40:16.454 98.00000% : 2234.182us 00:40:16.454 99.00000% : 2398.022us 00:40:16.454 99.50000% : 2710.807us 00:40:16.454 99.90000% : 9413.353us 00:40:16.454 99.99000% : 10426.182us 00:40:16.454 99.99900% : 10724.073us 00:40:16.454 99.99990% : 10724.073us 00:40:16.454 99.99999% : 10724.073us 00:40:16.454 00:40:16.454 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:40:16.454 ============================================================================== 00:40:16.454 Range in us Cumulative IO count 00:40:16.454 495.244 - 498.967: 0.0012% ( 1) 00:40:16.454 528.756 - 532.480: 0.0024% ( 1) 00:40:16.454 536.204 - 539.927: 0.0036% ( 1) 00:40:16.454 551.098 - 554.822: 0.0048% ( 1) 00:40:16.454 577.164 - 580.887: 0.0060% ( 1) 00:40:16.454 588.335 - 592.058: 0.0084% ( 2) 00:40:16.454 610.676 - 614.400: 0.0096% ( 1) 00:40:16.454 614.400 - 618.124: 0.0120% ( 2) 00:40:16.454 621.847 - 625.571: 0.0132% ( 1) 00:40:16.454 629.295 - 633.018: 0.0144% ( 1) 00:40:16.454 633.018 - 636.742: 0.0180% ( 3) 00:40:16.454 636.742 - 640.465: 0.0192% ( 1) 00:40:16.454 644.189 - 647.913: 0.0204% ( 1) 00:40:16.454 651.636 - 655.360: 0.0241% ( 3) 00:40:16.454 655.360 - 659.084: 0.0253% ( 1) 00:40:16.454 666.531 - 670.255: 0.0265% ( 1) 00:40:16.454 670.255 - 673.978: 0.0277% ( 1) 00:40:16.454 673.978 - 677.702: 0.0289% ( 1) 00:40:16.454 677.702 - 681.425: 0.0313% ( 2) 00:40:16.454 681.425 - 685.149: 0.0325% ( 1) 00:40:16.454 688.873 - 692.596: 0.0349% ( 2) 00:40:16.454 692.596 - 696.320: 0.0373% ( 2) 00:40:16.454 696.320 - 700.044: 0.0397% ( 2) 00:40:16.454 700.044 - 703.767: 0.0433% ( 3) 00:40:16.454 703.767 - 707.491: 0.0445% ( 1) 00:40:16.454 707.491 - 711.215: 0.0481% ( 3) 00:40:16.454 714.938 - 718.662: 0.0493% ( 1) 00:40:16.454 722.385 - 726.109: 0.0505% ( 1) 00:40:16.454 726.109 - 729.833: 0.0577% ( 6) 00:40:16.454 729.833 - 733.556: 0.0637% ( 5) 00:40:16.454 733.556 - 737.280: 0.0662% ( 2) 00:40:16.454 737.280 - 741.004: 0.0698% ( 3) 00:40:16.454 744.727 - 748.451: 0.0734% ( 3) 00:40:16.454 748.451 - 752.175: 0.0758% ( 2) 00:40:16.454 752.175 - 755.898: 0.0782% ( 2) 00:40:16.454 755.898 - 759.622: 0.0806% ( 2) 00:40:16.454 759.622 - 763.345: 0.0854% ( 4) 00:40:16.454 763.345 - 767.069: 0.0902% ( 4) 00:40:16.454 767.069 - 770.793: 0.0926% ( 2) 00:40:16.454 770.793 - 774.516: 0.0962% ( 3) 00:40:16.454 774.516 - 778.240: 0.0974% ( 1) 00:40:16.454 778.240 - 781.964: 0.1010% ( 3) 00:40:16.454 781.964 - 785.687: 0.1058% ( 4) 00:40:16.454 785.687 - 789.411: 0.1070% ( 1) 00:40:16.454 789.411 - 793.135: 0.1119% ( 4) 00:40:16.454 793.135 - 796.858: 0.1155% ( 3) 00:40:16.454 796.858 - 800.582: 0.1191% ( 3) 00:40:16.454 800.582 - 804.305: 0.1227% ( 3) 00:40:16.454 804.305 - 808.029: 0.1263% ( 3) 00:40:16.454 
808.029 - 811.753: 0.1323% ( 5) 00:40:16.454 811.753 - 815.476: 0.1395% ( 6) 00:40:16.454 815.476 - 819.200: 0.1443% ( 4) 00:40:16.454 819.200 - 822.924: 0.1491% ( 4) 00:40:16.454 822.924 - 826.647: 0.1564% ( 6) 00:40:16.454 826.647 - 830.371: 0.1624% ( 5) 00:40:16.454 830.371 - 834.095: 0.1696% ( 6) 00:40:16.454 834.095 - 837.818: 0.1756% ( 5) 00:40:16.454 837.818 - 841.542: 0.1852% ( 8) 00:40:16.454 841.542 - 845.265: 0.1912% ( 5) 00:40:16.454 845.265 - 848.989: 0.1948% ( 3) 00:40:16.454 848.989 - 852.713: 0.2045% ( 8) 00:40:16.454 852.713 - 856.436: 0.2177% ( 11) 00:40:16.454 856.436 - 860.160: 0.2297% ( 10) 00:40:16.454 860.160 - 863.884: 0.2357% ( 5) 00:40:16.454 863.884 - 867.607: 0.2442% ( 7) 00:40:16.454 867.607 - 871.331: 0.2598% ( 13) 00:40:16.454 871.331 - 875.055: 0.2706% ( 9) 00:40:16.454 875.055 - 878.778: 0.2838% ( 11) 00:40:16.454 878.778 - 882.502: 0.2983% ( 12) 00:40:16.454 882.502 - 886.225: 0.3043% ( 5) 00:40:16.454 886.225 - 889.949: 0.3163% ( 10) 00:40:16.454 889.949 - 893.673: 0.3320% ( 13) 00:40:16.454 893.673 - 897.396: 0.3572% ( 21) 00:40:16.454 897.396 - 901.120: 0.3729% ( 13) 00:40:16.454 901.120 - 904.844: 0.3861% ( 11) 00:40:16.454 904.844 - 908.567: 0.4017% ( 13) 00:40:16.454 908.567 - 912.291: 0.4222% ( 17) 00:40:16.454 912.291 - 916.015: 0.4402% ( 15) 00:40:16.454 916.015 - 919.738: 0.4619% ( 18) 00:40:16.454 919.738 - 923.462: 0.4943% ( 27) 00:40:16.454 923.462 - 927.185: 0.5328% ( 32) 00:40:16.454 927.185 - 930.909: 0.5533% ( 17) 00:40:16.454 930.909 - 934.633: 0.5749% ( 18) 00:40:16.454 934.633 - 938.356: 0.6086% ( 28) 00:40:16.454 938.356 - 942.080: 0.6387% ( 25) 00:40:16.454 942.080 - 945.804: 0.6747% ( 30) 00:40:16.454 945.804 - 949.527: 0.6988% ( 20) 00:40:16.454 949.527 - 953.251: 0.7361% ( 31) 00:40:16.454 953.251 - 960.698: 0.7926% ( 47) 00:40:16.455 960.698 - 968.145: 0.8816% ( 74) 00:40:16.455 968.145 - 975.593: 0.9778% ( 80) 00:40:16.455 975.593 - 983.040: 1.0969% ( 99) 00:40:16.455 983.040 - 990.487: 1.1931% ( 80) 00:40:16.455 990.487 - 997.935: 1.3206% ( 106) 00:40:16.455 997.935 - 1005.382: 1.4337% ( 94) 00:40:16.455 1005.382 - 1012.829: 1.5672% ( 111) 00:40:16.455 1012.829 - 1020.276: 1.7091% ( 118) 00:40:16.455 1020.276 - 1027.724: 1.8691% ( 133) 00:40:16.455 1027.724 - 1035.171: 2.0459% ( 147) 00:40:16.455 1035.171 - 1042.618: 2.2287% ( 152) 00:40:16.455 1042.618 - 1050.065: 2.4103% ( 151) 00:40:16.455 1050.065 - 1057.513: 2.6340% ( 186) 00:40:16.455 1057.513 - 1064.960: 2.8361% ( 168) 00:40:16.455 1064.960 - 1072.407: 3.0814% ( 204) 00:40:16.455 1072.407 - 1079.855: 3.3160% ( 195) 00:40:16.455 1079.855 - 1087.302: 3.5962% ( 233) 00:40:16.455 1087.302 - 1094.749: 3.8680% ( 226) 00:40:16.455 1094.749 - 1102.196: 4.1988% ( 275) 00:40:16.455 1102.196 - 1109.644: 4.4959% ( 247) 00:40:16.455 1109.644 - 1117.091: 4.8194% ( 269) 00:40:16.455 1117.091 - 1124.538: 5.1838% ( 303) 00:40:16.455 1124.538 - 1131.985: 5.5663% ( 318) 00:40:16.455 1131.985 - 1139.433: 5.9813% ( 345) 00:40:16.455 1139.433 - 1146.880: 6.4106% ( 357) 00:40:16.455 1146.880 - 1154.327: 6.8941% ( 402) 00:40:16.455 1154.327 - 1161.775: 7.4065% ( 426) 00:40:16.455 1161.775 - 1169.222: 7.9201% ( 427) 00:40:16.455 1169.222 - 1176.669: 8.4457% ( 437) 00:40:16.455 1176.669 - 1184.116: 9.0290% ( 485) 00:40:16.455 1184.116 - 1191.564: 9.6051% ( 479) 00:40:16.455 1191.564 - 1199.011: 10.2053% ( 499) 00:40:16.455 1199.011 - 1206.458: 10.8404% ( 528) 00:40:16.455 1206.458 - 1213.905: 11.5175% ( 563) 00:40:16.455 1213.905 - 1221.353: 12.2440% ( 604) 00:40:16.455 1221.353 - 1228.800: 
12.8959% ( 542) 00:40:16.455 1228.800 - 1236.247: 13.6392% ( 618) 00:40:16.455 1236.247 - 1243.695: 14.4690% ( 690) 00:40:16.455 1243.695 - 1251.142: 15.2953% ( 687) 00:40:16.455 1251.142 - 1258.589: 16.1397% ( 702) 00:40:16.455 1258.589 - 1266.036: 17.0297% ( 740) 00:40:16.455 1266.036 - 1273.484: 17.8728% ( 701) 00:40:16.455 1273.484 - 1280.931: 18.7653% ( 742) 00:40:16.455 1280.931 - 1288.378: 19.7010% ( 778) 00:40:16.455 1288.378 - 1295.825: 20.5970% ( 745) 00:40:16.455 1295.825 - 1303.273: 21.5713% ( 810) 00:40:16.455 1303.273 - 1310.720: 22.5455% ( 810) 00:40:16.455 1310.720 - 1318.167: 23.5041% ( 797) 00:40:16.455 1318.167 - 1325.615: 24.4988% ( 827) 00:40:16.455 1325.615 - 1333.062: 25.5127% ( 843) 00:40:16.455 1333.062 - 1340.509: 26.6553% ( 950) 00:40:16.455 1340.509 - 1347.956: 27.7654% ( 923) 00:40:16.455 1347.956 - 1355.404: 29.0091% ( 1034) 00:40:16.455 1355.404 - 1362.851: 30.1384% ( 939) 00:40:16.455 1362.851 - 1370.298: 31.2847% ( 953) 00:40:16.455 1370.298 - 1377.745: 32.4188% ( 943) 00:40:16.455 1377.745 - 1385.193: 33.5362% ( 929) 00:40:16.455 1385.193 - 1392.640: 34.7426% ( 1003) 00:40:16.455 1392.640 - 1400.087: 35.8479% ( 919) 00:40:16.455 1400.087 - 1407.535: 36.9881% ( 948) 00:40:16.455 1407.535 - 1414.982: 38.0188% ( 857) 00:40:16.455 1414.982 - 1422.429: 39.1374% ( 930) 00:40:16.455 1422.429 - 1429.876: 40.2391% ( 916) 00:40:16.455 1429.876 - 1437.324: 41.3444% ( 919) 00:40:16.455 1437.324 - 1444.771: 42.5893% ( 1035) 00:40:16.455 1444.771 - 1452.218: 43.8016% ( 1008) 00:40:16.455 1452.218 - 1459.665: 44.8889% ( 904) 00:40:16.455 1459.665 - 1467.113: 46.1121% ( 1017) 00:40:16.455 1467.113 - 1474.560: 47.3016% ( 989) 00:40:16.455 1474.560 - 1482.007: 48.5296% ( 1021) 00:40:16.455 1482.007 - 1489.455: 49.5989% ( 889) 00:40:16.455 1489.455 - 1496.902: 50.6537% ( 877) 00:40:16.455 1496.902 - 1504.349: 51.7265% ( 892) 00:40:16.455 1504.349 - 1511.796: 52.8295% ( 917) 00:40:16.455 1511.796 - 1519.244: 53.9661% ( 945) 00:40:16.455 1519.244 - 1526.691: 54.9848% ( 847) 00:40:16.455 1526.691 - 1534.138: 56.0216% ( 862) 00:40:16.455 1534.138 - 1541.585: 57.0716% ( 873) 00:40:16.455 1541.585 - 1549.033: 58.0915% ( 848) 00:40:16.455 1549.033 - 1556.480: 59.0260% ( 777) 00:40:16.455 1556.480 - 1563.927: 60.0471% ( 849) 00:40:16.455 1563.927 - 1571.375: 61.1224% ( 894) 00:40:16.455 1571.375 - 1578.822: 62.1712% ( 872) 00:40:16.455 1578.822 - 1586.269: 63.1202% ( 789) 00:40:16.455 1586.269 - 1593.716: 64.1245% ( 835) 00:40:16.455 1593.716 - 1601.164: 65.1612% ( 862) 00:40:16.455 1601.164 - 1608.611: 66.1619% ( 832) 00:40:16.455 1608.611 - 1616.058: 67.1758% ( 843) 00:40:16.455 1616.058 - 1623.505: 68.2258% ( 873) 00:40:16.455 1623.505 - 1630.953: 69.0329% ( 671) 00:40:16.455 1630.953 - 1638.400: 69.8628% ( 690) 00:40:16.455 1638.400 - 1645.847: 70.6602% ( 663) 00:40:16.455 1645.847 - 1653.295: 71.4985% ( 697) 00:40:16.455 1653.295 - 1660.742: 72.2298% ( 608) 00:40:16.455 1660.742 - 1668.189: 73.0633% ( 693) 00:40:16.455 1668.189 - 1675.636: 73.8559% ( 659) 00:40:16.455 1675.636 - 1683.084: 74.5318% ( 562) 00:40:16.455 1683.084 - 1690.531: 75.2823% ( 624) 00:40:16.455 1690.531 - 1697.978: 76.0461% ( 635) 00:40:16.455 1697.978 - 1705.425: 76.7858% ( 615) 00:40:16.455 1705.425 - 1712.873: 77.4858% ( 582) 00:40:16.455 1712.873 - 1720.320: 78.1762% ( 574) 00:40:16.455 1720.320 - 1727.767: 78.7896% ( 510) 00:40:16.455 1727.767 - 1735.215: 79.3753% ( 487) 00:40:16.455 1735.215 - 1742.662: 79.9418% ( 471) 00:40:16.455 1742.662 - 1750.109: 80.5131% ( 475) 00:40:16.455 1750.109 - 
1757.556: 81.1674% ( 544) 00:40:16.455 1757.556 - 1765.004: 81.7206% ( 460) 00:40:16.455 1765.004 - 1772.451: 82.2150% ( 411) 00:40:16.455 1772.451 - 1779.898: 82.7394% ( 436) 00:40:16.455 1779.898 - 1787.345: 83.1772% ( 364) 00:40:16.455 1787.345 - 1794.793: 83.6318% ( 378) 00:40:16.455 1794.793 - 1802.240: 84.0732% ( 367) 00:40:16.455 1802.240 - 1809.687: 84.5243% ( 375) 00:40:16.455 1809.687 - 1817.135: 84.9861% ( 384) 00:40:16.455 1817.135 - 1824.582: 85.4720% ( 404) 00:40:16.455 1824.582 - 1832.029: 85.9483% ( 396) 00:40:16.455 1832.029 - 1839.476: 86.3657% ( 347) 00:40:16.455 1839.476 - 1846.924: 86.7397% ( 311) 00:40:16.455 1846.924 - 1854.371: 87.0993% ( 299) 00:40:16.455 1854.371 - 1861.818: 87.5059% ( 338) 00:40:16.455 1861.818 - 1869.265: 87.9136% ( 339) 00:40:16.455 1869.265 - 1876.713: 88.3129% ( 332) 00:40:16.455 1876.713 - 1884.160: 88.7423% ( 357) 00:40:16.455 1884.160 - 1891.607: 89.1296% ( 322) 00:40:16.455 1891.607 - 1899.055: 89.4772% ( 289) 00:40:16.455 1899.055 - 1906.502: 89.8452% ( 306) 00:40:16.455 1906.502 - 1921.396: 90.5584% ( 593) 00:40:16.455 1921.396 - 1936.291: 91.1646% ( 504) 00:40:16.455 1936.291 - 1951.185: 91.7395% ( 478) 00:40:16.455 1951.185 - 1966.080: 92.2639% ( 436) 00:40:16.455 1966.080 - 1980.975: 92.8521% ( 489) 00:40:16.455 1980.975 - 1995.869: 93.3344% ( 401) 00:40:16.455 1995.869 - 2010.764: 93.7662% ( 359) 00:40:16.455 2010.764 - 2025.658: 94.1727% ( 338) 00:40:16.455 2025.658 - 2040.553: 94.5792% ( 338) 00:40:16.455 2040.553 - 2055.447: 95.0603% ( 400) 00:40:16.455 2055.447 - 2070.342: 95.4163% ( 296) 00:40:16.455 2070.342 - 2085.236: 95.7110% ( 245) 00:40:16.455 2085.236 - 2100.131: 96.0297% ( 265) 00:40:16.455 2100.131 - 2115.025: 96.3713% ( 284) 00:40:16.455 2115.025 - 2129.920: 96.6491% ( 231) 00:40:16.455 2129.920 - 2144.815: 96.9655% ( 263) 00:40:16.455 2144.815 - 2159.709: 97.2265% ( 217) 00:40:16.455 2159.709 - 2174.604: 97.4225% ( 163) 00:40:16.455 2174.604 - 2189.498: 97.6210% ( 165) 00:40:16.455 2189.498 - 2204.393: 97.8074% ( 155) 00:40:16.455 2204.393 - 2219.287: 97.9541% ( 122) 00:40:16.455 2219.287 - 2234.182: 98.1129% ( 132) 00:40:16.455 2234.182 - 2249.076: 98.2404% ( 106) 00:40:16.455 2249.076 - 2263.971: 98.3498% ( 91) 00:40:16.455 2263.971 - 2278.865: 98.4593% ( 91) 00:40:16.455 2278.865 - 2293.760: 98.5507% ( 76) 00:40:16.455 2293.760 - 2308.655: 98.6421% ( 76) 00:40:16.455 2308.655 - 2323.549: 98.7443% ( 85) 00:40:16.455 2323.549 - 2338.444: 98.8177% ( 61) 00:40:16.455 2338.444 - 2353.338: 98.8790% ( 51) 00:40:16.455 2353.338 - 2368.233: 98.9296% ( 42) 00:40:16.455 2368.233 - 2383.127: 98.9765% ( 39) 00:40:16.455 2383.127 - 2398.022: 99.0270% ( 42) 00:40:16.455 2398.022 - 2412.916: 99.0667% ( 33) 00:40:16.455 2412.916 - 2427.811: 99.1064% ( 33) 00:40:16.455 2427.811 - 2442.705: 99.1497% ( 36) 00:40:16.455 2442.705 - 2457.600: 99.1845% ( 29) 00:40:16.455 2457.600 - 2472.495: 99.2158% ( 26) 00:40:16.455 2472.495 - 2487.389: 99.2495% ( 28) 00:40:16.455 2487.389 - 2502.284: 99.2771% ( 23) 00:40:16.455 2502.284 - 2517.178: 99.3012% ( 20) 00:40:16.455 2517.178 - 2532.073: 99.3229% ( 18) 00:40:16.455 2532.073 - 2546.967: 99.3445% ( 18) 00:40:16.455 2546.967 - 2561.862: 99.3649% ( 17) 00:40:16.455 2561.862 - 2576.756: 99.3830% ( 15) 00:40:16.455 2576.756 - 2591.651: 99.4022% ( 16) 00:40:16.455 2591.651 - 2606.545: 99.4155% ( 11) 00:40:16.455 2606.545 - 2621.440: 99.4323% ( 14) 00:40:16.455 2621.440 - 2636.335: 99.4431% ( 9) 00:40:16.455 2636.335 - 2651.229: 99.4564% ( 11) 00:40:16.455 2651.229 - 2666.124: 99.4684% ( 10) 
00:40:16.455 2666.124 - 2681.018: 99.4852% ( 14) 00:40:16.455 2681.018 - 2695.913: 99.4973% ( 10) 00:40:16.455 2695.913 - 2710.807: 99.5057% ( 7) 00:40:16.455 2710.807 - 2725.702: 99.5189% ( 11) 00:40:16.455 2725.702 - 2740.596: 99.5285% ( 8) 00:40:16.455 2740.596 - 2755.491: 99.5478% ( 16) 00:40:16.455 2755.491 - 2770.385: 99.5622% ( 12) 00:40:16.455 2770.385 - 2785.280: 99.5730% ( 9) 00:40:16.455 2785.280 - 2800.175: 99.5838% ( 9) 00:40:16.455 2800.175 - 2815.069: 99.6031% ( 16) 00:40:16.455 2815.069 - 2829.964: 99.6223% ( 16) 00:40:16.455 2829.964 - 2844.858: 99.6344% ( 10) 00:40:16.455 2844.858 - 2859.753: 99.6464% ( 10) 00:40:16.455 2859.753 - 2874.647: 99.6524% ( 5) 00:40:16.455 2874.647 - 2889.542: 99.6608% ( 7) 00:40:16.456 2889.542 - 2904.436: 99.6680% ( 6) 00:40:16.456 2904.436 - 2919.331: 99.6753% ( 6) 00:40:16.456 2919.331 - 2934.225: 99.6813% ( 5) 00:40:16.456 2934.225 - 2949.120: 99.6861% ( 4) 00:40:16.456 2949.120 - 2964.015: 99.6921% ( 5) 00:40:16.456 2964.015 - 2978.909: 99.6933% ( 1) 00:40:16.456 2978.909 - 2993.804: 99.6969% ( 3) 00:40:16.456 2993.804 - 3008.698: 99.7005% ( 3) 00:40:16.456 3008.698 - 3023.593: 99.7017% ( 1) 00:40:16.456 3023.593 - 3038.487: 99.7077% ( 5) 00:40:16.456 3038.487 - 3053.382: 99.7113% ( 3) 00:40:16.456 3053.382 - 3068.276: 99.7125% ( 1) 00:40:16.456 3068.276 - 3083.171: 99.7137% ( 1) 00:40:16.456 3083.171 - 3098.065: 99.7162% ( 2) 00:40:16.456 3098.065 - 3112.960: 99.7174% ( 1) 00:40:16.456 3112.960 - 3127.855: 99.7198% ( 2) 00:40:16.456 3142.749 - 3157.644: 99.7222% ( 2) 00:40:16.456 3157.644 - 3172.538: 99.7234% ( 1) 00:40:16.456 3172.538 - 3187.433: 99.7294% ( 5) 00:40:16.456 3187.433 - 3202.327: 99.7330% ( 3) 00:40:16.456 3202.327 - 3217.222: 99.7354% ( 2) 00:40:16.456 3217.222 - 3232.116: 99.7378% ( 2) 00:40:16.456 3232.116 - 3247.011: 99.7402% ( 2) 00:40:16.456 3247.011 - 3261.905: 99.7426% ( 2) 00:40:16.456 3261.905 - 3276.800: 99.7474% ( 4) 00:40:16.456 3276.800 - 3291.695: 99.7498% ( 2) 00:40:16.456 3291.695 - 3306.589: 99.7546% ( 4) 00:40:16.456 3306.589 - 3321.484: 99.7582% ( 3) 00:40:16.456 3321.484 - 3336.378: 99.7655% ( 6) 00:40:16.456 3336.378 - 3351.273: 99.7739% ( 7) 00:40:16.456 3351.273 - 3366.167: 99.7775% ( 3) 00:40:16.456 3366.167 - 3381.062: 99.7811% ( 3) 00:40:16.456 3381.062 - 3395.956: 99.7859% ( 4) 00:40:16.456 3395.956 - 3410.851: 99.7871% ( 1) 00:40:16.456 3410.851 - 3425.745: 99.7883% ( 1) 00:40:16.456 3425.745 - 3440.640: 99.7907% ( 2) 00:40:16.456 3440.640 - 3455.535: 99.7919% ( 1) 00:40:16.456 3455.535 - 3470.429: 99.7931% ( 1) 00:40:16.456 3470.429 - 3485.324: 99.7955% ( 2) 00:40:16.456 3500.218 - 3515.113: 99.7967% ( 1) 00:40:16.456 3515.113 - 3530.007: 99.7979% ( 1) 00:40:16.456 3530.007 - 3544.902: 99.7991% ( 1) 00:40:16.456 3559.796 - 3574.691: 99.8015% ( 2) 00:40:16.456 3574.691 - 3589.585: 99.8027% ( 1) 00:40:16.456 3589.585 - 3604.480: 99.8040% ( 1) 00:40:16.456 3723.636 - 3738.531: 99.8052% ( 1) 00:40:16.456 3738.531 - 3753.425: 99.8064% ( 1) 00:40:16.456 3783.215 - 3798.109: 99.8088% ( 2) 00:40:16.456 3932.160 - 3961.949: 99.8112% ( 2) 00:40:16.456 3961.949 - 3991.738: 99.8124% ( 1) 00:40:16.456 3991.738 - 4021.527: 99.8136% ( 1) 00:40:16.456 4081.105 - 4110.895: 99.8160% ( 2) 00:40:16.456 4200.262 - 4230.051: 99.8184% ( 2) 00:40:16.456 4319.418 - 4349.207: 99.8196% ( 1) 00:40:16.456 4468.364 - 4498.153: 99.8208% ( 1) 00:40:16.456 4647.098 - 4676.887: 99.8232% ( 2) 00:40:16.456 4736.465 - 4766.255: 99.8244% ( 1) 00:40:16.456 4796.044 - 4825.833: 99.8268% ( 2) 00:40:16.456 4944.989 - 4974.778: 
99.8280% ( 1) 00:40:16.456 5004.567 - 5034.356: 99.8304% ( 2) 00:40:16.456 5093.935 - 5123.724: 99.8316% ( 1) 00:40:16.456 5153.513 - 5183.302: 99.8328% ( 1) 00:40:16.456 5213.091 - 5242.880: 99.8340% ( 1) 00:40:16.456 5421.615 - 5451.404: 99.8352% ( 1) 00:40:16.456 5481.193 - 5510.982: 99.8364% ( 1) 00:40:16.456 5510.982 - 5540.771: 99.8376% ( 1) 00:40:16.456 5630.138 - 5659.927: 99.8388% ( 1) 00:40:16.456 5957.818 - 5987.607: 99.8400% ( 1) 00:40:16.456 6166.342 - 6196.131: 99.8412% ( 1) 00:40:16.456 6196.131 - 6225.920: 99.8436% ( 2) 00:40:16.456 6345.076 - 6374.865: 99.8448% ( 1) 00:40:16.456 6553.600 - 6583.389: 99.8473% ( 2) 00:40:16.456 6851.491 - 6881.280: 99.8485% ( 1) 00:40:16.456 7089.804 - 7119.593: 99.8497% ( 1) 00:40:16.456 7149.382 - 7179.171: 99.8509% ( 1) 00:40:16.456 7179.171 - 7208.960: 99.8533% ( 2) 00:40:16.456 7387.695 - 7417.484: 99.8545% ( 1) 00:40:16.456 7566.429 - 7596.218: 99.8569% ( 2) 00:40:16.456 7596.218 - 7626.007: 99.8581% ( 1) 00:40:16.456 7685.585 - 7745.164: 99.8605% ( 2) 00:40:16.456 7745.164 - 7804.742: 99.8617% ( 1) 00:40:16.456 7983.476 - 8043.055: 99.8629% ( 1) 00:40:16.456 8400.524 - 8460.102: 99.8641% ( 1) 00:40:16.456 8519.680 - 8579.258: 99.8665% ( 2) 00:40:16.456 8638.836 - 8698.415: 99.8677% ( 1) 00:40:16.456 8757.993 - 8817.571: 99.8689% ( 1) 00:40:16.456 8817.571 - 8877.149: 99.8701% ( 1) 00:40:16.456 8877.149 - 8936.727: 99.8713% ( 1) 00:40:16.456 8936.727 - 8996.305: 99.8737% ( 2) 00:40:16.456 8996.305 - 9055.884: 99.8761% ( 2) 00:40:16.456 9055.884 - 9115.462: 99.8797% ( 3) 00:40:16.456 9115.462 - 9175.040: 99.8833% ( 3) 00:40:16.456 9175.040 - 9234.618: 99.8845% ( 1) 00:40:16.456 9234.618 - 9294.196: 99.8906% ( 5) 00:40:16.456 9294.196 - 9353.775: 99.8942% ( 3) 00:40:16.456 9353.775 - 9413.353: 99.9050% ( 9) 00:40:16.456 9413.353 - 9472.931: 99.9134% ( 7) 00:40:16.456 9472.931 - 9532.509: 99.9314% ( 15) 00:40:16.456 9532.509 - 9592.087: 99.9387% ( 6) 00:40:16.456 9592.087 - 9651.665: 99.9435% ( 4) 00:40:16.456 9651.665 - 9711.244: 99.9471% ( 3) 00:40:16.456 9711.244 - 9770.822: 99.9543% ( 6) 00:40:16.456 9770.822 - 9830.400: 99.9579% ( 3) 00:40:16.456 9830.400 - 9889.978: 99.9627% ( 4) 00:40:16.456 9889.978 - 9949.556: 99.9651% ( 2) 00:40:16.456 9949.556 - 10009.135: 99.9663% ( 1) 00:40:16.456 10009.135 - 10068.713: 99.9675% ( 1) 00:40:16.456 10068.713 - 10128.291: 99.9711% ( 3) 00:40:16.456 10128.291 - 10187.869: 99.9723% ( 1) 00:40:16.456 10187.869 - 10247.447: 99.9735% ( 1) 00:40:16.456 10247.447 - 10307.025: 99.9808% ( 6) 00:40:16.456 10307.025 - 10366.604: 99.9844% ( 3) 00:40:16.456 10366.604 - 10426.182: 99.9904% ( 5) 00:40:16.456 10426.182 - 10485.760: 99.9916% ( 1) 00:40:16.456 10485.760 - 10545.338: 99.9952% ( 3) 00:40:16.456 10545.338 - 10604.916: 99.9976% ( 2) 00:40:16.456 10604.916 - 10664.495: 99.9988% ( 1) 00:40:16.456 10664.495 - 10724.073: 100.0000% ( 1) 00:40:16.456 00:40:16.456 12:22:15 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:40:16.456 00:40:16.456 real 0m2.593s 00:40:16.456 user 0m2.213s 00:40:16.456 sys 0m0.225s 00:40:16.456 12:22:15 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:16.456 ************************************ 00:40:16.456 END TEST nvme_perf 00:40:16.456 ************************************ 00:40:16.456 12:22:15 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:40:16.456 12:22:15 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:40:16.456 12:22:15 nvme -- 
common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:40:16.456 12:22:15 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:16.456 12:22:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:16.456 ************************************ 00:40:16.456 START TEST nvme_hello_world 00:40:16.456 ************************************ 00:40:16.456 12:22:15 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:40:16.713 Initializing NVMe Controllers 00:40:16.713 Attached to 0000:00:10.0 00:40:16.713 Namespace ID: 1 size: 5GB 00:40:16.713 Initialization complete. 00:40:16.713 INFO: using host memory buffer for IO 00:40:16.713 Hello world! 00:40:16.713 00:40:16.713 real 0m0.315s 00:40:16.713 user 0m0.112s 00:40:16.713 sys 0m0.107s 00:40:16.713 12:22:15 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:16.713 12:22:15 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:40:16.713 ************************************ 00:40:16.713 END TEST nvme_hello_world 00:40:16.713 ************************************ 00:40:16.713 12:22:15 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:40:16.713 12:22:15 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:16.713 12:22:15 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:16.713 12:22:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:16.713 ************************************ 00:40:16.713 START TEST nvme_sgl 00:40:16.713 ************************************ 00:40:16.713 12:22:15 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:40:16.971 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:40:16.971 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:40:16.971 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:40:16.971 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:40:16.971 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:40:16.971 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:40:16.971 NVMe Readv/Writev Request test 00:40:16.971 Attached to 0000:00:10.0 00:40:16.971 0000:00:10.0: build_io_request_2 test passed 00:40:16.971 0000:00:10.0: build_io_request_4 test passed 00:40:16.971 0000:00:10.0: build_io_request_5 test passed 00:40:16.971 0000:00:10.0: build_io_request_6 test passed 00:40:16.971 0000:00:10.0: build_io_request_7 test passed 00:40:16.971 0000:00:10.0: build_io_request_10 test passed 00:40:16.971 Cleaning up... 
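Every test in this trace is launched through the same run_test wrapper: nvme.sh hands it a test name plus a command, autotest_common.sh checks the argument count (the '[' 2 -le 1 ']' lines), prints the START TEST / END TEST banners, and times the wrapped binary. A minimal sketch of that wrapper, reduced to just the behaviour visible in this log (the real helper in autotest_common.sh also handles xtrace and return-code bookkeeping):

  run_test_sketch() {
      [ "$#" -le 1 ] && { echo 'usage: run_test <name> <command> [args...]'; return 1; }
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"                 # e.g. /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
      local rc=$?
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }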
00:40:16.971 00:40:16.971 real 0m0.326s 00:40:16.971 user 0m0.149s 00:40:16.971 sys 0m0.107s 00:40:16.971 12:22:15 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:16.971 12:22:15 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:40:16.971 ************************************ 00:40:16.971 END TEST nvme_sgl 00:40:16.971 ************************************ 00:40:17.229 12:22:15 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:40:17.229 12:22:15 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:17.229 12:22:15 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:17.229 12:22:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:17.229 ************************************ 00:40:17.229 START TEST nvme_e2edp 00:40:17.229 ************************************ 00:40:17.229 12:22:15 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:40:17.486 NVMe Write/Read with End-to-End data protection test 00:40:17.486 Attached to 0000:00:10.0 00:40:17.486 Cleaning up... 00:40:17.486 00:40:17.486 real 0m0.288s 00:40:17.486 user 0m0.097s 00:40:17.486 sys 0m0.126s 00:40:17.486 12:22:16 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:17.486 12:22:16 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:40:17.486 ************************************ 00:40:17.486 END TEST nvme_e2edp 00:40:17.486 ************************************ 00:40:17.486 12:22:16 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:40:17.486 12:22:16 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:17.486 12:22:16 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:17.486 12:22:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:17.486 ************************************ 00:40:17.486 START TEST nvme_reserve 00:40:17.486 ************************************ 00:40:17.486 12:22:16 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:40:17.744 ===================================================== 00:40:17.744 NVMe Controller at PCI bus 0, device 16, function 0 00:40:17.744 ===================================================== 00:40:17.744 Reservations: Not Supported 00:40:17.744 Reservation test passed 00:40:17.744 00:40:17.744 real 0m0.314s 00:40:17.744 user 0m0.093s 00:40:17.744 sys 0m0.127s 00:40:17.744 12:22:16 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:17.744 12:22:16 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:40:17.744 ************************************ 00:40:17.744 END TEST nvme_reserve 00:40:17.744 ************************************ 00:40:17.744 12:22:16 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:40:17.744 12:22:16 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:17.744 12:22:16 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:17.744 12:22:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:17.744 ************************************ 00:40:17.744 START TEST nvme_err_injection 00:40:17.744 ************************************ 00:40:17.744 12:22:16 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:40:18.318 
NVMe Error Injection test 00:40:18.318 Attached to 0000:00:10.0 00:40:18.318 0000:00:10.0: get features failed as expected 00:40:18.318 0000:00:10.0: get features successfully as expected 00:40:18.318 0000:00:10.0: read failed as expected 00:40:18.318 0000:00:10.0: read successfully as expected 00:40:18.318 Cleaning up... 00:40:18.318 00:40:18.318 real 0m0.307s 00:40:18.318 user 0m0.088s 00:40:18.318 sys 0m0.142s 00:40:18.318 12:22:16 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:18.318 ************************************ 00:40:18.318 END TEST nvme_err_injection 00:40:18.318 12:22:16 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:40:18.318 ************************************ 00:40:18.318 12:22:16 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:40:18.318 12:22:16 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:40:18.318 12:22:16 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:18.318 12:22:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:18.318 ************************************ 00:40:18.318 START TEST nvme_overhead 00:40:18.318 ************************************ 00:40:18.318 12:22:16 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:40:19.688 Initializing NVMe Controllers 00:40:19.688 Attached to 0000:00:10.0 00:40:19.688 Initialization complete. Launching workers. 00:40:19.688 submit (in ns) avg, min, max = 16276.2, 11582.7, 93821.8 00:40:19.688 complete (in ns) avg, min, max = 10305.3, 7193.6, 126269.1 00:40:19.688 00:40:19.689 Submit histogram 00:40:19.689 ================ 00:40:19.689 Range in us Cumulative Count 00:40:19.689 11.578 - 11.636: 0.0124% ( 1) 00:40:19.689 11.869 - 11.927: 0.0620% ( 4) 00:40:19.689 11.927 - 11.985: 0.8436% ( 63) 00:40:19.689 11.985 - 12.044: 2.7044% ( 150) 00:40:19.689 12.044 - 12.102: 4.8133% ( 170) 00:40:19.689 12.102 - 12.160: 6.3516% ( 124) 00:40:19.689 12.160 - 12.218: 7.1579% ( 65) 00:40:19.689 12.218 - 12.276: 7.4557% ( 24) 00:40:19.689 12.276 - 12.335: 7.6541% ( 16) 00:40:19.689 12.335 - 12.393: 8.0139% ( 29) 00:40:19.689 12.393 - 12.451: 9.3041% ( 104) 00:40:19.689 12.451 - 12.509: 12.2689% ( 239) 00:40:19.689 12.509 - 12.567: 15.9161% ( 294) 00:40:19.689 12.567 - 12.625: 19.4269% ( 283) 00:40:19.689 12.625 - 12.684: 22.5778% ( 254) 00:40:19.689 12.684 - 12.742: 25.6544% ( 248) 00:40:19.689 12.742 - 12.800: 29.5249% ( 312) 00:40:19.689 12.800 - 12.858: 33.7055% ( 337) 00:40:19.689 12.858 - 12.916: 36.8565% ( 254) 00:40:19.689 12.916 - 12.975: 39.5360% ( 216) 00:40:19.689 12.975 - 13.033: 41.7194% ( 176) 00:40:19.689 13.033 - 13.091: 44.0516% ( 188) 00:40:19.689 13.091 - 13.149: 45.8504% ( 145) 00:40:19.689 13.149 - 13.207: 47.6740% ( 147) 00:40:19.689 13.207 - 13.265: 48.8029% ( 91) 00:40:19.689 13.265 - 13.324: 49.6837% ( 71) 00:40:19.689 13.324 - 13.382: 50.2419% ( 45) 00:40:19.689 13.382 - 13.440: 50.9986% ( 61) 00:40:19.689 13.440 - 13.498: 51.7678% ( 62) 00:40:19.689 13.498 - 13.556: 52.3756% ( 49) 00:40:19.689 13.556 - 13.615: 52.7850% ( 33) 00:40:19.689 13.615 - 13.673: 53.0827% ( 24) 00:40:19.689 13.673 - 13.731: 53.3184% ( 19) 00:40:19.689 13.731 - 13.789: 53.4921% ( 14) 00:40:19.689 13.789 - 13.847: 53.5914% ( 8) 00:40:19.689 13.847 - 13.905: 53.7402% ( 12) 00:40:19.689 13.905 - 13.964: 53.8395% ( 8) 00:40:19.689 13.964 - 14.022: 53.9263% ( 7) 00:40:19.689 
14.022 - 14.080: 53.9883% ( 5) 00:40:19.689 14.080 - 14.138: 54.0380% ( 4) 00:40:19.689 14.138 - 14.196: 54.0752% ( 3) 00:40:19.689 14.196 - 14.255: 54.1620% ( 7) 00:40:19.689 14.255 - 14.313: 54.2364% ( 6) 00:40:19.689 14.313 - 14.371: 54.2861% ( 4) 00:40:19.689 14.371 - 14.429: 54.3233% ( 3) 00:40:19.689 14.429 - 14.487: 54.3481% ( 2) 00:40:19.689 14.487 - 14.545: 54.3853% ( 3) 00:40:19.689 14.545 - 14.604: 54.3977% ( 1) 00:40:19.689 14.604 - 14.662: 54.4473% ( 4) 00:40:19.689 14.662 - 14.720: 54.4846% ( 3) 00:40:19.689 14.720 - 14.778: 54.4970% ( 1) 00:40:19.689 14.778 - 14.836: 54.5218% ( 2) 00:40:19.689 14.836 - 14.895: 54.5342% ( 1) 00:40:19.689 14.895 - 15.011: 54.5590% ( 2) 00:40:19.689 15.011 - 15.127: 54.5714% ( 1) 00:40:19.689 15.127 - 15.244: 54.5838% ( 1) 00:40:19.689 15.244 - 15.360: 54.5962% ( 1) 00:40:19.689 15.360 - 15.476: 54.6210% ( 2) 00:40:19.689 15.476 - 15.593: 54.6334% ( 1) 00:40:19.689 15.593 - 15.709: 54.6582% ( 2) 00:40:19.689 15.825 - 15.942: 54.6706% ( 1) 00:40:19.689 15.942 - 16.058: 54.6954% ( 2) 00:40:19.689 16.058 - 16.175: 54.7079% ( 1) 00:40:19.689 16.407 - 16.524: 54.7699% ( 5) 00:40:19.689 16.524 - 16.640: 54.7823% ( 1) 00:40:19.689 16.756 - 16.873: 54.7947% ( 1) 00:40:19.689 16.989 - 17.105: 54.8071% ( 1) 00:40:19.689 17.105 - 17.222: 54.8195% ( 1) 00:40:19.689 17.222 - 17.338: 54.8443% ( 2) 00:40:19.689 17.338 - 17.455: 55.4770% ( 51) 00:40:19.689 17.455 - 17.571: 62.6349% ( 577) 00:40:19.689 17.571 - 17.687: 73.4524% ( 872) 00:40:19.689 17.687 - 17.804: 79.2209% ( 465) 00:40:19.689 17.804 - 17.920: 81.4663% ( 181) 00:40:19.689 17.920 - 18.036: 83.5008% ( 164) 00:40:19.689 18.036 - 18.153: 84.8158% ( 106) 00:40:19.689 18.153 - 18.269: 85.4361% ( 50) 00:40:19.689 18.269 - 18.385: 85.8330% ( 32) 00:40:19.689 18.385 - 18.502: 86.1432% ( 25) 00:40:19.689 18.502 - 18.618: 86.3044% ( 13) 00:40:19.689 18.618 - 18.735: 86.4657% ( 13) 00:40:19.689 18.735 - 18.851: 86.5773% ( 9) 00:40:19.689 18.851 - 18.967: 86.6766% ( 8) 00:40:19.689 18.967 - 19.084: 86.7758% ( 8) 00:40:19.689 19.084 - 19.200: 86.8255% ( 4) 00:40:19.689 19.200 - 19.316: 86.8627% ( 3) 00:40:19.689 19.316 - 19.433: 86.9867% ( 10) 00:40:19.689 19.433 - 19.549: 87.0115% ( 2) 00:40:19.689 19.549 - 19.665: 87.1232% ( 9) 00:40:19.689 19.665 - 19.782: 87.2100% ( 7) 00:40:19.689 19.782 - 19.898: 87.3093% ( 8) 00:40:19.689 19.898 - 20.015: 87.4085% ( 8) 00:40:19.689 20.015 - 20.131: 87.4953% ( 7) 00:40:19.689 20.131 - 20.247: 87.5946% ( 8) 00:40:19.689 20.247 - 20.364: 87.6938% ( 8) 00:40:19.689 20.364 - 20.480: 87.8427% ( 12) 00:40:19.689 20.480 - 20.596: 87.9047% ( 5) 00:40:19.689 20.596 - 20.713: 88.0040% ( 8) 00:40:19.689 20.713 - 20.829: 88.1032% ( 8) 00:40:19.689 20.829 - 20.945: 88.2149% ( 9) 00:40:19.689 20.945 - 21.062: 88.2645% ( 4) 00:40:19.689 21.062 - 21.178: 88.3265% ( 5) 00:40:19.689 21.178 - 21.295: 88.3637% ( 3) 00:40:19.689 21.295 - 21.411: 88.4133% ( 4) 00:40:19.689 21.411 - 21.527: 88.4382% ( 2) 00:40:19.689 21.527 - 21.644: 88.4878% ( 4) 00:40:19.689 21.644 - 21.760: 88.5126% ( 2) 00:40:19.689 21.760 - 21.876: 88.5374% ( 2) 00:40:19.689 21.876 - 21.993: 88.5870% ( 4) 00:40:19.689 21.993 - 22.109: 88.5994% ( 1) 00:40:19.689 22.109 - 22.225: 88.6366% ( 3) 00:40:19.689 22.225 - 22.342: 88.6491% ( 1) 00:40:19.689 22.342 - 22.458: 88.6863% ( 3) 00:40:19.689 22.458 - 22.575: 88.7235% ( 3) 00:40:19.689 22.575 - 22.691: 88.7359% ( 1) 00:40:19.689 22.691 - 22.807: 88.7731% ( 3) 00:40:19.689 23.156 - 23.273: 88.7979% ( 2) 00:40:19.689 23.273 - 23.389: 88.8103% ( 1) 00:40:19.689 23.505 - 
23.622: 88.8599% ( 4) 00:40:19.689 23.738 - 23.855: 88.8848% ( 2) 00:40:19.689 23.855 - 23.971: 88.9220% ( 3) 00:40:19.689 23.971 - 24.087: 88.9468% ( 2) 00:40:19.689 24.087 - 24.204: 88.9592% ( 1) 00:40:19.689 24.204 - 24.320: 88.9716% ( 1) 00:40:19.689 24.320 - 24.436: 88.9964% ( 2) 00:40:19.689 24.436 - 24.553: 89.0336% ( 3) 00:40:19.689 24.669 - 24.785: 89.0460% ( 1) 00:40:19.689 24.785 - 24.902: 89.0956% ( 4) 00:40:19.689 24.902 - 25.018: 89.1081% ( 1) 00:40:19.689 25.135 - 25.251: 89.1329% ( 2) 00:40:19.689 25.367 - 25.484: 89.1577% ( 2) 00:40:19.689 25.484 - 25.600: 89.2073% ( 4) 00:40:19.689 25.600 - 25.716: 89.2197% ( 1) 00:40:19.689 25.716 - 25.833: 89.2321% ( 1) 00:40:19.689 25.833 - 25.949: 89.2445% ( 1) 00:40:19.689 25.949 - 26.065: 89.2817% ( 3) 00:40:19.689 26.182 - 26.298: 89.2941% ( 1) 00:40:19.689 26.298 - 26.415: 89.3189% ( 2) 00:40:19.689 26.415 - 26.531: 89.3438% ( 2) 00:40:19.689 26.531 - 26.647: 89.4182% ( 6) 00:40:19.689 26.647 - 26.764: 89.6167% ( 16) 00:40:19.689 26.764 - 26.880: 90.1005% ( 39) 00:40:19.689 26.880 - 26.996: 90.6959% ( 48) 00:40:19.689 26.996 - 27.113: 91.6263% ( 75) 00:40:19.689 27.113 - 27.229: 92.4823% ( 69) 00:40:19.689 27.229 - 27.345: 93.0902% ( 49) 00:40:19.689 27.345 - 27.462: 93.8717% ( 63) 00:40:19.689 27.462 - 27.578: 94.4300% ( 45) 00:40:19.689 27.578 - 27.695: 95.0999% ( 54) 00:40:19.689 27.695 - 27.811: 95.7574% ( 53) 00:40:19.689 27.811 - 27.927: 96.4645% ( 57) 00:40:19.689 27.927 - 28.044: 96.9607% ( 40) 00:40:19.689 28.044 - 28.160: 97.4941% ( 43) 00:40:19.689 28.160 - 28.276: 97.9159% ( 34) 00:40:19.689 28.276 - 28.393: 98.2756% ( 29) 00:40:19.689 28.393 - 28.509: 98.5238% ( 20) 00:40:19.689 28.509 - 28.625: 98.6974% ( 14) 00:40:19.689 28.625 - 28.742: 98.8339% ( 11) 00:40:19.689 28.742 - 28.858: 98.8835% ( 4) 00:40:19.689 28.858 - 28.975: 98.9207% ( 3) 00:40:19.689 28.975 - 29.091: 98.9331% ( 1) 00:40:19.689 29.091 - 29.207: 99.0200% ( 7) 00:40:19.689 29.207 - 29.324: 99.0820% ( 5) 00:40:19.689 29.440 - 29.556: 99.1192% ( 3) 00:40:19.689 29.673 - 29.789: 99.1316% ( 1) 00:40:19.689 29.789 - 30.022: 99.1812% ( 4) 00:40:19.689 30.022 - 30.255: 99.1936% ( 1) 00:40:19.689 30.255 - 30.487: 99.2061% ( 1) 00:40:19.689 30.720 - 30.953: 99.2185% ( 1) 00:40:19.689 31.185 - 31.418: 99.2309% ( 1) 00:40:19.689 32.116 - 32.349: 99.2557% ( 2) 00:40:19.689 32.349 - 32.582: 99.2805% ( 2) 00:40:19.689 33.047 - 33.280: 99.2929% ( 1) 00:40:19.689 33.280 - 33.513: 99.3549% ( 5) 00:40:19.689 33.513 - 33.745: 99.3673% ( 1) 00:40:19.689 33.745 - 33.978: 99.4045% ( 3) 00:40:19.689 33.978 - 34.211: 99.4169% ( 1) 00:40:19.689 34.211 - 34.444: 99.4294% ( 1) 00:40:19.689 34.444 - 34.676: 99.4666% ( 3) 00:40:19.689 34.676 - 34.909: 99.4914% ( 2) 00:40:19.689 34.909 - 35.142: 99.5162% ( 2) 00:40:19.689 35.375 - 35.607: 99.5286% ( 1) 00:40:19.689 35.607 - 35.840: 99.5658% ( 3) 00:40:19.689 35.840 - 36.073: 99.5906% ( 2) 00:40:19.689 36.073 - 36.305: 99.6030% ( 1) 00:40:19.689 36.305 - 36.538: 99.6278% ( 2) 00:40:19.689 36.538 - 36.771: 99.6402% ( 1) 00:40:19.689 36.771 - 37.004: 99.6651% ( 2) 00:40:19.689 37.004 - 37.236: 99.6775% ( 1) 00:40:19.689 37.236 - 37.469: 99.6899% ( 1) 00:40:19.689 38.400 - 38.633: 99.7023% ( 1) 00:40:19.689 39.796 - 40.029: 99.7147% ( 1) 00:40:19.690 41.425 - 41.658: 99.7271% ( 1) 00:40:19.690 41.658 - 41.891: 99.7519% ( 2) 00:40:19.690 41.891 - 42.124: 99.7643% ( 1) 00:40:19.690 42.124 - 42.356: 99.7767% ( 1) 00:40:19.690 42.356 - 42.589: 99.8015% ( 2) 00:40:19.690 42.589 - 42.822: 99.8139% ( 1) 00:40:19.690 42.822 - 43.055: 99.8387% 
( 2) 00:40:19.690 43.055 - 43.287: 99.8511% ( 1) 00:40:19.690 43.520 - 43.753: 99.8635% ( 1) 00:40:19.690 43.753 - 43.985: 99.8759% ( 1) 00:40:19.690 47.942 - 48.175: 99.8884% ( 1) 00:40:19.690 48.175 - 48.407: 99.9008% ( 1) 00:40:19.690 50.502 - 50.735: 99.9132% ( 1) 00:40:19.690 50.735 - 50.967: 99.9256% ( 1) 00:40:19.690 51.200 - 51.433: 99.9380% ( 1) 00:40:19.690 51.665 - 51.898: 99.9504% ( 1) 00:40:19.690 55.156 - 55.389: 99.9628% ( 1) 00:40:19.690 90.764 - 91.229: 99.9752% ( 1) 00:40:19.690 93.556 - 94.022: 100.0000% ( 2) 00:40:19.690 00:40:19.690 Complete histogram 00:40:19.690 ================== 00:40:19.690 Range in us Cumulative Count 00:40:19.690 7.185 - 7.215: 0.0248% ( 2) 00:40:19.690 7.215 - 7.244: 0.0868% ( 5) 00:40:19.690 7.244 - 7.273: 0.3349% ( 20) 00:40:19.690 7.273 - 7.302: 0.8312% ( 40) 00:40:19.690 7.302 - 7.331: 1.7616% ( 75) 00:40:19.690 7.331 - 7.360: 2.9649% ( 97) 00:40:19.690 7.360 - 7.389: 4.1186% ( 93) 00:40:19.690 7.389 - 7.418: 4.6892% ( 46) 00:40:19.690 7.418 - 7.447: 4.9622% ( 22) 00:40:19.690 7.447 - 7.505: 5.4336% ( 38) 00:40:19.690 7.505 - 7.564: 6.3640% ( 75) 00:40:19.690 7.564 - 7.622: 8.1876% ( 147) 00:40:19.690 7.622 - 7.680: 11.3137% ( 252) 00:40:19.690 7.680 - 7.738: 13.9189% ( 210) 00:40:19.690 7.738 - 7.796: 15.5688% ( 133) 00:40:19.690 7.796 - 7.855: 18.3724% ( 226) 00:40:19.690 7.855 - 7.913: 23.3346% ( 400) 00:40:19.690 7.913 - 7.971: 28.8674% ( 446) 00:40:19.690 7.971 - 8.029: 31.2864% ( 195) 00:40:19.690 8.029 - 8.087: 34.0280% ( 221) 00:40:19.690 8.087 - 8.145: 40.4416% ( 517) 00:40:19.690 8.145 - 8.204: 45.3542% ( 396) 00:40:19.690 8.204 - 8.262: 47.2894% ( 156) 00:40:19.690 8.262 - 8.320: 48.6912% ( 113) 00:40:19.690 8.320 - 8.378: 51.5693% ( 232) 00:40:19.690 8.378 - 8.436: 53.8643% ( 185) 00:40:19.690 8.436 - 8.495: 54.9063% ( 84) 00:40:19.690 8.495 - 8.553: 55.3902% ( 39) 00:40:19.690 8.553 - 8.611: 56.0849% ( 56) 00:40:19.690 8.611 - 8.669: 56.7423% ( 53) 00:40:19.690 8.669 - 8.727: 57.2634% ( 42) 00:40:19.690 8.727 - 8.785: 57.4743% ( 17) 00:40:19.690 8.785 - 8.844: 57.6231% ( 12) 00:40:19.690 8.844 - 8.902: 57.7968% ( 14) 00:40:19.690 8.902 - 8.960: 57.9581% ( 13) 00:40:19.690 8.960 - 9.018: 58.0945% ( 11) 00:40:19.690 9.018 - 9.076: 58.1938% ( 8) 00:40:19.690 9.076 - 9.135: 58.2682% ( 6) 00:40:19.690 9.135 - 9.193: 58.3799% ( 9) 00:40:19.690 9.193 - 9.251: 58.4543% ( 6) 00:40:19.690 9.251 - 9.309: 58.4667% ( 1) 00:40:19.690 9.309 - 9.367: 58.4915% ( 2) 00:40:19.690 9.367 - 9.425: 58.5287% ( 3) 00:40:19.690 9.425 - 9.484: 58.5659% ( 3) 00:40:19.690 9.484 - 9.542: 58.5907% ( 2) 00:40:19.690 9.658 - 9.716: 58.6032% ( 1) 00:40:19.690 9.716 - 9.775: 58.6528% ( 4) 00:40:19.690 9.775 - 9.833: 58.6652% ( 1) 00:40:19.690 9.833 - 9.891: 58.6900% ( 2) 00:40:19.690 9.949 - 10.007: 58.7024% ( 1) 00:40:19.690 10.007 - 10.065: 58.7148% ( 1) 00:40:19.690 10.065 - 10.124: 58.7396% ( 2) 00:40:19.690 10.182 - 10.240: 58.7520% ( 1) 00:40:19.690 10.415 - 10.473: 58.7768% ( 2) 00:40:19.690 10.531 - 10.589: 58.7892% ( 1) 00:40:19.690 10.996 - 11.055: 58.8016% ( 1) 00:40:19.690 11.171 - 11.229: 58.8637% ( 5) 00:40:19.690 11.229 - 11.287: 60.9230% ( 166) 00:40:19.690 11.287 - 11.345: 71.3807% ( 843) 00:40:19.690 11.345 - 11.404: 82.5084% ( 897) 00:40:19.690 11.404 - 11.462: 86.7014% ( 338) 00:40:19.690 11.462 - 11.520: 88.2645% ( 126) 00:40:19.690 11.520 - 11.578: 88.6863% ( 34) 00:40:19.690 11.578 - 11.636: 88.9468% ( 21) 00:40:19.690 11.636 - 11.695: 89.2197% ( 22) 00:40:19.690 11.695 - 11.753: 89.4306% ( 17) 00:40:19.690 11.753 - 11.811: 89.6787% ( 20) 
00:40:19.690 11.811 - 11.869: 89.8648% ( 15) 00:40:19.690 11.869 - 11.927: 89.9640% ( 8) 00:40:19.690 11.927 - 11.985: 90.1005% ( 11) 00:40:19.690 11.985 - 12.044: 90.1873% ( 7) 00:40:19.690 12.044 - 12.102: 90.3114% ( 10) 00:40:19.690 12.102 - 12.160: 90.4851% ( 14) 00:40:19.690 12.160 - 12.218: 90.5595% ( 6) 00:40:19.690 12.218 - 12.276: 90.6463% ( 7) 00:40:19.690 12.276 - 12.335: 90.7208% ( 6) 00:40:19.690 12.335 - 12.393: 90.7580% ( 3) 00:40:19.690 12.393 - 12.451: 90.8200% ( 5) 00:40:19.690 12.451 - 12.509: 90.8820% ( 5) 00:40:19.690 12.509 - 12.567: 90.9192% ( 3) 00:40:19.690 12.567 - 12.625: 90.9565% ( 3) 00:40:19.690 12.625 - 12.684: 90.9937% ( 3) 00:40:19.690 12.684 - 12.742: 91.0309% ( 3) 00:40:19.690 12.742 - 12.800: 91.0805% ( 4) 00:40:19.690 12.800 - 12.858: 91.1177% ( 3) 00:40:19.690 12.858 - 12.916: 91.1549% ( 3) 00:40:19.690 12.916 - 12.975: 91.1798% ( 2) 00:40:19.690 12.975 - 13.033: 91.2170% ( 3) 00:40:19.690 13.033 - 13.091: 91.2790% ( 5) 00:40:19.690 13.091 - 13.149: 91.2914% ( 1) 00:40:19.690 13.149 - 13.207: 91.3534% ( 5) 00:40:19.690 13.207 - 13.265: 91.4155% ( 5) 00:40:19.690 13.265 - 13.324: 91.4403% ( 2) 00:40:19.690 13.324 - 13.382: 91.4651% ( 2) 00:40:19.690 13.382 - 13.440: 91.5023% ( 3) 00:40:19.690 13.440 - 13.498: 91.5519% ( 4) 00:40:19.690 13.498 - 13.556: 91.5891% ( 3) 00:40:19.690 13.556 - 13.615: 91.6512% ( 5) 00:40:19.690 13.615 - 13.673: 91.7008% ( 4) 00:40:19.690 13.673 - 13.731: 91.7132% ( 1) 00:40:19.690 13.731 - 13.789: 91.7504% ( 3) 00:40:19.690 13.789 - 13.847: 91.8496% ( 8) 00:40:19.690 13.847 - 13.905: 91.9117% ( 5) 00:40:19.690 13.905 - 13.964: 91.9613% ( 4) 00:40:19.690 13.964 - 14.022: 91.9985% ( 3) 00:40:19.690 14.022 - 14.080: 92.1350% ( 11) 00:40:19.690 14.080 - 14.138: 92.1474% ( 1) 00:40:19.690 14.138 - 14.196: 92.1722% ( 2) 00:40:19.690 14.196 - 14.255: 92.2342% ( 5) 00:40:19.690 14.255 - 14.313: 92.3211% ( 7) 00:40:19.690 14.313 - 14.371: 92.3955% ( 6) 00:40:19.690 14.371 - 14.429: 92.4327% ( 3) 00:40:19.690 14.429 - 14.487: 92.4947% ( 5) 00:40:19.690 14.487 - 14.545: 92.5443% ( 4) 00:40:19.690 14.545 - 14.604: 92.5692% ( 2) 00:40:19.690 14.604 - 14.662: 92.5816% ( 1) 00:40:19.690 14.778 - 14.836: 92.5940% ( 1) 00:40:19.690 14.836 - 14.895: 92.6064% ( 1) 00:40:19.690 14.895 - 15.011: 92.6188% ( 1) 00:40:19.690 15.011 - 15.127: 92.6312% ( 1) 00:40:19.690 15.127 - 15.244: 92.6560% ( 2) 00:40:19.690 15.244 - 15.360: 92.6808% ( 2) 00:40:19.690 15.360 - 15.476: 92.6932% ( 1) 00:40:19.690 15.476 - 15.593: 92.7056% ( 1) 00:40:19.690 15.593 - 15.709: 92.7180% ( 1) 00:40:19.690 15.709 - 15.825: 92.7304% ( 1) 00:40:19.690 15.825 - 15.942: 92.7428% ( 1) 00:40:19.690 15.942 - 16.058: 92.7552% ( 1) 00:40:19.690 16.058 - 16.175: 92.7676% ( 1) 00:40:19.690 16.291 - 16.407: 92.7801% ( 1) 00:40:19.690 16.524 - 16.640: 92.7925% ( 1) 00:40:19.690 16.640 - 16.756: 92.8049% ( 1) 00:40:19.690 16.873 - 16.989: 92.8173% ( 1) 00:40:19.690 17.105 - 17.222: 92.8545% ( 3) 00:40:19.690 17.222 - 17.338: 92.8793% ( 2) 00:40:19.690 17.455 - 17.571: 92.9041% ( 2) 00:40:19.690 17.571 - 17.687: 92.9165% ( 1) 00:40:19.690 17.687 - 17.804: 92.9289% ( 1) 00:40:19.690 17.804 - 17.920: 92.9537% ( 2) 00:40:19.690 17.920 - 18.036: 92.9785% ( 2) 00:40:19.690 18.036 - 18.153: 93.0033% ( 2) 00:40:19.690 18.153 - 18.269: 93.0406% ( 3) 00:40:19.690 18.269 - 18.385: 93.0902% ( 4) 00:40:19.690 18.385 - 18.502: 93.1398% ( 4) 00:40:19.690 18.502 - 18.618: 93.2266% ( 7) 00:40:19.690 18.618 - 18.735: 93.2639% ( 3) 00:40:19.690 18.735 - 18.851: 93.2887% ( 2) 00:40:19.690 18.851 - 18.967: 
93.4127% ( 10) 00:40:19.690 18.967 - 19.084: 93.4996% ( 7) 00:40:19.690 19.084 - 19.200: 93.6112% ( 9) 00:40:19.690 19.200 - 19.316: 93.7849% ( 14) 00:40:19.690 19.316 - 19.433: 93.8717% ( 7) 00:40:19.690 19.433 - 19.549: 93.9586% ( 7) 00:40:19.690 19.549 - 19.665: 94.0082% ( 4) 00:40:19.690 19.665 - 19.782: 94.0454% ( 3) 00:40:19.690 19.782 - 19.898: 94.0950% ( 4) 00:40:19.690 19.898 - 20.015: 94.1198% ( 2) 00:40:19.690 20.015 - 20.131: 94.1446% ( 2) 00:40:19.690 20.247 - 20.364: 94.1819% ( 3) 00:40:19.690 20.364 - 20.480: 94.2067% ( 2) 00:40:19.690 20.480 - 20.596: 94.2191% ( 1) 00:40:19.690 20.713 - 20.829: 94.2315% ( 1) 00:40:19.690 21.178 - 21.295: 94.2687% ( 3) 00:40:19.690 21.411 - 21.527: 94.2935% ( 2) 00:40:19.690 21.527 - 21.644: 94.3059% ( 1) 00:40:19.690 21.644 - 21.760: 94.3555% ( 4) 00:40:19.690 21.760 - 21.876: 94.4920% ( 11) 00:40:19.690 21.876 - 21.993: 94.7525% ( 21) 00:40:19.690 21.993 - 22.109: 95.0999% ( 28) 00:40:19.690 22.109 - 22.225: 95.4596% ( 29) 00:40:19.690 22.225 - 22.342: 95.9310% ( 38) 00:40:19.690 22.342 - 22.458: 96.3404% ( 33) 00:40:19.690 22.458 - 22.575: 96.7622% ( 34) 00:40:19.690 22.575 - 22.691: 96.9855% ( 18) 00:40:19.690 22.691 - 22.807: 97.2460% ( 21) 00:40:19.690 22.807 - 22.924: 97.4817% ( 19) 00:40:19.691 22.924 - 23.040: 97.8166% ( 27) 00:40:19.691 23.040 - 23.156: 98.2136% ( 32) 00:40:19.691 23.156 - 23.273: 98.4493% ( 19) 00:40:19.691 23.273 - 23.389: 98.7346% ( 23) 00:40:19.691 23.389 - 23.505: 98.8587% ( 10) 00:40:19.691 23.505 - 23.622: 98.9455% ( 7) 00:40:19.691 23.622 - 23.738: 99.0200% ( 6) 00:40:19.691 23.738 - 23.855: 99.0448% ( 2) 00:40:19.691 23.855 - 23.971: 99.0944% ( 4) 00:40:19.691 23.971 - 24.087: 99.1440% ( 4) 00:40:19.691 24.087 - 24.204: 99.1688% ( 2) 00:40:19.691 24.204 - 24.320: 99.1812% ( 1) 00:40:19.691 24.320 - 24.436: 99.1936% ( 1) 00:40:19.691 24.436 - 24.553: 99.2061% ( 1) 00:40:19.691 24.553 - 24.669: 99.2185% ( 1) 00:40:19.691 24.669 - 24.785: 99.2433% ( 2) 00:40:19.691 24.785 - 24.902: 99.2557% ( 1) 00:40:19.691 24.902 - 25.018: 99.2805% ( 2) 00:40:19.691 25.135 - 25.251: 99.2929% ( 1) 00:40:19.691 25.251 - 25.367: 99.3177% ( 2) 00:40:19.691 25.716 - 25.833: 99.3301% ( 1) 00:40:19.691 26.298 - 26.415: 99.3549% ( 2) 00:40:19.691 26.764 - 26.880: 99.3673% ( 1) 00:40:19.691 26.996 - 27.113: 99.3797% ( 1) 00:40:19.691 27.229 - 27.345: 99.3921% ( 1) 00:40:19.691 27.462 - 27.578: 99.4045% ( 1) 00:40:19.691 27.695 - 27.811: 99.4418% ( 3) 00:40:19.691 27.811 - 27.927: 99.4542% ( 1) 00:40:19.691 27.927 - 28.044: 99.4790% ( 2) 00:40:19.691 28.044 - 28.160: 99.5038% ( 2) 00:40:19.691 28.160 - 28.276: 99.5162% ( 1) 00:40:19.691 28.276 - 28.393: 99.5410% ( 2) 00:40:19.691 28.393 - 28.509: 99.5658% ( 2) 00:40:19.691 28.509 - 28.625: 99.5782% ( 1) 00:40:19.691 28.625 - 28.742: 99.6030% ( 2) 00:40:19.691 28.975 - 29.091: 99.6154% ( 1) 00:40:19.691 29.091 - 29.207: 99.6402% ( 2) 00:40:19.691 29.207 - 29.324: 99.6899% ( 4) 00:40:19.691 29.324 - 29.440: 99.7023% ( 1) 00:40:19.691 29.440 - 29.556: 99.7147% ( 1) 00:40:19.691 29.556 - 29.673: 99.7271% ( 1) 00:40:19.691 29.673 - 29.789: 99.7395% ( 1) 00:40:19.691 30.022 - 30.255: 99.7519% ( 1) 00:40:19.691 31.185 - 31.418: 99.7643% ( 1) 00:40:19.691 32.116 - 32.349: 99.7767% ( 1) 00:40:19.691 32.815 - 33.047: 99.7891% ( 1) 00:40:19.691 33.047 - 33.280: 99.8015% ( 1) 00:40:19.691 33.978 - 34.211: 99.8139% ( 1) 00:40:19.691 35.142 - 35.375: 99.8263% ( 1) 00:40:19.691 38.167 - 38.400: 99.8387% ( 1) 00:40:19.691 38.865 - 39.098: 99.8511% ( 1) 00:40:19.691 41.658 - 41.891: 99.8635% ( 1) 
00:40:19.691 42.124 - 42.356: 99.8759% ( 1) 00:40:19.691 45.615 - 45.847: 99.9008% ( 2) 00:40:19.691 51.433 - 51.665: 99.9132% ( 1) 00:40:19.691 57.251 - 57.484: 99.9256% ( 1) 00:40:19.691 61.440 - 61.905: 99.9380% ( 1) 00:40:19.691 65.164 - 65.629: 99.9504% ( 1) 00:40:19.691 65.629 - 66.095: 99.9628% ( 1) 00:40:19.691 69.353 - 69.818: 99.9752% ( 1) 00:40:19.691 94.022 - 94.487: 99.9876% ( 1) 00:40:19.691 125.673 - 126.604: 100.0000% ( 1) 00:40:19.691 00:40:19.691 00:40:19.691 real 0m1.291s 00:40:19.691 user 0m1.095s 00:40:19.691 sys 0m0.129s 00:40:19.691 12:22:18 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:19.691 12:22:18 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:40:19.691 ************************************ 00:40:19.691 END TEST nvme_overhead 00:40:19.691 ************************************ 00:40:19.691 12:22:18 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:40:19.691 12:22:18 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:40:19.691 12:22:18 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:19.691 12:22:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:19.691 ************************************ 00:40:19.691 START TEST nvme_arbitration 00:40:19.691 ************************************ 00:40:19.691 12:22:18 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:40:23.037 Initializing NVMe Controllers 00:40:23.037 Attached to 0000:00:10.0 00:40:23.037 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:40:23.037 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:40:23.037 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:40:23.037 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:40:23.037 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:40:23.037 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:40:23.037 Initialization complete. Launching workers. 
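The arbitration example above was invoked with only -t 3 -i 0; the tool then echoes the fully expanded configuration it actually runs with. Reproducing the same run by hand, using nothing but the flags printed in the trace (a sketch of the invocation, not an extra run performed here):

  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/arbitration \
      -q 64 -s 131072 -w randrw -M 50 -l 0 \
      -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0

With -c 0xf one worker is pinned to each of lcores 0-3, which is why the results that follow report four per-core IO/s figures; -i 0 is the shared-memory id used consistently across the tests in this log.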
00:40:23.037 Starting thread on core 1 with urgent priority queue 00:40:23.037 Starting thread on core 2 with urgent priority queue 00:40:23.037 Starting thread on core 3 with urgent priority queue 00:40:23.037 Starting thread on core 0 with urgent priority queue 00:40:23.037 QEMU NVMe Ctrl (12340 ) core 0: 7868.33 IO/s 12.71 secs/100000 ios 00:40:23.037 QEMU NVMe Ctrl (12340 ) core 1: 7851.67 IO/s 12.74 secs/100000 ios 00:40:23.037 QEMU NVMe Ctrl (12340 ) core 2: 3731.00 IO/s 26.80 secs/100000 ios 00:40:23.037 QEMU NVMe Ctrl (12340 ) core 3: 3942.67 IO/s 25.36 secs/100000 ios 00:40:23.037 ======================================================== 00:40:23.037 00:40:23.037 00:40:23.037 real 0m3.345s 00:40:23.037 user 0m9.146s 00:40:23.037 sys 0m0.133s 00:40:23.037 12:22:21 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:23.037 12:22:21 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:40:23.037 ************************************ 00:40:23.037 END TEST nvme_arbitration 00:40:23.037 ************************************ 00:40:23.037 12:22:21 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:40:23.037 12:22:21 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:40:23.037 12:22:21 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:23.037 12:22:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:23.037 ************************************ 00:40:23.037 START TEST nvme_single_aen 00:40:23.037 ************************************ 00:40:23.037 12:22:21 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:40:23.296 Asynchronous Event Request test 00:40:23.296 Attached to 0000:00:10.0 00:40:23.296 Reset controller to setup AER completions for this process 00:40:23.296 Registering asynchronous event callbacks... 00:40:23.296 Getting orig temperature thresholds of all controllers 00:40:23.296 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:40:23.296 Setting all controllers temperature threshold low to trigger AER 00:40:23.296 Waiting for all controllers temperature threshold to be set lower 00:40:23.296 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:40:23.296 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:40:23.296 Waiting for all controllers to trigger AER and reset threshold 00:40:23.296 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:40:23.296 Cleaning up... 
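The aer -T -i 0 run above exercises the temperature-threshold path end to end: it records the original threshold (343 Kelvin), drops it below the current temperature (323 Kelvin) so the controller raises an asynchronous event for log page 2, then restores the threshold in aer_cb. For illustration only, a rough manual equivalent with nvme-cli against a kernel-visible device (hypothetical /dev/nvme0; the controller in this run is bound to SPDK, so these exact commands were not part of the test):

  nvme get-feature /dev/nvme0 -f 0x04 -H             # feature 0x04: temperature threshold
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x0140      # ~320 K, below the reported 323 K
  nvme get-log /dev/nvme0 --log-id=2 --log-len=512   # the SMART/health page the AER points at
  nvme set-feature /dev/nvme0 -f 0x04 -v 0x0157      # restore the original 343 K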
00:40:23.296 00:40:23.296 real 0m0.267s 00:40:23.296 user 0m0.094s 00:40:23.296 sys 0m0.087s 00:40:23.296 12:22:21 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:23.296 12:22:21 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:40:23.296 ************************************ 00:40:23.296 END TEST nvme_single_aen 00:40:23.296 ************************************ 00:40:23.296 12:22:21 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:40:23.296 12:22:21 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:23.296 12:22:21 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:23.296 12:22:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:23.296 ************************************ 00:40:23.296 START TEST nvme_doorbell_aers 00:40:23.296 ************************************ 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:23.296 12:22:21 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:40:23.296 12:22:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:40:23.296 12:22:22 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:40:23.296 12:22:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:40:23.296 12:22:22 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:40:23.554 [2024-07-21 12:22:22.286741] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179774) is not found. Dropping the request. 00:40:33.524 Executing: test_write_invalid_db 00:40:33.524 Waiting for AER completion... 00:40:33.524 Failure: test_write_invalid_db 00:40:33.524 00:40:33.524 Executing: test_invalid_db_write_overflow_sq 00:40:33.524 Waiting for AER completion... 00:40:33.524 Failure: test_invalid_db_write_overflow_sq 00:40:33.524 00:40:33.524 Executing: test_invalid_db_write_overflow_cq 00:40:33.524 Waiting for AER completion... 
00:40:33.524 Failure: test_invalid_db_write_overflow_cq 00:40:33.524 00:40:33.524 00:40:33.524 real 0m10.104s 00:40:33.524 user 0m8.500s 00:40:33.524 sys 0m1.536s 00:40:33.524 12:22:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:33.524 12:22:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:40:33.524 ************************************ 00:40:33.524 END TEST nvme_doorbell_aers 00:40:33.524 ************************************ 00:40:33.524 12:22:32 nvme -- nvme/nvme.sh@97 -- # uname 00:40:33.524 12:22:32 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:40:33.524 12:22:32 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:40:33.524 12:22:32 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:40:33.524 12:22:32 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:33.524 12:22:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:33.524 ************************************ 00:40:33.524 START TEST nvme_multi_aen 00:40:33.524 ************************************ 00:40:33.524 12:22:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:40:33.782 [2024-07-21 12:22:32.394737] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179774) is not found. Dropping the request. 00:40:33.782 [2024-07-21 12:22:32.395066] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179774) is not found. Dropping the request. 00:40:33.782 [2024-07-21 12:22:32.395265] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179774) is not found. Dropping the request. 00:40:33.782 Child process pid: 179963 00:40:34.040 [Child] Asynchronous Event Request test 00:40:34.040 [Child] Attached to 0000:00:10.0 00:40:34.040 [Child] Registering asynchronous event callbacks... 00:40:34.040 [Child] Getting orig temperature thresholds of all controllers 00:40:34.040 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:40:34.040 [Child] Waiting for all controllers to trigger AER and reset threshold 00:40:34.040 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:40:34.040 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:40:34.040 [Child] Cleaning up... 00:40:34.040 Asynchronous Event Request test 00:40:34.040 Attached to 0000:00:10.0 00:40:34.040 Reset controller to setup AER completions for this process 00:40:34.040 Registering asynchronous event callbacks... 00:40:34.040 Getting orig temperature thresholds of all controllers 00:40:34.040 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:40:34.040 Setting all controllers temperature threshold low to trigger AER 00:40:34.040 Waiting for all controllers temperature threshold to be set lower 00:40:34.040 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:40:34.040 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:40:34.040 Waiting for all controllers to trigger AER and reset threshold 00:40:34.040 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:40:34.040 Cleaning up... 
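The nvme_doorbell_aers setup above shows the device-discovery pattern the rest of this log leans on: scripts/gen_nvme.sh emits an SPDK JSON config for the local controllers and jq pulls out each traddr. Condensed into a sketch of that loop as it ran here (only 0000:00:10.0 is present on this VM):

  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      timeout --preserve-status 10 \
          "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
  done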
00:40:34.040 00:40:34.040 real 0m0.630s 00:40:34.040 user 0m0.225s 00:40:34.040 sys 0m0.228s 00:40:34.040 12:22:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:34.040 12:22:32 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:40:34.040 ************************************ 00:40:34.040 END TEST nvme_multi_aen 00:40:34.041 ************************************ 00:40:34.041 12:22:32 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:40:34.041 12:22:32 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:40:34.041 12:22:32 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:34.041 12:22:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:34.041 ************************************ 00:40:34.041 START TEST nvme_startup 00:40:34.041 ************************************ 00:40:34.041 12:22:32 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:40:34.298 Initializing NVMe Controllers 00:40:34.298 Attached to 0000:00:10.0 00:40:34.298 Initialization complete. 00:40:34.298 Time used:196702.922 (us). 00:40:34.298 00:40:34.298 real 0m0.266s 00:40:34.298 user 0m0.083s 00:40:34.298 sys 0m0.092s 00:40:34.298 12:22:33 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:34.298 12:22:33 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:40:34.298 ************************************ 00:40:34.298 END TEST nvme_startup 00:40:34.298 ************************************ 00:40:34.298 12:22:33 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:40:34.298 12:22:33 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:34.298 12:22:33 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:34.298 12:22:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:34.298 ************************************ 00:40:34.298 START TEST nvme_multi_secondary 00:40:34.298 ************************************ 00:40:34.298 12:22:33 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:40:34.298 12:22:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=180029 00:40:34.298 12:22:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:40:34.298 12:22:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=180030 00:40:34.298 12:22:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:40:34.298 12:22:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:40:37.580 Initializing NVMe Controllers 00:40:37.580 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:37.580 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:40:37.580 Initialization complete. Launching workers. 
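nvme_multi_secondary launches three spdk_nvme_perf instances against the same controller on disjoint core masks; because they all pass -i 0, the later processes can attach to the first one's shared hugepage state as secondaries instead of re-initializing the device. Roughly what the harness does with the commands echoed above (it backgrounds two of them as pid0/pid1 and waits on both; a sketch, not the exact script):

  perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!
  "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2      # third instance runs in the foreground
  wait $pid0 $pid1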
00:40:37.580 ======================================================== 00:40:37.580 Latency(us) 00:40:37.580 Device Information : IOPS MiB/s Average min max 00:40:37.580 PCIE (0000:00:10.0) NSID 1 from core 2: 13109.85 51.21 1220.22 153.57 24441.75 00:40:37.580 ======================================================== 00:40:37.580 Total : 13109.85 51.21 1220.22 153.57 24441.75 00:40:37.580 00:40:37.838 12:22:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 180029 00:40:37.838 Initializing NVMe Controllers 00:40:37.838 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:37.838 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:40:37.838 Initialization complete. Launching workers. 00:40:37.838 ======================================================== 00:40:37.838 Latency(us) 00:40:37.838 Device Information : IOPS MiB/s Average min max 00:40:37.838 PCIE (0000:00:10.0) NSID 1 from core 1: 31595.42 123.42 506.07 139.01 1819.63 00:40:37.838 ======================================================== 00:40:37.838 Total : 31595.42 123.42 506.07 139.01 1819.63 00:40:37.838 00:40:40.366 Initializing NVMe Controllers 00:40:40.366 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:40.366 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:40:40.366 Initialization complete. Launching workers. 00:40:40.366 ======================================================== 00:40:40.366 Latency(us) 00:40:40.366 Device Information : IOPS MiB/s Average min max 00:40:40.367 PCIE (0000:00:10.0) NSID 1 from core 0: 39904.53 155.88 400.63 130.16 5355.49 00:40:40.367 ======================================================== 00:40:40.367 Total : 39904.53 155.88 400.63 130.16 5355.49 00:40:40.367 00:40:40.367 12:22:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 180030 00:40:40.367 12:22:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=180099 00:40:40.367 12:22:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:40:40.367 12:22:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=180100 00:40:40.367 12:22:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:40:40.367 12:22:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:40:43.652 Initializing NVMe Controllers 00:40:43.652 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:43.652 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:40:43.652 Initialization complete. Launching workers. 00:40:43.652 ======================================================== 00:40:43.652 Latency(us) 00:40:43.652 Device Information : IOPS MiB/s Average min max 00:40:43.652 PCIE (0000:00:10.0) NSID 1 from core 0: 33593.33 131.22 475.96 108.51 2472.79 00:40:43.653 ======================================================== 00:40:43.653 Total : 33593.33 131.22 475.96 108.51 2472.79 00:40:43.653 00:40:43.653 Initializing NVMe Controllers 00:40:43.653 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:43.653 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:40:43.653 Initialization complete. Launching workers. 
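The MiB/s column in these perf tables follows directly from IOPS times the 4 KiB transfer size, and the average latency is roughly what 16 outstanding I/Os (-q 16) would predict. Checking the core-1 row above with bc:

  echo 'scale=2; 31595.42 * 4096 / 1048576' | bc   # ~123.4 MiB/s, matching the table
  echo 'scale=2; 16 * 1000000 / 31595.42' | bc     # ~506.4 us, close to the reported 506.07 us average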
00:40:43.653 ======================================================== 00:40:43.653 Latency(us) 00:40:43.653 Device Information : IOPS MiB/s Average min max 00:40:43.653 PCIE (0000:00:10.0) NSID 1 from core 1: 35738.00 139.60 447.35 119.89 2775.32 00:40:43.653 ======================================================== 00:40:43.653 Total : 35738.00 139.60 447.35 119.89 2775.32 00:40:43.653 00:40:45.556 Initializing NVMe Controllers 00:40:45.556 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:40:45.556 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:40:45.556 Initialization complete. Launching workers. 00:40:45.556 ======================================================== 00:40:45.556 Latency(us) 00:40:45.556 Device Information : IOPS MiB/s Average min max 00:40:45.556 PCIE (0000:00:10.0) NSID 1 from core 2: 17730.18 69.26 901.95 119.41 20486.26 00:40:45.556 ======================================================== 00:40:45.556 Total : 17730.18 69.26 901.95 119.41 20486.26 00:40:45.556 00:40:45.556 12:22:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 180099 00:40:45.556 12:22:44 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 180100 00:40:45.556 00:40:45.556 real 0m11.099s 00:40:45.556 user 0m18.708s 00:40:45.556 sys 0m0.697s 00:40:45.556 12:22:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:45.556 12:22:44 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:40:45.556 ************************************ 00:40:45.556 END TEST nvme_multi_secondary 00:40:45.556 ************************************ 00:40:45.556 12:22:44 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:40:45.556 12:22:44 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:40:45.556 12:22:44 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/179345 ]] 00:40:45.556 12:22:44 nvme -- common/autotest_common.sh@1086 -- # kill 179345 00:40:45.556 12:22:44 nvme -- common/autotest_common.sh@1087 -- # wait 179345 00:40:45.556 [2024-07-21 12:22:44.300204] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179962) is not found. Dropping the request. 00:40:45.556 [2024-07-21 12:22:44.300364] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179962) is not found. Dropping the request. 00:40:45.556 [2024-07-21 12:22:44.300423] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179962) is not found. Dropping the request. 00:40:45.556 [2024-07-21 12:22:44.300484] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 179962) is not found. Dropping the request. 
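Before the reset test begins, the harness tears down the stub process it started for the earlier per-test attaches: kill_stub only acts if /proc/<pid> still exists, kills the recorded pid, waits for it, and (first thing on the next trace line) removes /var/run/spdk_stub0. The same steps condensed into a sketch, with the pid from this run:

  stub_pid=179345
  if [[ -e /proc/$stub_pid ]]; then
      kill "$stub_pid"
      wait "$stub_pid" 2>/dev/null   # reaps the stub if it is a child of this shell
  fi
  rm -f /var/run/spdk_stub0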
00:40:45.556 12:22:44 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0 00:40:45.556 12:22:44 nvme -- common/autotest_common.sh@1093 -- # echo 2 00:40:45.556 12:22:44 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:40:45.556 12:22:44 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:45.556 12:22:44 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:45.556 12:22:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:45.556 ************************************ 00:40:45.556 START TEST bdev_nvme_reset_stuck_adm_cmd 00:40:45.556 ************************************ 00:40:45.556 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:40:45.814 * Looking for test storage... 00:40:45.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:45.814 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=180247 00:40:45.815 12:22:44 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 180247 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 180247 ']' 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:45.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:45.815 12:22:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:45.815 [2024-07-21 12:22:44.628847] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:40:45.815 [2024-07-21 12:22:44.629075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180247 ] 00:40:46.073 [2024-07-21 12:22:44.835191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:46.073 [2024-07-21 12:22:44.933435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:46.073 [2024-07-21 12:22:44.933544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:40:46.073 [2024-07-21 12:22:44.933625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:40:46.073 [2024-07-21 12:22:44.933630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:47.008 nvme0n1 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_yAzwf.txt 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:47.008 true 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721564565 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=180274 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:40:47.008 12:22:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:48.931 [2024-07-21 12:22:47.720615] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:40:48.931 [2024-07-21 12:22:47.721136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:40:48.931 [2024-07-21 12:22:47.721231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:40:48.931 [2024-07-21 12:22:47.721316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:40:48.931 [2024-07-21 12:22:47.723407] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
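Everything in this test goes through rpc.py against the spdk_tgt started above: attach the controller as bdev nvme0, arm a one-shot error injection that holds the next admin Get Features (opc 10, i.e. 0x0a) for up to 15 s and completes it with sct 0 / sc 1, issue that Get Features via bdev_nvme_send_cmd in the background, then reset the controller; the test afterwards checks that the held command came back with the injected status and did so within the 5 s test timeout rather than the 15 s injection timeout. Stripped of the harness plumbing, the RPC sequence is roughly (paths and payload exactly as echoed in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
         --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
         -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
         # admin command blob: opcode 0x0a (Get Features), cdw10=7 (Number of Queues)
  sleep 2
  "$rpc" bdev_nvme_reset_controller nvme0   # the reset under test: it must abort the stuck command
  wait                                      # the Get Features now completes with the injected sct 0 / sc 1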
00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:48.931 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 180274 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 180274 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 180274 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:40:48.931 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_yAzwf.txt 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_yAzwf.txt 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 180247 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 180247 ']' 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 180247 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 180247 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:49.190 killing process with pid 180247 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 180247' 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 180247 00:40:49.190 12:22:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 180247 00:40:49.756 12:22:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:40:49.756 12:22:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:40:49.756 ************************************ 00:40:49.756 END TEST bdev_nvme_reset_stuck_adm_cmd 00:40:49.756 ************************************ 00:40:49.756 00:40:49.756 real 0m4.031s 00:40:49.756 user 0m14.254s 00:40:49.756 sys 0m0.727s 00:40:49.756 12:22:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:49.756 12:22:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:40:49.756 12:22:48 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:40:49.756 12:22:48 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:40:49.756 12:22:48 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:49.756 12:22:48 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:49.756 12:22:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:49.756 ************************************ 00:40:49.756 START TEST nvme_fio 00:40:49.756 ************************************ 00:40:49.756 12:22:48 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:40:49.756 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:40:49.756 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:40:49.756 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:40:49.756 12:22:48 nvme.nvme_fio -- 
common/autotest_common.sh@1509 -- # bdfs=() 00:40:49.756 12:22:48 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:40:49.756 12:22:48 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:49.756 12:22:48 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:49.756 12:22:48 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:40:49.756 12:22:48 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:40:49.756 12:22:48 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:40:49.756 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:40:49.756 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:40:49.756 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:40:49.756 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:40:49.756 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:40:50.014 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:40:50.014 12:22:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:40:50.273 12:22:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:40:50.273 12:22:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:40:50.273 12:22:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # 
/usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:40:50.531 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:40:50.531 fio-3.35 00:40:50.531 Starting 1 thread 00:40:53.834 00:40:53.834 test: (groupid=0, jobs=1): err= 0: pid=180407: Sun Jul 21 12:22:52 2024 00:40:53.834 read: IOPS=15.5k, BW=60.5MiB/s (63.4MB/s)(121MiB/2001msec) 00:40:53.834 slat (usec): min=3, max=100, avg= 6.24, stdev= 3.94 00:40:53.834 clat (usec): min=310, max=8792, avg=4111.39, stdev=391.05 00:40:53.834 lat (usec): min=315, max=8892, avg=4117.63, stdev=391.33 00:40:53.834 clat percentiles (usec): 00:40:53.834 | 1.00th=[ 3097], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3851], 00:40:53.834 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:40:53.834 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4490], 95.00th=[ 4621], 00:40:53.834 | 99.00th=[ 4883], 99.50th=[ 5014], 99.90th=[ 6521], 99.95th=[ 8029], 00:40:53.834 | 99.99th=[ 8717] 00:40:53.834 bw ( KiB/s): min=59656, max=64992, per=99.93%, avg=61885.33, stdev=2774.08, samples=3 00:40:53.834 iops : min=14914, max=16248, avg=15471.33, stdev=693.52, samples=3 00:40:53.834 write: IOPS=15.5k, BW=60.5MiB/s (63.5MB/s)(121MiB/2001msec); 0 zone resets 00:40:53.834 slat (nsec): min=3957, max=74016, avg=6378.52, stdev=3937.09 00:40:53.834 clat (usec): min=401, max=8713, avg=4127.22, stdev=396.20 00:40:53.834 lat (usec): min=406, max=8735, avg=4133.59, stdev=396.46 00:40:53.834 clat percentiles (usec): 00:40:53.834 | 1.00th=[ 3097], 5.00th=[ 3490], 10.00th=[ 3687], 20.00th=[ 3851], 00:40:53.834 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:40:53.834 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4621], 00:40:53.834 | 99.00th=[ 4883], 99.50th=[ 5080], 99.90th=[ 6915], 99.95th=[ 8094], 00:40:53.834 | 99.99th=[ 8586] 00:40:53.834 bw ( KiB/s): min=58856, max=64200, per=99.20%, avg=61485.33, stdev=2673.02, samples=3 00:40:53.834 iops : min=14714, max=16050, avg=15371.33, stdev=668.26, samples=3 00:40:53.834 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:40:53.834 lat (msec) : 2=0.05%, 4=32.29%, 10=67.62% 00:40:53.834 cpu : usr=100.05%, sys=0.00%, ctx=8, majf=0, minf=39 00:40:53.834 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:40:53.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:53.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:53.834 issued rwts: total=30979,31006,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:53.834 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:53.834 00:40:53.834 Run status group 0 (all jobs): 00:40:53.834 READ: bw=60.5MiB/s (63.4MB/s), 60.5MiB/s-60.5MiB/s (63.4MB/s-63.4MB/s), io=121MiB (127MB), run=2001-2001msec 00:40:53.834 WRITE: bw=60.5MiB/s (63.5MB/s), 60.5MiB/s-60.5MiB/s (63.5MB/s-63.5MB/s), io=121MiB (127MB), run=2001-2001msec 00:40:53.834 ----------------------------------------------------- 00:40:53.834 Suppressions used: 00:40:53.834 count bytes template 00:40:53.834 1 32 /usr/src/fio/parse.c 00:40:53.834 ----------------------------------------------------- 00:40:53.834 00:40:53.834 12:22:52 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:40:53.834 12:22:52 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:40:53.834 00:40:53.834 real 0m3.972s 00:40:53.834 user 0m3.268s 00:40:53.834 sys 0m0.379s 00:40:53.834 12:22:52 nvme.nvme_fio -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:40:53.834 12:22:52 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:40:53.834 ************************************ 00:40:53.834 END TEST nvme_fio 00:40:53.834 ************************************ 00:40:53.834 ************************************ 00:40:53.834 END TEST nvme 00:40:53.834 ************************************ 00:40:53.834 00:40:53.834 real 0m44.672s 00:40:53.834 user 1m58.357s 00:40:53.834 sys 0m8.422s 00:40:53.834 12:22:52 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:53.834 12:22:52 nvme -- common/autotest_common.sh@10 -- # set +x 00:40:53.834 12:22:52 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:40:53.834 12:22:52 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:40:53.834 12:22:52 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:53.834 12:22:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:53.834 12:22:52 -- common/autotest_common.sh@10 -- # set +x 00:40:53.834 ************************************ 00:40:53.834 START TEST nvme_scc 00:40:53.834 ************************************ 00:40:53.834 12:22:52 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:40:53.835 * Looking for test storage... 00:40:53.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:53.835 12:22:52 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:40:53.835 12:22:52 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:53.835 12:22:52 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:53.835 12:22:52 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:53.835 12:22:52 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:53.835 12:22:52 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:53.835 12:22:52 nvme_scc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:53.835 12:22:52 nvme_scc -- paths/export.sh@5 -- # export PATH 00:40:53.835 12:22:52 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:40:53.835 12:22:52 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:40:53.835 12:22:52 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:53.835 12:22:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:40:53.835 12:22:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:40:53.835 12:22:52 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:40:53.835 12:22:52 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:40:54.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:54.401 Waiting for block devices as requested 00:40:54.401 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:40:54.401 12:22:53 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:40:54.401 12:22:53 nvme_scc -- scripts/common.sh@15 -- # local i 00:40:54.401 12:22:53 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:40:54.401 12:22:53 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:40:54.401 12:22:53 nvme_scc -- scripts/common.sh@24 -- # return 0 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 
nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:40:54.401 12:22:53 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:40:54.401 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:40:54.402 12:22:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:40:54.402 12:22:53 
nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:40:54.402 
12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.402 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 
00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:40:54.403 12:22:53 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@18 -- # shift 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # 
eval 'nvme0n1[ncap]="0x140000"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:40:54.403 
12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:40:54.403 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:40:54.404 12:22:53 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:40:54.404 
12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:40:54.404 12:22:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@192 -- # local ctrl 
feature=scc 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:40:54.404 12:22:53 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@206 -- # echo nvme0 00:40:54.405 12:22:53 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:40:54.405 12:22:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:40:54.405 12:22:53 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:40:54.405 12:22:53 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:40:54.969 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:40:54.969 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:40:56.344 12:22:55 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:40:56.344 12:22:55 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:40:56.344 12:22:55 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:56.344 12:22:55 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:40:56.344 ************************************ 00:40:56.344 START TEST nvme_simple_copy 00:40:56.344 ************************************ 00:40:56.344 12:22:55 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:40:56.602 Initializing NVMe Controllers 00:40:56.602 Attaching to 0000:00:10.0 00:40:56.602 Controller supports SCC. Attached to 0000:00:10.0 00:40:56.602 Namespace ID: 1 size: 5GB 00:40:56.602 Initialization complete. 
00:40:56.602 00:40:56.602 Controller QEMU NVMe Ctrl (12340 ) 00:40:56.602 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:40:56.602 Namespace Block Size:4096 00:40:56.602 Writing LBAs 0 to 63 with Random Data 00:40:56.602 Copied LBAs from 0 - 63 to the Destination LBA 256 00:40:56.602 LBAs matching Written Data: 64 00:40:56.602 00:40:56.602 real 0m0.290s 00:40:56.602 user 0m0.108s 00:40:56.602 sys 0m0.083s 00:40:56.602 12:22:55 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:56.602 12:22:55 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:40:56.602 ************************************ 00:40:56.602 END TEST nvme_simple_copy 00:40:56.602 ************************************ 00:40:56.862 00:40:56.862 real 0m2.926s 00:40:56.862 user 0m0.765s 00:40:56.862 sys 0m2.058s 00:40:56.862 12:22:55 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:56.862 12:22:55 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:40:56.862 ************************************ 00:40:56.862 END TEST nvme_scc 00:40:56.862 ************************************ 00:40:56.862 12:22:55 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:40:56.862 12:22:55 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:40:56.862 12:22:55 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:40:56.862 12:22:55 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:40:56.862 12:22:55 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:40:56.862 12:22:55 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:40:56.862 12:22:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:40:56.862 12:22:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:56.862 12:22:55 -- common/autotest_common.sh@10 -- # set +x 00:40:56.862 ************************************ 00:40:56.862 START TEST nvme_rpc 00:40:56.862 ************************************ 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:40:56.862 * Looking for test storage... 
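Note on the trace above: nvme/functions.sh reads the plain-text output of nvme id-ctrl and nvme id-ns field by field into bash associative arrays (nvme0, nvme0n1), and the SCC test only runs on a controller whose ONCS bit 8, the Copy command bit, is set; with oncs=0x15d the expression (( oncs & 1 << 8 )) is non-zero, so nvme0 is selected and the simple copy run completes as shown. A minimal stand-alone version of that capability check, assuming nvme-cli's plain-text id-ctrl output format and that /dev/nvme0 exists:

  # Pull ONCS (Optional NVM Command Support) out of the controller identify data
  oncs=$(nvme id-ctrl /dev/nvme0 | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
  # Bit 8 advertises the (Simple) Copy command exercised by the scc tests
  if (( oncs & 1 << 8 )); then
      echo "nvme0 supports Simple Copy"
  fi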
00:40:56.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:56.862 12:22:55 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:56.862 12:22:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:40:56.862 12:22:55 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:40:56.862 12:22:55 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=180891 00:40:56.862 12:22:55 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:40:56.862 12:22:55 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:40:56.862 12:22:55 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 180891 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 180891 ']' 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:56.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:56.862 12:22:55 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:57.130 [2024-07-21 12:22:55.751156] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
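get_first_nvme_bdf in the trace builds its candidate list from scripts/gen_nvme.sh, which prints a bdev_nvme attach configuration as JSON; the first traddr becomes the test target (0000:00:10.0 here). A rough equivalent of that selection, assuming $rootdir points at the SPDK checkout and jq is installed:

  # Collect the PCI addresses of all NVMe controllers SPDK would attach, keep the first
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  bdf=${bdfs[0]}
  echo "$bdf"    # e.g. 0000:00:10.0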
00:40:57.130 [2024-07-21 12:22:55.751406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180891 ] 00:40:57.130 [2024-07-21 12:22:55.924244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:57.394 [2024-07-21 12:22:55.995601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:57.394 [2024-07-21 12:22:55.995619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:40:57.960 12:22:56 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:40:57.960 12:22:56 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:40:57.960 12:22:56 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:40:58.219 Nvme0n1 00:40:58.219 12:22:57 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:40:58.219 12:22:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:40:58.477 request: 00:40:58.477 { 00:40:58.477 "filename": "non_existing_file", 00:40:58.477 "bdev_name": "Nvme0n1", 00:40:58.477 "method": "bdev_nvme_apply_firmware", 00:40:58.477 "req_id": 1 00:40:58.477 } 00:40:58.477 Got JSON-RPC error response 00:40:58.477 response: 00:40:58.477 { 00:40:58.477 "code": -32603, 00:40:58.477 "message": "open file failed." 00:40:58.477 } 00:40:58.477 12:22:57 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:40:58.477 12:22:57 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:40:58.477 12:22:57 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:40:58.735 12:22:57 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:40:58.735 12:22:57 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 180891 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 180891 ']' 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 180891 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 180891 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:40:58.735 killing process with pid 180891 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 180891' 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@965 -- # kill 180891 00:40:58.735 12:22:57 nvme_rpc -- common/autotest_common.sh@970 -- # wait 180891 00:40:59.302 ************************************ 00:40:59.303 END TEST nvme_rpc 00:40:59.303 ************************************ 00:40:59.303 00:40:59.303 real 0m2.341s 00:40:59.303 user 0m4.654s 00:40:59.303 sys 0m0.567s 00:40:59.303 12:22:57 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:40:59.303 12:22:57 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:40:59.303 12:22:57 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:40:59.303 12:22:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 
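The nvme_rpc run recorded above is mostly a negative test: it attaches the controller as bdev Nvme0n1, calls bdev_nvme_apply_firmware with a file that does not exist, expects the -32603 "open file failed." error, then detaches and shuts the target down. Condensed into the underlying RPCs (a sketch, assuming rpc.py talks to the default /var/tmp/spdk.sock):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  # This call must fail; a zero exit status here would mean the test is broken
  if $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
      echo "bdev_nvme_apply_firmware unexpectedly succeeded" >&2
      exit 1
  fi
  $rpc_py bdev_nvme_detach_controller Nvme0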
00:40:59.303 12:22:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:40:59.303 12:22:57 -- common/autotest_common.sh@10 -- # set +x 00:40:59.303 ************************************ 00:40:59.303 START TEST nvme_rpc_timeouts 00:40:59.303 ************************************ 00:40:59.303 12:22:57 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:40:59.303 * Looking for test storage... 00:40:59.303 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:40:59.303 12:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:40:59.303 12:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_180950 00:40:59.303 12:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_180950 00:40:59.303 12:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=180976 00:40:59.303 12:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:40:59.303 12:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:40:59.303 12:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 180976 00:40:59.303 12:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 180976 ']' 00:40:59.303 12:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:59.303 12:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:40:59.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:59.303 12:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:59.303 12:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:40:59.303 12:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:40:59.303 [2024-07-21 12:22:58.114990] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
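Both RPC tests start the same way: spdk_tgt is launched in the background and waitforlisten blocks (up to max_retries=100 attempts) until the JSON-RPC socket answers, and only then are bdev calls issued. A simplified polling loop in that spirit; using rpc_get_methods as the liveness probe is an assumption here, not necessarily what autotest_common.sh does internally:

  sock=/var/tmp/spdk.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
  spdk_tgt_pid=$!
  for ((i = 0; i < 100; i++)); do
      # Any RPC that answers means the target is up and listening on the socket
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.5
  done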
00:40:59.303 [2024-07-21 12:22:58.115351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid180976 ] 00:40:59.561 [2024-07-21 12:22:58.286904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:59.561 [2024-07-21 12:22:58.351762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:40:59.561 [2024-07-21 12:22:58.351772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:00.497 12:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:00.497 12:22:58 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:41:00.497 Checking default timeout settings: 00:41:00.497 12:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:41:00.497 12:22:58 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:41:00.497 Making settings changes with rpc: 00:41:00.497 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:41:00.497 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:41:00.755 Check default vs. modified settings: 00:41:00.755 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:41:00.755 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_180950 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_180950 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:41:01.322 Setting action_on_timeout is changed as expected. 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
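The timeouts test snapshots the target configuration before and after switching on command timeouts, then compares individual fields. Stripped of the helper plumbing it is three RPC calls; the option values below are the ones from the trace, and the temp-file naming is only illustrative:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py save_config > /tmp/settings_default_$$      # defaults: action_on_timeout=none, both timeouts 0
  $rpc_py bdev_nvme_set_options \
      --timeout-us=12000000 \
      --timeout-admin-us=24000000 \
      --action-on-timeout=abort
  $rpc_py save_config > /tmp/settings_modified_$$     # now abort / 12000000 / 24000000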
00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_180950 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_180950 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:41:01.322 Setting timeout_us is changed as expected. 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_180950 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_180950 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:41:01.322 Setting timeout_admin_us is changed as expected. 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
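Each field check above is the same three-stage pipeline run against both snapshots: grep isolates the line holding the setting, awk takes the second whitespace-separated token, and sed strips everything that is not alphanumeric (quotes, commas), after which the default and modified values must differ. A compact version of that comparison, assuming the two snapshot files from the previous step:

  check_setting() {
      local setting=$1 before after
      before=$(grep "$setting" /tmp/settings_default_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      if [[ $before == "$after" ]]; then
          echo "Setting $setting was not changed" >&2
          return 1
      fi
      echo "Setting $setting is changed as expected."
  }
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      check_setting "$setting" || exit 1
  done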
00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_180950 /tmp/settings_modified_180950 00:41:01.322 12:22:59 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 180976 00:41:01.322 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 180976 ']' 00:41:01.323 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 180976 00:41:01.323 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:41:01.323 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:41:01.323 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 180976 00:41:01.323 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:41:01.323 killing process with pid 180976 00:41:01.323 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:41:01.323 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 180976' 00:41:01.323 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 180976 00:41:01.323 12:22:59 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 180976 00:41:01.581 RPC TIMEOUT SETTING TEST PASSED. 00:41:01.581 12:23:00 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:41:01.581 ************************************ 00:41:01.581 END TEST nvme_rpc_timeouts 00:41:01.581 ************************************ 00:41:01.581 00:41:01.581 real 0m2.480s 00:41:01.581 user 0m4.909s 00:41:01.581 sys 0m0.620s 00:41:01.581 12:23:00 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:41:01.581 12:23:00 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:41:01.839 12:23:00 -- spdk/autotest.sh@243 -- # uname -s 00:41:01.839 12:23:00 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:41:01.839 12:23:00 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:41:01.839 12:23:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:41:01.839 12:23:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:41:01.839 12:23:00 -- common/autotest_common.sh@10 -- # set +x 00:41:01.839 ************************************ 00:41:01.839 START TEST sw_hotplug 00:41:01.839 ************************************ 00:41:01.839 12:23:00 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:41:01.839 * Looking for test storage... 
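sw_hotplug first works out which NVMe controllers it can drive from userspace; the scan traced just below walks lspci output and keeps PCI functions whose class/subclass/progif is 01/08/02 (NVM Express). The core of that filter, lifted from the scripts/common.sh trace and assuming lspci is available:

  # -mm -n -D: machine-readable, numeric IDs, full domain:bus:dev.func addresses
  # class "0108" = mass storage / non-volatile memory, -p02 = NVM Express programming interface
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # prints e.g. 0000:00:10.0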
00:41:01.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:41:01.839 12:23:00 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:41:02.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:41:02.098 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:41:03.475 12:23:01 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:41:03.475 12:23:01 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:41:03.475 12:23:01 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:41:03.475 12:23:01 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@230 -- # local class 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@15 -- # local i 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@325 
-- # (( 1 )) 00:41:03.475 12:23:01 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 00:41:03.475 12:23:01 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=1 00:41:03.475 12:23:01 sw_hotplug -- nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:41:03.475 12:23:01 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:41:03.475 12:23:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:03.475 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:41:03.475 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:41:03.475 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=181264 00:41:03.475 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning 00:41:03.475 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:41:03.475 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:41:03.476 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:41:03.476 12:23:02 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:41:03.476 12:23:02 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:41:03.476 12:23:02 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:41:03.476 12:23:02 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:41:03.476 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:41:03.476 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:41:03.476 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:41:03.476 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:41:03.476 12:23:02 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:41:03.476 Initializing NVMe Controllers 00:41:03.476 Attaching to 0000:00:10.0 00:41:03.476 Attached to 0000:00:10.0 00:41:03.476 Initialization complete. Starting I/O... 00:41:03.476 QEMU NVMe Ctrl (12340 ): 2 I/Os completed (+2) 00:41:03.476 00:41:04.411 QEMU NVMe Ctrl (12340 ): 2156 I/Os completed (+2154) 00:41:04.411 00:41:05.785 QEMU NVMe Ctrl (12340 ): 5332 I/Os completed (+3176) 00:41:05.785 00:41:06.719 QEMU NVMe Ctrl (12340 ): 8888 I/Os completed (+3556) 00:41:06.719 00:41:07.652 QEMU NVMe Ctrl (12340 ): 12348 I/Os completed (+3460) 00:41:07.652 00:41:08.586 QEMU NVMe Ctrl (12340 ): 15841 I/Os completed (+3493) 00:41:08.586 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:41:09.519 [2024-07-21 12:23:08.051447] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:41:09.519 Controller removed: QEMU NVMe Ctrl (12340 ) 00:41:09.519 [2024-07-21 12:23:08.052945] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:09.519 [2024-07-21 12:23:08.053039] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:09.519 [2024-07-21 12:23:08.053065] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:09.519 [2024-07-21 12:23:08.053084] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:09.519 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:09.519 [2024-07-21 12:23:08.055467] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:09.519 [2024-07-21 12:23:08.055519] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:09.519 [2024-07-21 12:23:08.055542] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:09.519 [2024-07-21 12:23:08.055561] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:41:09.519 12:23:08 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:41:09.519 Attaching to 0000:00:10.0 00:41:09.519 Attached to 0000:00:10.0 00:41:09.519 QEMU NVMe Ctrl (12340 ): 81 I/Os completed (+81) 00:41:09.519 00:41:10.453 QEMU NVMe Ctrl (12340 ): 3305 I/Os completed (+3224) 00:41:10.453 00:41:11.829 QEMU NVMe Ctrl (12340 ): 6685 I/Os completed (+3380) 00:41:11.829 00:41:12.396 QEMU NVMe Ctrl (12340 ): 10149 I/Os completed (+3464) 00:41:12.396 00:41:13.783 QEMU NVMe Ctrl (12340 ): 13625 I/Os completed (+3476) 00:41:13.783 00:41:14.716 QEMU NVMe Ctrl (12340 ): 17081 I/Os completed (+3456) 00:41:14.716 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:41:15.647 [2024-07-21 12:23:14.233014] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:41:15.647 Controller removed: QEMU NVMe Ctrl (12340 ) 00:41:15.647 [2024-07-21 12:23:14.234404] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:15.647 [2024-07-21 12:23:14.234458] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:15.647 [2024-07-21 12:23:14.234482] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:15.647 [2024-07-21 12:23:14.234517] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:15.647 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:15.647 [2024-07-21 12:23:14.236694] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:15.647 [2024-07-21 12:23:14.236741] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:15.647 [2024-07-21 12:23:14.236763] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:15.647 [2024-07-21 12:23:14.236782] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:15.647 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:41:15.647 12:23:14 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:41:15.647 Attaching to 0000:00:10.0 00:41:15.647 Attached to 0000:00:10.0 00:41:16.580 QEMU NVMe Ctrl (12340 ): 2935 I/Os completed (+2935) 00:41:16.580 00:41:17.514 QEMU NVMe Ctrl (12340 ): 6355 I/Os completed (+3420) 00:41:17.514 00:41:18.446 QEMU NVMe Ctrl (12340 ): 9845 I/Os completed (+3490) 00:41:18.446 00:41:19.820 QEMU NVMe Ctrl (12340 ): 13357 I/Os completed (+3512) 00:41:19.820 00:41:20.753 QEMU NVMe Ctrl (12340 ): 16786 I/Os completed (+3429) 00:41:20.753 00:41:21.684 QEMU NVMe Ctrl (12340 ): 20207 I/Os completed (+3421) 00:41:21.684 00:41:21.684 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:41:21.684 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:21.684 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:41:21.684 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:41:21.684 [2024-07-21 12:23:20.417820] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:41:21.684 Controller removed: QEMU NVMe Ctrl (12340 ) 00:41:21.684 [2024-07-21 12:23:20.419265] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:21.684 [2024-07-21 12:23:20.419555] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:21.684 [2024-07-21 12:23:20.419692] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:21.684 [2024-07-21 12:23:20.420313] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:21.684 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:21.684 [2024-07-21 12:23:20.421807] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:21.684 [2024-07-21 12:23:20.421991] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:21.684 [2024-07-21 12:23:20.422046] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:21.684 [2024-07-21 12:23:20.422169] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:21.684 EAL: Cannot open sysfs resource 00:41:21.684 EAL: pci_scan_one(): cannot parse resource 00:41:21.684 EAL: Scan for (pci) bus failed. 00:41:21.684 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:41:21.684 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:41:21.684 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:41:21.684 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:41:21.684 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:41:21.941 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:41:21.941 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:41:21.941 12:23:20 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:41:21.941 Attaching to 0000:00:10.0 00:41:21.941 Attached to 0000:00:10.0 00:41:21.941 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:41:21.941 [2024-07-21 12:23:20.611515] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:41:28.527 12:23:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:41:28.527 12:23:26 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:28.527 12:23:26 sw_hotplug -- common/autotest_common.sh@714 -- # time=24.56 00:41:28.527 12:23:26 sw_hotplug -- common/autotest_common.sh@716 -- # echo 24.56 00:41:28.527 12:23:26 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=24.56 00:41:28.527 12:23:26 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.56 1 00:41:28.527 remove_attach_helper took 24.56s to complete (handling 1 nvme drive(s)) 12:23:26 sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:41:33.783 12:23:32 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 181264 00:41:33.783 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (181264) - No such process 00:41:33.783 12:23:32 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 181264 00:41:33.783 12:23:32 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:41:33.783 12:23:32 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:41:33.783 12:23:32 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:41:33.783 12:23:32 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=181605 00:41:33.783 
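The first hotplug phase above pushes three surprise-removal/re-attach cycles through the standalone hotplug example app (build/examples/hotplug -i 0 -t 0 -n 3 -r 3) and reports the elapsed time; the echo 1 at sw_hotplug.sh@35 presumably lands in the device's sysfs remove node, while re-scan is the echo 1 > /sys/bus/pci/rescan visible in the trap a little further down. The timing figure itself comes from bash's time builtin with a two-decimal format, roughly:

  # Names and arguments as used by sw_hotplug.sh; the output capture is simplified here
  hotplug_events=3
  hotplug_wait=6
  TIMEFORMAT=%2R                                   # elapsed wall-clock seconds, two decimals
  time remove_attach_helper "$hotplug_events" "$hotplug_wait" false
  # time prints e.g. 24.56 on stderr, which the script reports as
  # "remove_attach_helper took 24.56s to complete (handling 1 nvme drive(s))"

The second phase that starts here repeats the same helper against spdk_tgt with bdev_nvme_set_hotplug -e, so removals are observed through bdev_get_bdevs rather than the raw PCI layer.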
12:23:32 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:41:33.783 12:23:32 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:41:33.783 12:23:32 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 181605 00:41:33.783 12:23:32 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 181605 ']' 00:41:33.783 12:23:32 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:33.783 12:23:32 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:41:33.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:33.783 12:23:32 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:33.783 12:23:32 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:41:33.783 12:23:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:34.041 [2024-07-21 12:23:32.696213] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:41:34.041 [2024-07-21 12:23:32.696462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid181605 ] 00:41:34.041 [2024-07-21 12:23:32.857994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:34.298 [2024-07-21 12:23:32.918377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:41:34.864 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:41:34.864 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:34.864 Nvme00n1 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:34.864 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:34.864 [ 00:41:34.864 { 00:41:34.864 "name": "Nvme00n1", 00:41:34.864 "aliases": [ 00:41:34.864 "377f7893-250f-4e12-a6d2-1f975f1dc1f0" 00:41:34.864 ], 
00:41:34.864 "product_name": "NVMe disk", 00:41:34.864 "block_size": 4096, 00:41:34.864 "num_blocks": 1310720, 00:41:34.864 "uuid": "377f7893-250f-4e12-a6d2-1f975f1dc1f0", 00:41:34.864 "assigned_rate_limits": { 00:41:34.864 "rw_ios_per_sec": 0, 00:41:34.864 "rw_mbytes_per_sec": 0, 00:41:34.864 "r_mbytes_per_sec": 0, 00:41:34.864 "w_mbytes_per_sec": 0 00:41:34.864 }, 00:41:34.864 "claimed": false, 00:41:34.864 "zoned": false, 00:41:34.864 "supported_io_types": { 00:41:34.864 "read": true, 00:41:34.864 "write": true, 00:41:34.864 "unmap": true, 00:41:34.864 "write_zeroes": true, 00:41:34.864 "flush": true, 00:41:34.864 "reset": true, 00:41:34.864 "compare": true, 00:41:34.864 "compare_and_write": false, 00:41:34.864 "abort": true, 00:41:34.864 "nvme_admin": true, 00:41:34.864 "nvme_io": true 00:41:34.864 }, 00:41:34.864 "driver_specific": { 00:41:34.864 "nvme": [ 00:41:34.864 { 00:41:34.864 "pci_address": "0000:00:10.0", 00:41:34.864 "trid": { 00:41:34.864 "trtype": "PCIe", 00:41:34.864 "traddr": "0000:00:10.0" 00:41:34.864 }, 00:41:34.864 "ctrlr_data": { 00:41:34.864 "cntlid": 0, 00:41:34.864 "vendor_id": "0x1b36", 00:41:34.864 "model_number": "QEMU NVMe Ctrl", 00:41:34.864 "serial_number": "12340", 00:41:34.864 "firmware_revision": "8.0.0", 00:41:34.864 "subnqn": "nqn.2019-08.org.qemu:12340", 00:41:34.864 "oacs": { 00:41:34.864 "security": 0, 00:41:34.864 "format": 1, 00:41:34.864 "firmware": 0, 00:41:34.864 "ns_manage": 1 00:41:34.864 }, 00:41:34.864 "multi_ctrlr": false, 00:41:34.864 "ana_reporting": false 00:41:34.864 }, 00:41:34.864 "vs": { 00:41:34.864 "nvme_version": "1.4" 00:41:34.864 }, 00:41:34.864 "ns_data": { 00:41:34.864 "id": 1, 00:41:34.864 "can_share": false 00:41:34.864 } 00:41:34.864 } 00:41:34.864 ], 00:41:34.864 "mp_policy": "active_passive" 00:41:34.864 } 00:41:34.864 } 00:41:34.864 ] 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:41:34.864 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:34.864 12:23:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:35.122 12:23:33 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:35.122 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:41:35.122 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:41:35.122 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:41:35.122 12:23:33 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:41:35.122 12:23:33 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:41:35.122 12:23:33 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:41:35.122 12:23:33 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:41:35.122 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:41:35.122 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:41:35.122 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:41:35.122 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:41:35.122 12:23:33 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:41:41.677 12:23:39 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:41.677 12:23:39 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in 
"${nvmes[@]}" 00:41:41.677 12:23:39 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:41:41.677 12:23:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:41:41.677 12:23:39 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:41:41.677 [2024-07-21 12:23:39.834368] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:41:41.677 [2024-07-21 12:23:39.835905] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:41.677 [2024-07-21 12:23:39.835957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:41:41.677 [2024-07-21 12:23:39.836007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:41.677 [2024-07-21 12:23:39.836041] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:41.677 [2024-07-21 12:23:39.836062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:41:41.677 [2024-07-21 12:23:39.836085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:41.677 [2024-07-21 12:23:39.836105] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:41.677 [2024-07-21 12:23:39.836128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:41:41.677 [2024-07-21 12:23:39.836160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:41.677 [2024-07-21 12:23:39.836183] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:41.677 [2024-07-21 12:23:39.836201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:41:41.677 [2024-07-21 12:23:39.836227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:46.940 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:41:46.940 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:41:46.940 12:23:45 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:46.940 12:23:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:46.940 12:23:45 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:47.197 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:41:47.197 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:41:47.197 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:41:47.197 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:41:47.197 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:41:47.197 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:41:47.197 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:41:47.197 12:23:45 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:41:53.766 12:23:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:41:53.766 12:23:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:41:53.766 12:23:51 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:41:53.766 12:23:51 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:41:53.766 12:23:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:41:53.766 12:23:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:41:53.766 12:23:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:41:53.766 12:23:51 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:41:53.766 12:23:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:41:53.766 12:23:52 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:41:53.766 12:23:52 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:41:53.766 12:23:52 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:41:53.766 [2024-07-21 12:23:52.034459] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:41:53.766 [2024-07-21 12:23:52.036246] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:53.766 [2024-07-21 12:23:52.036339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.766 [2024-07-21 12:23:52.036366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.766 [2024-07-21 12:23:52.036401] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:53.766 [2024-07-21 12:23:52.036438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.766 [2024-07-21 12:23:52.036465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.766 [2024-07-21 12:23:52.036484] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:53.766 [2024-07-21 12:23:52.036554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.766 [2024-07-21 12:23:52.036575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.766 [2024-07-21 12:23:52.036600] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:41:53.766 [2024-07-21 12:23:52.036621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:41:53.766 [2024-07-21 12:23:52.036647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:41:53.766 12:23:52 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:41:53.766 12:23:52 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:42:00.338 12:23:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:42:00.339 12:23:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:42:00.339 12:23:58 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:00.339 12:23:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:00.339 12:23:58 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:00.339 12:23:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:42:00.339 12:23:58 
sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:42:00.339 12:23:58 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:42:00.339 12:23:58 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:42:00.339 12:23:58 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:42:00.339 12:23:58 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:42:00.339 12:23:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:42:00.339 12:23:58 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:42:05.600 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:42:05.600 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:42:05.600 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:42:05.601 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:42:05.601 12:24:04 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:05.601 12:24:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:05.601 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:42:05.601 12:24:04 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:05.601 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:05.601 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:42:05.601 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:42:05.601 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:42:05.601 [2024-07-21 12:24:04.334567] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:42:05.601 [2024-07-21 12:24:04.336177] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:05.601 [2024-07-21 12:24:04.336224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:05.601 [2024-07-21 12:24:04.336253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:05.601 [2024-07-21 12:24:04.336292] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:05.601 [2024-07-21 12:24:04.336316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:05.601 [2024-07-21 12:24:04.336333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:05.601 [2024-07-21 12:24:04.336369] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:05.601 [2024-07-21 12:24:04.336392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:05.601 [2024-07-21 12:24:04.336408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:05.601 [2024-07-21 12:24:04.336429] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:05.601 [2024-07-21 12:24:04.336448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:05.601 [2024-07-21 12:24:04.336468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:05.601 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:42:05.601 12:24:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:42:12.184 12:24:10 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:42:12.184 12:24:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:12.184 12:24:10 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:42:12.184 12:24:10 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@714 -- # time=42.86 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@716 -- # echo 42.86 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=42.86 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.86 1 00:42:18.741 remove_attach_helper took 42.86s to complete (handling 1 nvme drive(s)) 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 
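Each pass of remove_attach_helper traced here is a software-only hotplug: the device is torn out through sysfs, SPDK's hotplug poller (enabled with bdev_nvme_set_hotplug -e above) notices the controller fail and drops Nvme00n1, and the device is then rescanned and rebound. A rough standalone sketch of one cycle follows; the exact sysfs writes performed by nvme/sw_hotplug.sh are only partially visible in the trace, so the paths below are the generic Linux PCI sysfs interface (and assume no kernel nvme driver grabs the device on rescan, as is typical in these test VMs), not a quote of the script.

    #!/usr/bin/env bash
    # Illustrative sketch only -- not a copy of nvme/sw_hotplug.sh.
    bdf=0000:00:10.0

    # Soft-remove the NVMe device; SPDK's hotplug poller should see the
    # controller go away and delete the Nvme00n1 bdev.
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
    sleep 6

    # Bring it back: rescan the bus, then steer the re-discovered device to a
    # userspace-friendly driver so SPDK can re-attach it.
    echo 1 > /sys/bus/pci/rescan
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo "" > "/sys/bus/pci/devices/$bdf/driver_override"
    sleep 6
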
00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:42:18.741 12:24:16 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:42:18.741 12:24:16 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:42:24.002 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:42:24.002 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:42:24.002 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:42:24.002 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:42:24.002 12:24:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:42:24.002 [2024-07-21 12:24:22.722334] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:42:24.002 [2024-07-21 12:24:22.723805] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:24.002 [2024-07-21 12:24:22.723858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:24.002 [2024-07-21 12:24:22.723890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:24.002 [2024-07-21 12:24:22.723921] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:24.002 [2024-07-21 12:24:22.723946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:24.002 [2024-07-21 12:24:22.723984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:24.002 [2024-07-21 12:24:22.724004] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:24.002 [2024-07-21 12:24:22.724027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:24.002 [2024-07-21 12:24:22.724045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:24.002 [2024-07-21 12:24:22.724069] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:24.002 [2024-07-21 12:24:22.724090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:24.002 [2024-07-21 12:24:22.724140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:42:30.557 12:24:28 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 
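The checks between the sleeps are plain JSON-RPC queries: after the soft-remove the test expects bdev_get_bdevs to report an empty list, and after re-attach it expects the controller to reappear on the same BDF. The same assertions can be reproduced by hand with rpc.py and the jq filters used in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # After removal: no bdevs left (matches the "jq length" == 0 check above).
    test "$($rpc bdev_get_bdevs | jq length)" -eq 0

    # After rescan/re-attach: the NVMe bdev is back on the expected address.
    addr=$($rpc bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)
    test "$addr" = "0000:00:10.0"
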
00:42:30.557 12:24:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:30.557 12:24:28 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:42:30.557 12:24:28 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:42:37.121 12:24:34 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:37.121 12:24:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:42:37.121 12:24:34 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:42:37.121 12:24:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:42:37.121 [2024-07-21 12:24:35.022441] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:42:37.121 [2024-07-21 12:24:35.023993] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:37.121 [2024-07-21 12:24:35.024055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:37.121 [2024-07-21 12:24:35.024076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:37.121 [2024-07-21 12:24:35.024096] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:37.121 [2024-07-21 12:24:35.024111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:37.121 [2024-07-21 12:24:35.024125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:37.121 [2024-07-21 12:24:35.024144] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:37.121 [2024-07-21 12:24:35.024159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:37.121 [2024-07-21 12:24:35.024185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:37.121 [2024-07-21 12:24:35.024203] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:37.121 [2024-07-21 12:24:35.024225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:37.121 [2024-07-21 12:24:35.024242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:42.381 12:24:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:42:42.381 12:24:40 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:42.381 12:24:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:42.381 12:24:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:42:42.381 12:24:40 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:42.381 12:24:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:42:42.381 12:24:41 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:42:42.381 12:24:41 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:42:42.381 12:24:41 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:42:42.381 12:24:41 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:42:42.381 12:24:41 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:42:42.381 12:24:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:42:42.381 12:24:41 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:42:48.935 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:42:48.935 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:42:48.935 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:42:48.935 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:42:48.935 12:24:47 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:48.935 12:24:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 
00:42:48.935 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:42:48.935 12:24:47 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:48.935 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:42:48.935 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:42:48.935 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:42:48.935 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:42:48.935 [2024-07-21 12:24:47.222563] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:42:48.935 [2024-07-21 12:24:47.223914] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:48.935 [2024-07-21 12:24:47.223961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:42:48.935 [2024-07-21 12:24:47.223984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:48.935 [2024-07-21 12:24:47.224006] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:48.935 [2024-07-21 12:24:47.224024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:42:48.935 [2024-07-21 12:24:47.224056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:48.935 [2024-07-21 12:24:47.224071] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:48.935 [2024-07-21 12:24:47.224117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:42:48.935 [2024-07-21 12:24:47.224144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:48.935 [2024-07-21 12:24:47.224168] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:42:48.935 [2024-07-21 12:24:47.224195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:42:48.935 [2024-07-21 12:24:47.224218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:42:48.936 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:42:48.936 12:24:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:42:55.501 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:42:55.501 12:24:53 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:42:55.501 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:42:55.501 12:24:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:42:55.501 12:24:53 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:42:55.501 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 0 == 0 )) 00:42:55.501 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:42:55.501 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:42:55.501 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:42:55.501 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:42:55.501 12:24:53 sw_hotplug -- 
nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:42:55.501 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:42:55.501 12:24:53 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 6 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # true 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # bdfs=($(rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort)) 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # rpc_cmd bdev_get_bdevs 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # jq -r '.[].driver_specific.nvme[].pci_address' 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # sort 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]] 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@714 -- # time=42.87 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@716 -- # echo 42.87 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=42.87 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.87 1 00:43:00.762 remove_attach_helper took 42.87s to complete (handling 1 nvme drive(s)) 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:43:00.762 12:24:59 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 181605 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 181605 ']' 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 181605 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 181605 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:43:00.762 12:24:59 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:43:00.763 killing process with pid 181605 00:43:00.763 12:24:59 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 181605' 00:43:00.763 12:24:59 sw_hotplug -- common/autotest_common.sh@965 -- # kill 181605 00:43:00.763 12:24:59 sw_hotplug -- common/autotest_common.sh@970 -- # wait 181605 00:43:01.330 00:43:01.330 real 1m59.536s 00:43:01.330 user 1m35.666s 00:43:01.330 sys 0m13.842s 00:43:01.330 12:25:00 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:01.330 12:25:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:43:01.330 ************************************ 00:43:01.330 END TEST sw_hotplug 00:43:01.330 ************************************ 00:43:01.330 12:25:00 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:43:01.330 12:25:00 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@260 -- # timing_exit lib 00:43:01.330 12:25:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:01.330 12:25:00 -- common/autotest_common.sh@10 -- # set +x 00:43:01.330 12:25:00 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- 
spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:43:01.330 12:25:00 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:43:01.330 12:25:00 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:43:01.330 12:25:00 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:43:01.330 12:25:00 -- spdk/autotest.sh@375 -- # [[ 1 -eq 1 ]] 00:43:01.330 12:25:00 -- spdk/autotest.sh@376 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:43:01.330 12:25:00 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:43:01.330 12:25:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:01.330 12:25:00 -- common/autotest_common.sh@10 -- # set +x 00:43:01.330 ************************************ 00:43:01.330 START TEST blockdev_raid5f 00:43:01.330 ************************************ 00:43:01.330 12:25:00 blockdev_raid5f -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:43:01.330 * Looking for test storage... 00:43:01.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@674 -- # uname -s 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@682 -- # test_type=raid5f 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@683 -- # crypto_device= 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@684 -- # dek= 00:43:01.589 12:25:00 
blockdev_raid5f -- bdev/blockdev.sh@685 -- # env_ctx= 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == bdev ]] 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@690 -- # [[ raid5f == crypto_* ]] 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=182684 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 182684 00:43:01.589 12:25:00 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:43:01.589 12:25:00 blockdev_raid5f -- common/autotest_common.sh@827 -- # '[' -z 182684 ']' 00:43:01.589 12:25:00 blockdev_raid5f -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:01.589 12:25:00 blockdev_raid5f -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:01.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:01.589 12:25:00 blockdev_raid5f -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:01.589 12:25:00 blockdev_raid5f -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:01.589 12:25:00 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:01.589 [2024-07-21 12:25:00.286978] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:43:01.589 [2024-07-21 12:25:00.287229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182684 ] 00:43:01.853 [2024-07-21 12:25:00.458943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:01.853 [2024-07-21 12:25:00.533003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@860 -- # return 0 00:43:02.448 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:43:02.448 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@726 -- # setup_raid5f_conf 00:43:02.448 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@280 -- # rpc_cmd 00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:02.448 Malloc0 00:43:02.448 Malloc1 00:43:02.448 Malloc2 00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:02.448 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:02.448 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@740 -- # cat 00:43:02.448 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 
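setup_raid5f_conf builds the device under test: three malloc bdevs striped into a raid5f volume (the JSON dump a little further down shows 65536 data blocks of 512 bytes per base bdev and a 2 KiB strip). The trace only shows a single rpc_cmd batch, so the split into individual rpc.py calls below is an assumption about an equivalent setup, not a quote of blockdev.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Three 32 MiB malloc bdevs with a 512-byte block size (65536 blocks each).
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc bdev_malloc_create -b Malloc1 32 512
    $rpc bdev_malloc_create -b Malloc2 32 512

    # Combine them into a raid5f bdev with a 2 KiB strip size.
    $rpc bdev_raid_create -n raid5f -z 2 -r raid5f -b "Malloc0 Malloc1 Malloc2"
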
00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:02.448 12:25:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@559 -- # xtrace_disable 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@749 -- # jq -r .name 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f4c2369a-fa44-452b-85ab-7a4499654304"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f4c2369a-fa44-452b-85ab-7a4499654304",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f4c2369a-fa44-452b-85ab-7a4499654304",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9be4c2c6-cdc4-418d-b1d1-f1729c9bfb9c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "cc0538a1-2eff-49cd-b2e9-e7e5728b4b91",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f6534337-8c90-41b9-a1e1-83fd27d5b302",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@752 -- # hello_world_bdev=raid5f 00:43:02.707 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:43:02.707 12:25:01 blockdev_raid5f -- 
bdev/blockdev.sh@754 -- # killprocess 182684 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@946 -- # '[' -z 182684 ']' 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@950 -- # kill -0 182684 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@951 -- # uname 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 182684 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:43:02.707 killing process with pid 182684 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@964 -- # echo 'killing process with pid 182684' 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@965 -- # kill 182684 00:43:02.707 12:25:01 blockdev_raid5f -- common/autotest_common.sh@970 -- # wait 182684 00:43:03.274 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:03.274 12:25:01 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:43:03.274 12:25:01 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:43:03.274 12:25:01 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:03.274 12:25:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:03.274 ************************************ 00:43:03.274 START TEST bdev_hello_world 00:43:03.274 ************************************ 00:43:03.274 12:25:01 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:43:03.274 [2024-07-21 12:25:01.975469] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:43:03.274 [2024-07-21 12:25:01.975632] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182736 ] 00:43:03.274 [2024-07-21 12:25:02.126804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:03.531 [2024-07-21 12:25:02.182034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:03.531 [2024-07-21 12:25:02.391644] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:43:03.531 [2024-07-21 12:25:02.391740] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:43:03.531 [2024-07-21 12:25:02.391785] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:43:03.531 [2024-07-21 12:25:02.392206] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:43:03.531 [2024-07-21 12:25:02.392398] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:43:03.531 [2024-07-21 12:25:02.392459] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:43:03.531 [2024-07-21 12:25:02.392552] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
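The bdev_hello_world test simply points the stock SPDK example binary at the raid5f bdev defined in bdev.json; the write/read round trip above ("Writing to the bdev" through "Read string from bdev : Hello World!") can be reproduced outside the harness with the same invocation the wrapper traces:

    # Mirrors the traced command (the trailing '' in the trace is just the
    # empty extra-arguments slot passed through by run_test).
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b raid5f
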
00:43:03.531 00:43:03.531 [2024-07-21 12:25:02.392624] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:43:03.789 00:43:03.789 real 0m0.708s 00:43:03.789 user 0m0.400s 00:43:03.789 sys 0m0.194s 00:43:03.789 12:25:02 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:03.789 12:25:02 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:43:03.789 ************************************ 00:43:03.789 END TEST bdev_hello_world 00:43:03.789 ************************************ 00:43:04.048 12:25:02 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:43:04.048 12:25:02 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:43:04.048 12:25:02 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:04.048 12:25:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:04.048 ************************************ 00:43:04.048 START TEST bdev_bounds 00:43:04.048 ************************************ 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=182761 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 182761' 00:43:04.048 Process bdevio pid: 182761 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 182761 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 182761 ']' 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:04.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:04.048 12:25:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:04.048 [2024-07-21 12:25:02.766671] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:43:04.048 [2024-07-21 12:25:02.766937] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid182761 ] 00:43:04.310 [2024-07-21 12:25:02.944552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:43:04.310 [2024-07-21 12:25:03.002747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:04.310 [2024-07-21 12:25:03.002888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:43:04.310 [2024-07-21 12:25:03.002890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:04.875 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:04.875 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:43:04.875 12:25:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:43:05.133 I/O targets: 00:43:05.133 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:43:05.133 00:43:05.133 00:43:05.133 CUnit - A unit testing framework for C - Version 2.1-3 00:43:05.133 http://cunit.sourceforge.net/ 00:43:05.133 00:43:05.133 00:43:05.133 Suite: bdevio tests on: raid5f 00:43:05.133 Test: blockdev write read block ...passed 00:43:05.133 Test: blockdev write zeroes read block ...passed 00:43:05.133 Test: blockdev write zeroes read no split ...passed 00:43:05.133 Test: blockdev write zeroes read split ...passed 00:43:05.133 Test: blockdev write zeroes read split partial ...passed 00:43:05.133 Test: blockdev reset ...passed 00:43:05.133 Test: blockdev write read 8 blocks ...passed 00:43:05.133 Test: blockdev write read size > 128k ...passed 00:43:05.133 Test: blockdev write read invalid size ...passed 00:43:05.133 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:43:05.133 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:43:05.133 Test: blockdev write read max offset ...passed 00:43:05.133 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:43:05.133 Test: blockdev writev readv 8 blocks ...passed 00:43:05.133 Test: blockdev writev readv 30 x 1block ...passed 00:43:05.133 Test: blockdev writev readv block ...passed 00:43:05.133 Test: blockdev writev readv size > 128k ...passed 00:43:05.133 Test: blockdev writev readv size > 128k in two iovs ...passed 00:43:05.133 Test: blockdev comparev and writev ...passed 00:43:05.133 Test: blockdev nvme passthru rw ...passed 00:43:05.133 Test: blockdev nvme passthru vendor specific ...passed 00:43:05.133 Test: blockdev nvme admin passthru ...passed 00:43:05.133 Test: blockdev copy ...passed 00:43:05.133 00:43:05.133 Run Summary: Type Total Ran Passed Failed Inactive 00:43:05.133 suites 1 1 n/a 0 0 00:43:05.133 tests 23 23 23 0 0 00:43:05.133 asserts 130 130 130 0 n/a 00:43:05.133 00:43:05.133 Elapsed time = 0.324 seconds 00:43:05.133 0 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 182761 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 182761 ']' 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 182761 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux 
= Linux ']' 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 182761 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:43:05.133 killing process with pid 182761 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 182761' 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@965 -- # kill 182761 00:43:05.133 12:25:03 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # wait 182761 00:43:05.700 12:25:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:43:05.700 00:43:05.700 real 0m1.655s 00:43:05.700 user 0m4.054s 00:43:05.700 sys 0m0.353s 00:43:05.700 12:25:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:05.700 12:25:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:43:05.700 ************************************ 00:43:05.700 END TEST bdev_bounds 00:43:05.700 ************************************ 00:43:05.700 12:25:04 blockdev_raid5f -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:43:05.700 12:25:04 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:43:05.700 12:25:04 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:05.700 12:25:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:05.700 ************************************ 00:43:05.700 START TEST bdev_nbd 00:43:05.700 ************************************ 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('raid5f') 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=1 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=1 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0') 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('raid5f') 
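The bdev_nbd test starting here exports the raid5f bdev as a kernel block device via the nbd driver and then does a direct 4 KiB read with dd (visible further down as the "1+0 records in / 1+0 records out" lines). Driven against the bdev_svc app's RPC socket, the core of that flow looks roughly like this (scratch file path is arbitrary):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # Export the bdev through the kernel nbd driver (nbd module must be loaded).
    $rpc nbd_start_disk raid5f /dev/nbd0
    $rpc nbd_get_disks          # shows the raid5f <-> /dev/nbd0 mapping

    # Sanity read straight from the block device, bypassing the page cache.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    # Tear the export down again.
    $rpc nbd_stop_disk /dev/nbd0
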
00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=182818 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 182818 /var/tmp/spdk-nbd.sock 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 182818 ']' 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:43:05.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:43:05.700 12:25:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:05.700 [2024-07-21 12:25:04.477988] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:43:05.700 [2024-07-21 12:25:04.478251] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:05.960 [2024-07-21 12:25:04.644717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:05.960 [2024-07-21 12:25:04.713418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:43:06.893 12:25:05 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:06.893 1+0 records in 00:43:06.893 1+0 records out 00:43:06.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492344 s, 8.3 MB/s 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:43:06.893 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:06.894 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:43:06.894 12:25:05 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:43:06.894 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:43:06.894 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:43:06.894 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:43:07.151 { 00:43:07.151 "nbd_device": "/dev/nbd0", 00:43:07.151 "bdev_name": "raid5f" 00:43:07.151 } 00:43:07.151 ]' 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:43:07.151 { 00:43:07.151 "nbd_device": "/dev/nbd0", 00:43:07.151 "bdev_name": "raid5f" 00:43:07.151 } 00:43:07.151 ]' 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:07.151 12:25:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:07.409 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 
-- # bdev_list=('raid5f') 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:07.667 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:43:07.925 /dev/nbd0 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:43:07.925 1+0 records in 00:43:07.925 1+0 records out 00:43:07.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298692 s, 13.7 MB/s 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:07.925 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:08.183 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:43:08.183 { 00:43:08.183 "nbd_device": "/dev/nbd0", 00:43:08.183 "bdev_name": "raid5f" 00:43:08.183 } 00:43:08.183 ]' 00:43:08.183 12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:08.183 
12:25:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:43:08.183 { 00:43:08.183 "nbd_device": "/dev/nbd0", 00:43:08.183 "bdev_name": "raid5f" 00:43:08.183 } 00:43:08.183 ]' 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:43:08.183 256+0 records in 00:43:08.183 256+0 records out 00:43:08.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00723907 s, 145 MB/s 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:43:08.183 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:43:08.441 256+0 records in 00:43:08.441 256+0 records out 00:43:08.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269253 s, 38.9 MB/s 00:43:08.441 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:43:08.441 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:43:08.441 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:43:08.441 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:08.442 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:08.700 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:43:08.959 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:43:09.217 
malloc_lvol_verify 00:43:09.217 12:25:07 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:43:09.474 1aca13df-c542-4a6b-9c47-37561edfd4d5 00:43:09.474 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:43:09.474 b02c790e-6736-4d8e-b122-a466bace848c 00:43:09.474 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:43:09.732 /dev/nbd0 00:43:09.990 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:43:09.990 mke2fs 1.46.5 (30-Dec-2021) 00:43:09.990 00:43:09.990 Filesystem too small for a journal 00:43:09.990 Discarding device blocks: 0/1024 done 00:43:09.990 Creating filesystem with 1024 4k blocks and 1024 inodes 00:43:09.990 00:43:09.990 Allocating group tables: 0/1 done 00:43:09.990 Writing inode tables: 0/1 done 00:43:09.990 Writing superblocks and filesystem accounting information: 0/1 done 00:43:09.990 00:43:09.990 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:43:09.990 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:43:09.990 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:43:09.990 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:43:09.990 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:43:09.990 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:43:09.990 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:43:09.990 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 182818 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 182818 ']' 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 182818 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:43:10.248 12:25:08 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 182818 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:43:10.248 killing process with pid 182818 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 182818' 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@965 -- # kill 182818 00:43:10.248 12:25:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # wait 182818 00:43:10.507 12:25:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:43:10.507 00:43:10.507 real 0m4.862s 00:43:10.507 user 0m7.404s 00:43:10.507 sys 0m1.110s 00:43:10.507 12:25:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:10.507 12:25:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:43:10.507 ************************************ 00:43:10.507 END TEST bdev_nbd 00:43:10.507 ************************************ 00:43:10.507 12:25:09 blockdev_raid5f -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:43:10.507 12:25:09 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = nvme ']' 00:43:10.507 12:25:09 blockdev_raid5f -- bdev/blockdev.sh@764 -- # '[' raid5f = gpt ']' 00:43:10.507 12:25:09 blockdev_raid5f -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:43:10.507 12:25:09 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:43:10.507 12:25:09 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:10.507 12:25:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:10.507 ************************************ 00:43:10.507 START TEST bdev_fio 00:43:10.507 ************************************ 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:43:10.507 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:43:10.507 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid5f]' 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid5f 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:10.766 ************************************ 00:43:10.766 START TEST bdev_fio_rw_verify 00:43:10.766 ************************************ 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:43:10.766 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:43:10.767 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:43:10.767 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:43:10.767 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:43:10.767 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:43:10.767 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6 00:43:10.767 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]] 00:43:10.767 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:43:10.767 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:43:10.767 12:25:09 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:43:10.767 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:43:10.767 fio-3.35 00:43:10.767 Starting 1 thread 00:43:22.965 00:43:22.965 job_raid5f: (groupid=0, jobs=1): err= 0: pid=183047: Sun Jul 21 12:25:20 2024 00:43:22.965 read: IOPS=11.7k, BW=45.7MiB/s (47.9MB/s)(457MiB/10001msec) 00:43:22.965 slat (usec): min=18, max=726, avg=20.54, stdev= 3.66 00:43:22.965 clat (usec): min=12, max=1075, avg=137.63, stdev=50.07 00:43:22.965 lat (usec): min=33, max=1142, avg=158.16, stdev=51.09 00:43:22.965 clat percentiles (usec): 00:43:22.965 | 50.000th=[ 143], 99.000th=[ 262], 99.900th=[ 330], 99.990th=[ 355], 00:43:22.965 | 99.999th=[ 449] 00:43:22.965 write: IOPS=12.3k, BW=47.9MiB/s (50.2MB/s)(472MiB/9866msec); 0 zone resets 00:43:22.965 slat (usec): min=9, max=252, avg=17.54, stdev= 3.74 00:43:22.965 clat (usec): min=61, max=1927, avg=312.06, stdev=52.70 00:43:22.965 lat (usec): min=77, max=1945, avg=329.60, stdev=54.60 00:43:22.965 clat percentiles (usec): 00:43:22.965 | 50.000th=[ 314], 99.000th=[ 506], 99.900th=[ 857], 99.990th=[ 1106], 00:43:22.965 | 99.999th=[ 1909] 00:43:22.965 bw ( KiB/s): min=42328, max=50760, per=99.04%, avg=48549.89, stdev=2532.13, samples=19 00:43:22.965 iops : min=10582, max=12690, avg=12137.47, stdev=633.03, samples=19 00:43:22.965 lat (usec) : 20=0.01%, 50=0.01%, 100=11.86%, 250=40.54%, 500=47.06% 00:43:22.965 
lat (usec) : 750=0.47%, 1000=0.05% 00:43:22.965 lat (msec) : 2=0.01% 00:43:22.965 cpu : usr=99.56%, sys=0.39%, ctx=76, majf=0, minf=11373 00:43:22.965 IO depths : 1=7.7%, 2=20.0%, 4=55.0%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:22.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.965 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:22.965 issued rwts: total=116910,120902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:22.965 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:22.965 00:43:22.965 Run status group 0 (all jobs): 00:43:22.965 READ: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=457MiB (479MB), run=10001-10001msec 00:43:22.965 WRITE: bw=47.9MiB/s (50.2MB/s), 47.9MiB/s-47.9MiB/s (50.2MB/s-50.2MB/s), io=472MiB (495MB), run=9866-9866msec 00:43:22.965 ----------------------------------------------------- 00:43:22.965 Suppressions used: 00:43:22.965 count bytes template 00:43:22.965 1 7 /usr/src/fio/parse.c 00:43:22.965 366 35136 /usr/src/fio/iolog.c 00:43:22.965 1 904 libcrypto.so 00:43:22.965 ----------------------------------------------------- 00:43:22.965 00:43:22.965 00:43:22.965 real 0m11.263s 00:43:22.965 user 0m11.949s 00:43:22.965 sys 0m0.518s 00:43:22.965 12:25:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:22.965 12:25:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:43:22.965 ************************************ 00:43:22.965 END TEST bdev_fio_rw_verify 00:43:22.965 ************************************ 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 
-- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "f4c2369a-fa44-452b-85ab-7a4499654304"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "f4c2369a-fa44-452b-85ab-7a4499654304",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "f4c2369a-fa44-452b-85ab-7a4499654304",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9be4c2c6-cdc4-418d-b1d1-f1729c9bfb9c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "cc0538a1-2eff-49cd-b2e9-e7e5728b4b91",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "f6534337-8c90-41b9-a1e1-83fd27d5b302",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:43:22.966 /home/vagrant/spdk_repo/spdk 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:43:22.966 00:43:22.966 real 0m11.441s 00:43:22.966 user 0m12.067s 00:43:22.966 sys 0m0.578s 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:22.966 12:25:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:43:22.966 ************************************ 00:43:22.966 END TEST bdev_fio 00:43:22.966 ************************************ 00:43:22.966 12:25:20 blockdev_raid5f -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:43:22.966 12:25:20 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:22.966 12:25:20 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:43:22.966 12:25:20 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:22.966 12:25:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:22.966 ************************************ 00:43:22.966 START TEST bdev_verify 00:43:22.966 ************************************ 00:43:22.966 12:25:20 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:43:22.966 [2024-07-21 
12:25:20.893254] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:43:22.966 [2024-07-21 12:25:20.893505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183198 ] 00:43:22.966 [2024-07-21 12:25:21.063518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:22.966 [2024-07-21 12:25:21.143634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:22.966 [2024-07-21 12:25:21.143650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:22.966 Running I/O for 5 seconds... 00:43:28.232 00:43:28.232 Latency(us) 00:43:28.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:28.232 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:43:28.232 Verification LBA range: start 0x0 length 0x2000 00:43:28.232 raid5f : 5.01 6206.42 24.24 0.00 0.00 31071.80 320.23 26571.87 00:43:28.232 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:43:28.232 Verification LBA range: start 0x2000 length 0x2000 00:43:28.232 raid5f : 5.01 6154.55 24.04 0.00 0.00 31721.89 208.52 26691.03 00:43:28.232 =================================================================================================================== 00:43:28.232 Total : 12360.97 48.29 0.00 0.00 31395.66 208.52 26691.03 00:43:28.232 00:43:28.232 real 0m5.909s 00:43:28.232 user 0m10.966s 00:43:28.232 sys 0m0.300s 00:43:28.232 12:25:26 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:28.232 12:25:26 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:43:28.232 ************************************ 00:43:28.232 END TEST bdev_verify 00:43:28.232 ************************************ 00:43:28.232 12:25:26 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:28.232 12:25:26 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:43:28.232 12:25:26 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:28.232 12:25:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:28.232 ************************************ 00:43:28.232 START TEST bdev_verify_big_io 00:43:28.232 ************************************ 00:43:28.232 12:25:26 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:43:28.232 [2024-07-21 12:25:26.849739] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:43:28.232 [2024-07-21 12:25:26.849922] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183287 ] 00:43:28.232 [2024-07-21 12:25:27.005174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:43:28.232 [2024-07-21 12:25:27.076266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:43:28.232 [2024-07-21 12:25:27.076283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:28.490 Running I/O for 5 seconds... 00:43:33.757 00:43:33.757 Latency(us) 00:43:33.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:33.757 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:43:33.757 Verification LBA range: start 0x0 length 0x200 00:43:33.757 raid5f : 5.17 466.17 29.14 0.00 0.00 6972214.47 226.21 295507.78 00:43:33.757 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:43:33.757 Verification LBA range: start 0x200 length 0x200 00:43:33.757 raid5f : 5.25 459.65 28.73 0.00 0.00 6794759.07 171.29 310759.80 00:43:33.757 =================================================================================================================== 00:43:33.757 Total : 925.82 57.86 0.00 0.00 6883505.16 171.29 310759.80 00:43:34.322 00:43:34.322 real 0m6.109s 00:43:34.322 user 0m11.395s 00:43:34.322 sys 0m0.293s 00:43:34.322 12:25:32 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:34.322 12:25:32 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:43:34.322 ************************************ 00:43:34.322 END TEST bdev_verify_big_io 00:43:34.322 ************************************ 00:43:34.322 12:25:32 blockdev_raid5f -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:34.322 12:25:32 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:43:34.322 12:25:32 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:34.322 12:25:32 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:34.322 ************************************ 00:43:34.322 START TEST bdev_write_zeroes 00:43:34.322 ************************************ 00:43:34.322 12:25:32 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:34.322 [2024-07-21 12:25:33.030320] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:43:34.322 [2024-07-21 12:25:33.030561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183378 ] 00:43:34.580 [2024-07-21 12:25:33.196870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:34.580 [2024-07-21 12:25:33.275341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:34.838 Running I/O for 1 seconds... 
00:43:35.774 00:43:35.774 Latency(us) 00:43:35.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:35.774 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:43:35.774 raid5f : 1.00 27116.14 105.92 0.00 0.00 4704.80 1414.98 5630.14 00:43:35.774 =================================================================================================================== 00:43:35.774 Total : 27116.14 105.92 0.00 0.00 4704.80 1414.98 5630.14 00:43:36.032 00:43:36.032 real 0m1.905s 00:43:36.032 user 0m1.457s 00:43:36.032 sys 0m0.321s 00:43:36.032 12:25:34 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:36.032 12:25:34 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:43:36.032 ************************************ 00:43:36.032 END TEST bdev_write_zeroes 00:43:36.032 ************************************ 00:43:36.290 12:25:34 blockdev_raid5f -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:36.290 12:25:34 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:43:36.290 12:25:34 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:36.290 12:25:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:36.290 ************************************ 00:43:36.290 START TEST bdev_json_nonenclosed 00:43:36.290 ************************************ 00:43:36.290 12:25:34 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:36.290 [2024-07-21 12:25:34.985269] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:43:36.290 [2024-07-21 12:25:34.986066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183429 ] 00:43:36.290 [2024-07-21 12:25:35.152225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:36.548 [2024-07-21 12:25:35.222096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:36.548 [2024-07-21 12:25:35.222244] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:43:36.548 [2024-07-21 12:25:35.222294] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:36.548 [2024-07-21 12:25:35.222326] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:36.548 00:43:36.548 real 0m0.417s 00:43:36.548 user 0m0.204s 00:43:36.548 sys 0m0.113s 00:43:36.548 12:25:35 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:36.548 12:25:35 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:43:36.548 ************************************ 00:43:36.548 END TEST bdev_json_nonenclosed 00:43:36.548 ************************************ 00:43:36.548 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:36.548 12:25:35 blockdev_raid5f -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:43:36.548 12:25:35 blockdev_raid5f -- common/autotest_common.sh@1103 -- # xtrace_disable 00:43:36.548 12:25:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:36.548 ************************************ 00:43:36.548 START TEST bdev_json_nonarray 00:43:36.548 ************************************ 00:43:36.548 12:25:35 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:43:36.806 [2024-07-21 12:25:35.454239] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:43:36.806 [2024-07-21 12:25:35.454485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid183452 ] 00:43:36.806 [2024-07-21 12:25:35.620664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:37.064 [2024-07-21 12:25:35.704511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:43:37.064 [2024-07-21 12:25:35.704677] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:43:37.064 [2024-07-21 12:25:35.704729] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:43:37.064 [2024-07-21 12:25:35.704763] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:43:37.064 00:43:37.064 real 0m0.437s 00:43:37.064 user 0m0.220s 00:43:37.064 sys 0m0.117s 00:43:37.064 12:25:35 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:37.064 12:25:35 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:43:37.064 ************************************ 00:43:37.064 END TEST bdev_json_nonarray 00:43:37.064 ************************************ 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@787 -- # [[ raid5f == bdev ]] 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@794 -- # [[ raid5f == gpt ]] 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@798 -- # [[ raid5f == crypto_sw ]] 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@811 -- # cleanup 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:43:37.064 12:25:35 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:43:37.064 00:43:37.064 real 0m35.772s 00:43:37.064 user 0m50.301s 00:43:37.064 sys 0m4.137s 00:43:37.064 12:25:35 blockdev_raid5f -- common/autotest_common.sh@1122 -- # xtrace_disable 00:43:37.064 12:25:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:43:37.064 ************************************ 00:43:37.064 END TEST blockdev_raid5f 00:43:37.064 ************************************ 00:43:37.064 12:25:35 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:43:37.064 12:25:35 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:43:37.064 12:25:35 -- common/autotest_common.sh@720 -- # xtrace_disable 00:43:37.064 12:25:35 -- common/autotest_common.sh@10 -- # set +x 00:43:37.323 12:25:35 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:43:37.323 12:25:35 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:43:37.323 12:25:35 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:43:37.323 12:25:35 -- common/autotest_common.sh@10 -- # set +x 00:43:38.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:38.721 Waiting for block devices as requested 00:43:38.721 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:43:39.286 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev 00:43:39.286 Cleaning 00:43:39.286 Removing: /var/run/dpdk/spdk0/config 00:43:39.286 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:39.286 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:39.286 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:39.286 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:39.286 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:39.286 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:39.286 Removing: /dev/shm/spdk_tgt_trace.pid123442 
00:43:39.286 Removing: /var/run/dpdk/spdk0 00:43:39.286 Removing: /var/run/dpdk/spdk_pid123261 00:43:39.286 Removing: /var/run/dpdk/spdk_pid123442 00:43:39.286 Removing: /var/run/dpdk/spdk_pid123653 00:43:39.286 Removing: /var/run/dpdk/spdk_pid123756 00:43:39.286 Removing: /var/run/dpdk/spdk_pid123789 00:43:39.286 Removing: /var/run/dpdk/spdk_pid123911 00:43:39.286 Removing: /var/run/dpdk/spdk_pid123936 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124066 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124317 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124483 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124558 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124645 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124747 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124829 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124880 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124918 00:43:39.286 Removing: /var/run/dpdk/spdk_pid124991 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125095 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125603 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125658 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125712 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125735 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125809 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125830 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125899 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125920 00:43:39.286 Removing: /var/run/dpdk/spdk_pid125979 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126002 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126047 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126070 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126209 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126254 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126295 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126377 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126449 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126481 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126564 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126615 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126654 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126706 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126754 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126798 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126844 00:43:39.286 Removing: /var/run/dpdk/spdk_pid126895 00:43:39.544 Removing: /var/run/dpdk/spdk_pid126934 00:43:39.544 Removing: /var/run/dpdk/spdk_pid126985 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127025 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127076 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127128 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127166 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127217 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127263 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127384 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127433 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127487 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127539 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127577 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127657 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127778 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127933 00:43:39.544 Removing: /var/run/dpdk/spdk_pid127999 00:43:39.544 Removing: /var/run/dpdk/spdk_pid128037 00:43:39.544 Removing: /var/run/dpdk/spdk_pid129246 00:43:39.544 Removing: /var/run/dpdk/spdk_pid129443 00:43:39.544 Removing: /var/run/dpdk/spdk_pid129631 00:43:39.544 Removing: /var/run/dpdk/spdk_pid129741 00:43:39.544 Removing: /var/run/dpdk/spdk_pid129861 00:43:39.544 Removing: 
/var/run/dpdk/spdk_pid129911 00:43:39.544 Removing: /var/run/dpdk/spdk_pid129949 00:43:39.544 Removing: /var/run/dpdk/spdk_pid129971 00:43:39.544 Removing: /var/run/dpdk/spdk_pid130436 00:43:39.544 Removing: /var/run/dpdk/spdk_pid130516 00:43:39.544 Removing: /var/run/dpdk/spdk_pid130621 00:43:39.544 Removing: /var/run/dpdk/spdk_pid130669 00:43:39.544 Removing: /var/run/dpdk/spdk_pid131935 00:43:39.544 Removing: /var/run/dpdk/spdk_pid132298 00:43:39.544 Removing: /var/run/dpdk/spdk_pid132481 00:43:39.544 Removing: /var/run/dpdk/spdk_pid133411 00:43:39.544 Removing: /var/run/dpdk/spdk_pid133793 00:43:39.544 Removing: /var/run/dpdk/spdk_pid133972 00:43:39.544 Removing: /var/run/dpdk/spdk_pid134913 00:43:39.544 Removing: /var/run/dpdk/spdk_pid135446 00:43:39.544 Removing: /var/run/dpdk/spdk_pid135633 00:43:39.544 Removing: /var/run/dpdk/spdk_pid137783 00:43:39.544 Removing: /var/run/dpdk/spdk_pid138271 00:43:39.544 Removing: /var/run/dpdk/spdk_pid138466 00:43:39.544 Removing: /var/run/dpdk/spdk_pid140640 00:43:39.544 Removing: /var/run/dpdk/spdk_pid141123 00:43:39.544 Removing: /var/run/dpdk/spdk_pid141316 00:43:39.544 Removing: /var/run/dpdk/spdk_pid143486 00:43:39.544 Removing: /var/run/dpdk/spdk_pid144240 00:43:39.544 Removing: /var/run/dpdk/spdk_pid144435 00:43:39.544 Removing: /var/run/dpdk/spdk_pid146845 00:43:39.544 Removing: /var/run/dpdk/spdk_pid147392 00:43:39.544 Removing: /var/run/dpdk/spdk_pid147602 00:43:39.544 Removing: /var/run/dpdk/spdk_pid150015 00:43:39.544 Removing: /var/run/dpdk/spdk_pid150559 00:43:39.544 Removing: /var/run/dpdk/spdk_pid150757 00:43:39.544 Removing: /var/run/dpdk/spdk_pid153169 00:43:39.544 Removing: /var/run/dpdk/spdk_pid154026 00:43:39.544 Removing: /var/run/dpdk/spdk_pid154231 00:43:39.544 Removing: /var/run/dpdk/spdk_pid154440 00:43:39.544 Removing: /var/run/dpdk/spdk_pid154992 00:43:39.544 Removing: /var/run/dpdk/spdk_pid155948 00:43:39.544 Removing: /var/run/dpdk/spdk_pid156426 00:43:39.544 Removing: /var/run/dpdk/spdk_pid157305 00:43:39.544 Removing: /var/run/dpdk/spdk_pid157872 00:43:39.544 Removing: /var/run/dpdk/spdk_pid158826 00:43:39.544 Removing: /var/run/dpdk/spdk_pid159339 00:43:39.544 Removing: /var/run/dpdk/spdk_pid162146 00:43:39.544 Removing: /var/run/dpdk/spdk_pid162894 00:43:39.544 Removing: /var/run/dpdk/spdk_pid163426 00:43:39.544 Removing: /var/run/dpdk/spdk_pid166485 00:43:39.544 Removing: /var/run/dpdk/spdk_pid167317 00:43:39.544 Removing: /var/run/dpdk/spdk_pid167923 00:43:39.544 Removing: /var/run/dpdk/spdk_pid169277 00:43:39.544 Removing: /var/run/dpdk/spdk_pid169791 00:43:39.544 Removing: /var/run/dpdk/spdk_pid171010 00:43:39.544 Removing: /var/run/dpdk/spdk_pid171515 00:43:39.544 Removing: /var/run/dpdk/spdk_pid172742 00:43:39.544 Removing: /var/run/dpdk/spdk_pid173249 00:43:39.544 Removing: /var/run/dpdk/spdk_pid174076 00:43:39.544 Removing: /var/run/dpdk/spdk_pid174119 00:43:39.544 Removing: /var/run/dpdk/spdk_pid174157 00:43:39.544 Removing: /var/run/dpdk/spdk_pid174203 00:43:39.544 Removing: /var/run/dpdk/spdk_pid174325 00:43:39.544 Removing: /var/run/dpdk/spdk_pid174461 00:43:39.544 Removing: /var/run/dpdk/spdk_pid174676 00:43:39.544 Removing: /var/run/dpdk/spdk_pid174963 00:43:39.802 Removing: /var/run/dpdk/spdk_pid174979 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175027 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175035 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175056 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175075 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175091 00:43:39.802 Removing: 
/var/run/dpdk/spdk_pid175111 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175131 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175147 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175162 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175189 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175198 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175218 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175234 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175254 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175270 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175290 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175308 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175319 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175359 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175376 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175408 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175478 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175517 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175532 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175571 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175582 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175595 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175647 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175659 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175694 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175711 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175727 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175735 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175747 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175764 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175769 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175786 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175812 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175854 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175869 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175901 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175916 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175919 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175975 00:43:39.802 Removing: /var/run/dpdk/spdk_pid175990 00:43:39.802 Removing: /var/run/dpdk/spdk_pid176020 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176035 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176049 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176059 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176071 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176088 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176093 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176109 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176191 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176241 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176356 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176373 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176411 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176464 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176490 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176512 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176533 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176566 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176590 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176669 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176716 00:43:39.803 Removing: /var/run/dpdk/spdk_pid176761 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177008 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177133 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177159 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177256 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177322 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177361 00:43:39.803 Removing: 
/var/run/dpdk/spdk_pid177594 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177685 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177774 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177826 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177850 00:43:39.803 Removing: /var/run/dpdk/spdk_pid177928 00:43:39.803 Removing: /var/run/dpdk/spdk_pid178339 00:43:39.803 Removing: /var/run/dpdk/spdk_pid178368 00:43:39.803 Removing: /var/run/dpdk/spdk_pid178667 00:43:39.803 Removing: /var/run/dpdk/spdk_pid178758 00:43:39.803 Removing: /var/run/dpdk/spdk_pid178853 00:43:39.803 Removing: /var/run/dpdk/spdk_pid178898 00:43:39.803 Removing: /var/run/dpdk/spdk_pid178921 00:43:39.803 Removing: /var/run/dpdk/spdk_pid178954 00:43:40.061 Removing: /var/run/dpdk/spdk_pid180247 00:43:40.061 Removing: /var/run/dpdk/spdk_pid180372 00:43:40.061 Removing: /var/run/dpdk/spdk_pid180376 00:43:40.061 Removing: /var/run/dpdk/spdk_pid180403 00:43:40.061 Removing: /var/run/dpdk/spdk_pid180891 00:43:40.061 Removing: /var/run/dpdk/spdk_pid180976 00:43:40.061 Removing: /var/run/dpdk/spdk_pid181605 00:43:40.061 Removing: /var/run/dpdk/spdk_pid182684 00:43:40.061 Removing: /var/run/dpdk/spdk_pid182736 00:43:40.061 Removing: /var/run/dpdk/spdk_pid182761 00:43:40.061 Removing: /var/run/dpdk/spdk_pid183030 00:43:40.061 Removing: /var/run/dpdk/spdk_pid183198 00:43:40.061 Removing: /var/run/dpdk/spdk_pid183287 00:43:40.061 Removing: /var/run/dpdk/spdk_pid183378 00:43:40.061 Removing: /var/run/dpdk/spdk_pid183429 00:43:40.061 Removing: /var/run/dpdk/spdk_pid183452 00:43:40.061 Clean 00:43:40.061 12:25:38 -- common/autotest_common.sh@1447 -- # return 0 00:43:40.061 12:25:38 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:43:40.061 12:25:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:40.061 12:25:38 -- common/autotest_common.sh@10 -- # set +x 00:43:40.061 12:25:38 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:43:40.061 12:25:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:40.061 12:25:38 -- common/autotest_common.sh@10 -- # set +x 00:43:40.061 12:25:38 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:43:40.061 12:25:38 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:43:40.061 12:25:38 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:43:40.319 12:25:38 -- spdk/autotest.sh@391 -- # hash lcov 00:43:40.319 12:25:38 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:43:40.319 12:25:38 -- spdk/autotest.sh@393 -- # hostname 00:43:40.319 12:25:38 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:43:40.319 geninfo: WARNING: invalid characters removed from testname! 
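[Editor's note] The coverage entries captured just above, and the merge/filter entries that follow, are the standard lcov capture-combine-strip sequence. A minimal standalone sketch of that flow is below; paths, the test name, and the assumption that cov_base.info is a baseline captured earlier in the run are illustrative, not taken verbatim from the autotest scripts.

    #!/usr/bin/env bash
    # Sketch of the lcov flow seen in the surrounding log entries (illustrative paths).
    set -euo pipefail

    SRC=/home/vagrant/spdk_repo/spdk   # instrumented source tree
    OUT=$SRC/../output                 # where the .info files are written

    # 1. Capture the counters accumulated while the tests ran.
    lcov --rc lcov_branch_coverage=1 --no-external -q \
         -c -d "$SRC" -t "ubuntu22-test-host" -o "$OUT/cov_test.info"

    # 2. Combine with the baseline tracefile (presumably captured before the run),
    #    so files that were never executed still appear with 0% coverage.
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

    # 3. Strip third-party and helper code from the combined report, as the
    #    following log entries do for dpdk, /usr, and a few example apps.
    for pattern in '*/dpdk/*' '/usr/*'; do
        lcov -q -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
    done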
00:44:26.970 12:26:18 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:26.970 12:26:24 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:28.345 12:26:27 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:31.633 12:26:30 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:34.915 12:26:33 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:37.440 12:26:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:44:40.722 12:26:38 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:44:40.722 12:26:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:44:40.722 12:26:38 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:44:40.722 12:26:38 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:40.722 12:26:38 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:40.722 12:26:38 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:40.722 12:26:38 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:40.722 12:26:38 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:40.722 12:26:38 -- paths/export.sh@5 -- $ export PATH 00:44:40.722 12:26:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:44:40.722 12:26:38 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:44:40.722 12:26:38 -- common/autobuild_common.sh@437 -- $ date +%s 00:44:40.722 12:26:38 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721564798.XXXXXX 00:44:40.722 12:26:38 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721564798.8Az41Y 00:44:40.722 12:26:38 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:44:40.722 12:26:38 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:44:40.722 12:26:38 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:44:40.722 12:26:38 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:44:40.722 12:26:38 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:44:40.722 12:26:38 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:44:40.722 12:26:38 -- common/autobuild_common.sh@453 -- $ get_config_params 00:44:40.722 12:26:38 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:44:40.722 12:26:38 -- common/autotest_common.sh@10 -- $ set +x 00:44:40.722 12:26:38 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:44:40.722 12:26:38 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:44:40.722 12:26:38 -- pm/common@17 -- $ local monitor 00:44:40.722 12:26:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:40.722 12:26:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:44:40.722 12:26:38 -- pm/common@25 -- $ sleep 1 00:44:40.722 12:26:38 -- pm/common@21 -- $ date +%s 00:44:40.722 12:26:38 -- pm/common@21 -- $ date +%s 00:44:40.723 12:26:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721564798 00:44:40.723 12:26:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721564798 00:44:40.723 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721564798_collect-vmstat.pm.log 00:44:40.723 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721564798_collect-cpu-load.pm.log 00:44:41.290 12:26:39 
-- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:44:41.290 12:26:39 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:44:41.290 12:26:39 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:44:41.290 12:26:39 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:44:41.290 12:26:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:44:41.290 12:26:39 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:44:41.290 12:26:39 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:44:41.290 12:26:39 -- common/autotest_common.sh@720 -- $ xtrace_disable 00:44:41.290 12:26:39 -- common/autotest_common.sh@10 -- $ set +x 00:44:41.290 12:26:39 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:44:41.290 12:26:39 -- spdk/autopackage.sh@36 -- $ [[ -n v23.11 ]] 00:44:41.290 12:26:39 -- spdk/autopackage.sh@36 -- $ [[ -e /tmp/spdk-ld-path ]] 00:44:41.290 12:26:39 -- spdk/autopackage.sh@37 -- $ source /tmp/spdk-ld-path 00:44:41.290 12:26:39 -- tmp/spdk-ld-path@1 -- $ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:41.290 12:26:39 -- tmp/spdk-ld-path@1 -- $ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:44:41.290 12:26:39 -- tmp/spdk-ld-path@2 -- $ export PKG_CONFIG_PATH= 00:44:41.290 12:26:39 -- tmp/spdk-ld-path@2 -- $ PKG_CONFIG_PATH= 00:44:41.290 12:26:39 -- spdk/autopackage.sh@40 -- $ get_config_params 00:44:41.290 12:26:39 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:44:41.290 12:26:39 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:44:41.290 12:26:39 -- common/autotest_common.sh@10 -- $ set +x 00:44:41.290 12:26:40 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:44:41.290 12:26:40 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --enable-lto --disable-unit-tests 00:44:41.290 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:44:41.290 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:44:41.290 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:44:41.290 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:44:41.857 Using 'verbs' RDMA provider 00:44:54.630 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:45:06.893 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:45:06.893 Creating mk/config.mk...done. 00:45:06.893 Creating mk/cc.flags.mk...done. 00:45:06.893 Type 'make' to build. 00:45:06.893 12:27:04 -- spdk/autopackage.sh@43 -- $ make -j10 00:45:06.893 make[1]: Nothing to be done for 'all'. 
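[Editor's note] The autopackage entries above reuse the configure flags from the test run, drop --enable-debug, and reconfigure with LTO before the release make. A condensed sketch of that step, with the flag list and paths copied from the log and the variable names chosen for illustration, is:

    # Sketch of the release rebuild visible above (not the autopackage.sh source itself).
    config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'

    # Strip the debug flag, keep everything else.
    release_params=$(sed 's/--enable-debug//g' <<<"$config_params")

    cd /home/vagrant/spdk_repo/spdk
    # Word splitting of $release_params is intentional: it holds a flag list.
    ./configure $release_params --enable-lto --disable-unit-tests
    make -j10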
00:45:06.893 CC lib/ut_mock/mock.o 00:45:06.893 CC lib/ut/ut.o 00:45:06.893 CC lib/log/log.o 00:45:06.893 CC lib/log/log_flags.o 00:45:06.893 CC lib/log/log_deprecated.o 00:45:06.893 LIB libspdk_ut_mock.a 00:45:06.893 LIB libspdk_ut.a 00:45:06.893 LIB libspdk_log.a 00:45:06.893 CC lib/dma/dma.o 00:45:06.893 CC lib/util/base64.o 00:45:06.893 CC lib/util/bit_array.o 00:45:06.893 CC lib/util/crc16.o 00:45:06.893 CC lib/util/crc32.o 00:45:06.893 CC lib/util/cpuset.o 00:45:06.893 CC lib/util/crc32c.o 00:45:06.893 CXX lib/trace_parser/trace.o 00:45:06.893 CC lib/ioat/ioat.o 00:45:06.893 CC lib/vfio_user/host/vfio_user_pci.o 00:45:06.893 CC lib/util/crc32_ieee.o 00:45:06.893 CC lib/util/crc64.o 00:45:06.893 CC lib/util/dif.o 00:45:06.893 CC lib/util/fd.o 00:45:06.893 LIB libspdk_dma.a 00:45:06.893 CC lib/util/file.o 00:45:06.893 CC lib/util/hexlify.o 00:45:06.893 CC lib/vfio_user/host/vfio_user.o 00:45:06.893 LIB libspdk_ioat.a 00:45:06.893 CC lib/util/iov.o 00:45:06.893 CC lib/util/math.o 00:45:06.893 CC lib/util/pipe.o 00:45:06.893 CC lib/util/strerror_tls.o 00:45:06.893 CC lib/util/string.o 00:45:06.893 CC lib/util/uuid.o 00:45:06.893 CC lib/util/fd_group.o 00:45:06.893 LIB libspdk_vfio_user.a 00:45:06.893 CC lib/util/xor.o 00:45:06.893 CC lib/util/zipf.o 00:45:06.893 LIB libspdk_util.a 00:45:07.151 LIB libspdk_trace_parser.a 00:45:07.151 CC lib/idxd/idxd.o 00:45:07.151 CC lib/idxd/idxd_user.o 00:45:07.151 CC lib/rdma/common.o 00:45:07.151 CC lib/rdma/rdma_verbs.o 00:45:07.151 CC lib/vmd/vmd.o 00:45:07.151 CC lib/vmd/led.o 00:45:07.151 CC lib/json/json_parse.o 00:45:07.151 CC lib/json/json_util.o 00:45:07.151 CC lib/conf/conf.o 00:45:07.151 CC lib/env_dpdk/env.o 00:45:07.151 CC lib/env_dpdk/memory.o 00:45:07.151 CC lib/env_dpdk/pci.o 00:45:07.151 CC lib/json/json_write.o 00:45:07.151 CC lib/env_dpdk/init.o 00:45:07.151 LIB libspdk_conf.a 00:45:07.409 LIB libspdk_rdma.a 00:45:07.409 CC lib/env_dpdk/threads.o 00:45:07.409 CC lib/env_dpdk/pci_ioat.o 00:45:07.409 CC lib/env_dpdk/pci_virtio.o 00:45:07.409 LIB libspdk_idxd.a 00:45:07.409 CC lib/env_dpdk/pci_vmd.o 00:45:07.409 CC lib/env_dpdk/pci_idxd.o 00:45:07.409 CC lib/env_dpdk/pci_event.o 00:45:07.409 CC lib/env_dpdk/sigbus_handler.o 00:45:07.409 LIB libspdk_json.a 00:45:07.409 CC lib/env_dpdk/pci_dpdk.o 00:45:07.410 CC lib/env_dpdk/pci_dpdk_2207.o 00:45:07.410 LIB libspdk_vmd.a 00:45:07.410 CC lib/env_dpdk/pci_dpdk_2211.o 00:45:07.668 CC lib/jsonrpc/jsonrpc_server.o 00:45:07.668 CC lib/jsonrpc/jsonrpc_client.o 00:45:07.668 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:45:07.668 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:45:07.668 LIB libspdk_jsonrpc.a 00:45:07.926 CC lib/rpc/rpc.o 00:45:07.926 LIB libspdk_env_dpdk.a 00:45:08.185 LIB libspdk_rpc.a 00:45:08.185 CC lib/keyring/keyring_rpc.o 00:45:08.185 CC lib/keyring/keyring.o 00:45:08.185 CC lib/notify/notify.o 00:45:08.185 CC lib/notify/notify_rpc.o 00:45:08.185 CC lib/trace/trace_flags.o 00:45:08.185 CC lib/trace/trace.o 00:45:08.185 CC lib/trace/trace_rpc.o 00:45:08.443 LIB libspdk_notify.a 00:45:08.443 LIB libspdk_trace.a 00:45:08.443 LIB libspdk_keyring.a 00:45:08.700 CC lib/sock/sock.o 00:45:08.700 CC lib/sock/sock_rpc.o 00:45:08.700 CC lib/thread/iobuf.o 00:45:08.700 CC lib/thread/thread.o 00:45:08.957 LIB libspdk_sock.a 00:45:09.214 CC lib/nvme/nvme_ctrlr_cmd.o 00:45:09.214 CC lib/nvme/nvme_ctrlr.o 00:45:09.214 CC lib/nvme/nvme_fabric.o 00:45:09.214 CC lib/nvme/nvme_ns_cmd.o 00:45:09.214 CC lib/nvme/nvme_ns.o 00:45:09.214 CC lib/nvme/nvme_pcie_common.o 00:45:09.214 CC lib/nvme/nvme_qpair.o 
00:45:09.214 CC lib/nvme/nvme_pcie.o 00:45:09.214 CC lib/nvme/nvme.o 00:45:09.214 LIB libspdk_thread.a 00:45:09.214 CC lib/nvme/nvme_quirks.o 00:45:09.779 CC lib/nvme/nvme_transport.o 00:45:09.779 CC lib/nvme/nvme_discovery.o 00:45:09.779 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:45:09.779 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:45:09.779 CC lib/nvme/nvme_tcp.o 00:45:09.779 CC lib/accel/accel.o 00:45:09.779 CC lib/blob/blobstore.o 00:45:09.779 CC lib/blob/request.o 00:45:09.779 CC lib/init/json_config.o 00:45:10.036 CC lib/blob/zeroes.o 00:45:10.036 CC lib/init/subsystem.o 00:45:10.036 CC lib/init/subsystem_rpc.o 00:45:10.036 CC lib/init/rpc.o 00:45:10.036 CC lib/nvme/nvme_opal.o 00:45:10.293 CC lib/accel/accel_rpc.o 00:45:10.293 CC lib/accel/accel_sw.o 00:45:10.293 CC lib/nvme/nvme_io_msg.o 00:45:10.293 CC lib/nvme/nvme_poll_group.o 00:45:10.293 LIB libspdk_init.a 00:45:10.293 CC lib/blob/blob_bs_dev.o 00:45:10.293 CC lib/nvme/nvme_zns.o 00:45:10.293 CC lib/virtio/virtio.o 00:45:10.293 CC lib/virtio/virtio_vhost_user.o 00:45:10.293 LIB libspdk_accel.a 00:45:10.293 CC lib/virtio/virtio_vfio_user.o 00:45:10.551 CC lib/virtio/virtio_pci.o 00:45:10.551 CC lib/nvme/nvme_stubs.o 00:45:10.551 CC lib/nvme/nvme_auth.o 00:45:10.551 CC lib/event/app.o 00:45:10.551 CC lib/event/reactor.o 00:45:10.551 LIB libspdk_virtio.a 00:45:10.551 CC lib/event/log_rpc.o 00:45:10.551 CC lib/event/app_rpc.o 00:45:10.551 CC lib/event/scheduler_static.o 00:45:10.808 CC lib/nvme/nvme_cuse.o 00:45:10.808 CC lib/nvme/nvme_rdma.o 00:45:10.808 CC lib/bdev/bdev.o 00:45:10.808 CC lib/bdev/bdev_rpc.o 00:45:10.808 CC lib/bdev/bdev_zone.o 00:45:10.808 CC lib/bdev/part.o 00:45:10.808 LIB libspdk_event.a 00:45:10.808 CC lib/bdev/scsi_nvme.o 00:45:11.066 LIB libspdk_blob.a 00:45:11.324 CC lib/lvol/lvol.o 00:45:11.324 CC lib/blobfs/blobfs.o 00:45:11.324 CC lib/blobfs/tree.o 00:45:11.581 LIB libspdk_nvme.a 00:45:11.581 LIB libspdk_blobfs.a 00:45:11.581 LIB libspdk_lvol.a 00:45:11.839 LIB libspdk_bdev.a 00:45:11.839 CC lib/scsi/dev.o 00:45:11.839 CC lib/scsi/lun.o 00:45:11.839 CC lib/scsi/port.o 00:45:11.839 CC lib/scsi/scsi_bdev.o 00:45:11.839 CC lib/scsi/scsi.o 00:45:11.839 CC lib/scsi/scsi_pr.o 00:45:11.839 CC lib/scsi/scsi_rpc.o 00:45:11.839 CC lib/nvmf/ctrlr.o 00:45:11.839 CC lib/ftl/ftl_core.o 00:45:11.839 CC lib/nbd/nbd.o 00:45:12.097 CC lib/nbd/nbd_rpc.o 00:45:12.097 CC lib/nvmf/ctrlr_discovery.o 00:45:12.097 CC lib/nvmf/ctrlr_bdev.o 00:45:12.097 CC lib/nvmf/subsystem.o 00:45:12.097 CC lib/ftl/ftl_init.o 00:45:12.097 CC lib/ftl/ftl_layout.o 00:45:12.097 CC lib/ftl/ftl_debug.o 00:45:12.097 CC lib/scsi/task.o 00:45:12.097 LIB libspdk_nbd.a 00:45:12.097 CC lib/ftl/ftl_io.o 00:45:12.355 CC lib/nvmf/nvmf.o 00:45:12.355 CC lib/nvmf/nvmf_rpc.o 00:45:12.355 CC lib/nvmf/transport.o 00:45:12.355 CC lib/nvmf/tcp.o 00:45:12.355 CC lib/nvmf/stubs.o 00:45:12.355 CC lib/ftl/ftl_sb.o 00:45:12.355 LIB libspdk_scsi.a 00:45:12.355 CC lib/ftl/ftl_l2p.o 00:45:12.355 CC lib/ftl/ftl_l2p_flat.o 00:45:12.614 CC lib/ftl/ftl_nv_cache.o 00:45:12.614 CC lib/ftl/ftl_band.o 00:45:12.614 CC lib/ftl/ftl_band_ops.o 00:45:12.614 CC lib/nvmf/mdns_server.o 00:45:12.614 CC lib/nvmf/rdma.o 00:45:12.614 CC lib/nvmf/auth.o 00:45:12.614 CC lib/ftl/ftl_writer.o 00:45:12.614 CC lib/ftl/ftl_rq.o 00:45:12.614 CC lib/ftl/ftl_reloc.o 00:45:12.614 CC lib/iscsi/conn.o 00:45:12.614 CC lib/ftl/ftl_l2p_cache.o 00:45:12.614 CC lib/vhost/vhost.o 00:45:12.872 CC lib/ftl/ftl_p2l.o 00:45:12.872 CC lib/ftl/mngt/ftl_mngt.o 00:45:12.872 CC lib/vhost/vhost_rpc.o 00:45:12.872 CC 
lib/ftl/mngt/ftl_mngt_bdev.o 00:45:12.873 CC lib/vhost/vhost_scsi.o 00:45:12.873 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:45:12.873 CC lib/iscsi/init_grp.o 00:45:12.873 CC lib/iscsi/iscsi.o 00:45:12.873 CC lib/iscsi/md5.o 00:45:13.131 CC lib/iscsi/param.o 00:45:13.131 CC lib/iscsi/portal_grp.o 00:45:13.131 CC lib/ftl/mngt/ftl_mngt_startup.o 00:45:13.131 CC lib/ftl/mngt/ftl_mngt_md.o 00:45:13.131 CC lib/ftl/mngt/ftl_mngt_misc.o 00:45:13.131 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:45:13.131 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:45:13.131 CC lib/vhost/vhost_blk.o 00:45:13.390 LIB libspdk_nvmf.a 00:45:13.390 CC lib/iscsi/tgt_node.o 00:45:13.390 CC lib/iscsi/iscsi_subsystem.o 00:45:13.390 CC lib/vhost/rte_vhost_user.o 00:45:13.390 CC lib/iscsi/iscsi_rpc.o 00:45:13.390 CC lib/ftl/mngt/ftl_mngt_band.o 00:45:13.390 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:45:13.390 CC lib/iscsi/task.o 00:45:13.390 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:45:13.649 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:45:13.649 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:45:13.649 CC lib/ftl/utils/ftl_conf.o 00:45:13.649 CC lib/ftl/utils/ftl_md.o 00:45:13.649 CC lib/ftl/utils/ftl_mempool.o 00:45:13.649 CC lib/ftl/utils/ftl_bitmap.o 00:45:13.649 LIB libspdk_iscsi.a 00:45:13.649 CC lib/ftl/utils/ftl_property.o 00:45:13.649 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:45:13.649 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:45:13.649 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:45:13.649 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:45:13.649 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:45:13.649 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:45:13.908 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:45:13.908 CC lib/ftl/upgrade/ftl_sb_v3.o 00:45:13.908 CC lib/ftl/upgrade/ftl_sb_v5.o 00:45:13.908 CC lib/ftl/nvc/ftl_nvc_dev.o 00:45:13.908 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:45:13.908 CC lib/ftl/base/ftl_base_dev.o 00:45:13.908 CC lib/ftl/base/ftl_base_bdev.o 00:45:13.908 LIB libspdk_vhost.a 00:45:13.908 LIB libspdk_ftl.a 00:45:14.477 CC module/env_dpdk/env_dpdk_rpc.o 00:45:14.477 CC module/keyring/file/keyring.o 00:45:14.477 CC module/scheduler/gscheduler/gscheduler.o 00:45:14.477 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:45:14.477 CC module/scheduler/dynamic/scheduler_dynamic.o 00:45:14.477 CC module/accel/error/accel_error.o 00:45:14.477 CC module/sock/posix/posix.o 00:45:14.477 CC module/keyring/linux/keyring.o 00:45:14.477 CC module/blob/bdev/blob_bdev.o 00:45:14.477 CC module/accel/ioat/accel_ioat.o 00:45:14.477 LIB libspdk_env_dpdk_rpc.a 00:45:14.477 CC module/keyring/linux/keyring_rpc.o 00:45:14.477 LIB libspdk_scheduler_dpdk_governor.a 00:45:14.477 CC module/keyring/file/keyring_rpc.o 00:45:14.477 LIB libspdk_scheduler_gscheduler.a 00:45:14.477 CC module/accel/ioat/accel_ioat_rpc.o 00:45:14.477 LIB libspdk_scheduler_dynamic.a 00:45:14.477 CC module/accel/error/accel_error_rpc.o 00:45:14.477 LIB libspdk_blob_bdev.a 00:45:14.477 LIB libspdk_keyring_linux.a 00:45:14.477 LIB libspdk_keyring_file.a 00:45:14.477 LIB libspdk_accel_ioat.a 00:45:14.736 LIB libspdk_accel_error.a 00:45:14.736 CC module/accel/dsa/accel_dsa.o 00:45:14.736 CC module/accel/iaa/accel_iaa.o 00:45:14.736 CC module/accel/iaa/accel_iaa_rpc.o 00:45:14.736 CC module/blobfs/bdev/blobfs_bdev.o 00:45:14.736 CC module/bdev/gpt/gpt.o 00:45:14.736 CC module/bdev/delay/vbdev_delay.o 00:45:14.736 CC module/bdev/lvol/vbdev_lvol.o 00:45:14.736 CC module/bdev/error/vbdev_error.o 00:45:14.736 CC module/bdev/malloc/bdev_malloc.o 00:45:14.736 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:45:14.736 LIB libspdk_accel_iaa.a 
00:45:14.736 CC module/accel/dsa/accel_dsa_rpc.o 00:45:14.736 CC module/bdev/gpt/vbdev_gpt.o 00:45:14.736 LIB libspdk_sock_posix.a 00:45:14.736 CC module/bdev/error/vbdev_error_rpc.o 00:45:14.736 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:45:14.994 LIB libspdk_accel_dsa.a 00:45:14.994 CC module/bdev/malloc/bdev_malloc_rpc.o 00:45:14.994 CC module/bdev/delay/vbdev_delay_rpc.o 00:45:14.994 LIB libspdk_bdev_error.a 00:45:14.994 LIB libspdk_bdev_gpt.a 00:45:14.994 LIB libspdk_blobfs_bdev.a 00:45:14.994 CC module/bdev/null/bdev_null.o 00:45:14.994 CC module/bdev/null/bdev_null_rpc.o 00:45:14.994 LIB libspdk_bdev_lvol.a 00:45:14.994 CC module/bdev/nvme/bdev_nvme.o 00:45:14.994 LIB libspdk_bdev_malloc.a 00:45:14.994 LIB libspdk_bdev_delay.a 00:45:14.994 CC module/bdev/raid/bdev_raid.o 00:45:14.994 CC module/bdev/passthru/vbdev_passthru.o 00:45:14.994 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:45:14.994 CC module/bdev/raid/bdev_raid_rpc.o 00:45:14.994 CC module/bdev/split/vbdev_split.o 00:45:14.994 CC module/bdev/zone_block/vbdev_zone_block.o 00:45:15.253 CC module/bdev/aio/bdev_aio.o 00:45:15.253 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:45:15.253 LIB libspdk_bdev_null.a 00:45:15.253 CC module/bdev/split/vbdev_split_rpc.o 00:45:15.253 CC module/bdev/aio/bdev_aio_rpc.o 00:45:15.253 CC module/bdev/raid/bdev_raid_sb.o 00:45:15.253 CC module/bdev/raid/raid0.o 00:45:15.253 LIB libspdk_bdev_passthru.a 00:45:15.253 LIB libspdk_bdev_zone_block.a 00:45:15.253 LIB libspdk_bdev_split.a 00:45:15.253 CC module/bdev/raid/raid1.o 00:45:15.253 CC module/bdev/raid/concat.o 00:45:15.253 CC module/bdev/raid/raid5f.o 00:45:15.512 LIB libspdk_bdev_aio.a 00:45:15.512 CC module/bdev/ftl/bdev_ftl.o 00:45:15.512 CC module/bdev/ftl/bdev_ftl_rpc.o 00:45:15.512 CC module/bdev/nvme/bdev_nvme_rpc.o 00:45:15.512 CC module/bdev/iscsi/bdev_iscsi.o 00:45:15.512 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:45:15.512 CC module/bdev/nvme/nvme_rpc.o 00:45:15.512 CC module/bdev/nvme/bdev_mdns_client.o 00:45:15.512 CC module/bdev/virtio/bdev_virtio_scsi.o 00:45:15.512 CC module/bdev/virtio/bdev_virtio_blk.o 00:45:15.512 CC module/bdev/nvme/vbdev_opal.o 00:45:15.512 LIB libspdk_bdev_raid.a 00:45:15.512 LIB libspdk_bdev_ftl.a 00:45:15.770 CC module/bdev/virtio/bdev_virtio_rpc.o 00:45:15.770 CC module/bdev/nvme/vbdev_opal_rpc.o 00:45:15.770 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:45:15.770 LIB libspdk_bdev_iscsi.a 00:45:15.770 LIB libspdk_bdev_virtio.a 00:45:16.029 LIB libspdk_bdev_nvme.a 00:45:16.287 CC module/event/subsystems/scheduler/scheduler.o 00:45:16.287 CC module/event/subsystems/vmd/vmd.o 00:45:16.287 CC module/event/subsystems/vmd/vmd_rpc.o 00:45:16.287 CC module/event/subsystems/sock/sock.o 00:45:16.287 CC module/event/subsystems/iobuf/iobuf.o 00:45:16.287 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:45:16.287 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:45:16.287 CC module/event/subsystems/keyring/keyring.o 00:45:16.287 LIB libspdk_event_vhost_blk.a 00:45:16.287 LIB libspdk_event_sock.a 00:45:16.287 LIB libspdk_event_scheduler.a 00:45:16.544 LIB libspdk_event_vmd.a 00:45:16.545 LIB libspdk_event_iobuf.a 00:45:16.545 LIB libspdk_event_keyring.a 00:45:16.545 CC module/event/subsystems/accel/accel.o 00:45:16.803 LIB libspdk_event_accel.a 00:45:17.060 CC module/event/subsystems/bdev/bdev.o 00:45:17.060 LIB libspdk_event_bdev.a 00:45:17.319 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:45:17.319 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:45:17.319 CC module/event/subsystems/scsi/scsi.o 00:45:17.319 CC 
module/event/subsystems/nbd/nbd.o 00:45:17.577 LIB libspdk_event_nbd.a 00:45:17.577 LIB libspdk_event_scsi.a 00:45:17.577 LIB libspdk_event_nvmf.a 00:45:17.836 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:45:17.836 CC module/event/subsystems/iscsi/iscsi.o 00:45:17.836 LIB libspdk_event_vhost_scsi.a 00:45:17.836 LIB libspdk_event_iscsi.a 00:45:18.094 CXX app/trace/trace.o 00:45:18.094 CC app/trace_record/trace_record.o 00:45:18.094 CC app/spdk_lspci/spdk_lspci.o 00:45:18.094 CC app/spdk_nvme_identify/identify.o 00:45:18.094 CC app/spdk_nvme_perf/perf.o 00:45:18.094 CC app/nvmf_tgt/nvmf_main.o 00:45:18.094 CC app/iscsi_tgt/iscsi_tgt.o 00:45:18.094 CC app/spdk_tgt/spdk_tgt.o 00:45:18.094 CC examples/accel/perf/accel_perf.o 00:45:18.353 LINK spdk_lspci 00:45:18.353 CC test/accel/dif/dif.o 00:45:18.353 LINK spdk_trace_record 00:45:18.353 LINK nvmf_tgt 00:45:18.353 LINK iscsi_tgt 00:45:18.353 LINK spdk_tgt 00:45:18.353 LINK spdk_trace 00:45:18.611 LINK accel_perf 00:45:18.611 LINK spdk_nvme_identify 00:45:18.611 LINK dif 00:45:18.611 LINK spdk_nvme_perf 00:45:22.797 CC test/app/bdev_svc/bdev_svc.o 00:45:23.056 LINK bdev_svc 00:45:29.618 CC test/bdev/bdevio/bdevio.o 00:45:30.997 LINK bdevio 00:45:39.145 CC examples/bdev/hello_world/hello_bdev.o 00:45:39.713 LINK hello_bdev 00:45:39.971 CC app/spdk_nvme_discover/discovery_aer.o 00:45:40.906 LINK spdk_nvme_discover 00:45:47.478 CC test/blobfs/mkfs/mkfs.o 00:45:48.413 LINK mkfs 00:45:58.384 TEST_HEADER include/spdk/config.h 00:45:58.384 CXX test/cpp_headers/accel.o 00:45:58.384 CXX test/cpp_headers/accel_module.o 00:45:59.761 CXX test/cpp_headers/assert.o 00:46:00.697 CXX test/cpp_headers/barrier.o 00:46:02.601 CXX test/cpp_headers/base64.o 00:46:03.612 CXX test/cpp_headers/bdev.o 00:46:05.519 CXX test/cpp_headers/bdev_module.o 00:46:06.895 CXX test/cpp_headers/bdev_zone.o 00:46:08.269 CXX test/cpp_headers/bit_array.o 00:46:09.643 CXX test/cpp_headers/bit_pool.o 00:46:11.017 CXX test/cpp_headers/blob.o 00:46:11.951 CXX test/cpp_headers/blob_bdev.o 00:46:13.855 CXX test/cpp_headers/blobfs.o 00:46:15.229 CXX test/cpp_headers/blobfs_bdev.o 00:46:16.603 CXX test/cpp_headers/conf.o 00:46:17.978 CXX test/cpp_headers/config.o 00:46:17.978 CXX test/cpp_headers/cpuset.o 00:46:19.353 CXX test/cpp_headers/crc16.o 00:46:19.611 CXX test/cpp_headers/crc32.o 00:46:20.985 CXX test/cpp_headers/crc64.o 00:46:21.923 CC examples/blob/hello_world/hello_blob.o 00:46:22.492 CXX test/cpp_headers/dif.o 00:46:23.062 LINK hello_blob 00:46:23.998 CXX test/cpp_headers/dma.o 00:46:25.384 CXX test/cpp_headers/endian.o 00:46:26.760 CXX test/cpp_headers/env.o 00:46:27.695 CXX test/cpp_headers/env_dpdk.o 00:46:29.069 CXX test/cpp_headers/event.o 00:46:30.970 CXX test/cpp_headers/fd.o 00:46:31.908 CXX test/cpp_headers/fd_group.o 00:46:33.306 CXX test/cpp_headers/file.o 00:46:34.678 CXX test/cpp_headers/ftl.o 00:46:36.575 CXX test/cpp_headers/gpt_spec.o 00:46:37.510 CXX test/cpp_headers/hexlify.o 00:46:38.888 CXX test/cpp_headers/histogram_data.o 00:46:40.314 CXX test/cpp_headers/idxd.o 00:46:41.700 CXX test/cpp_headers/idxd_spec.o 00:46:43.076 CXX test/cpp_headers/init.o 00:46:44.452 CXX test/cpp_headers/ioat.o 00:46:46.352 CXX test/cpp_headers/ioat_spec.o 00:46:47.287 CXX test/cpp_headers/iscsi_spec.o 00:46:49.188 CXX test/cpp_headers/json.o 00:46:50.573 CXX test/cpp_headers/jsonrpc.o 00:46:51.947 CXX test/cpp_headers/keyring.o 00:46:53.321 CXX test/cpp_headers/keyring_module.o 00:46:54.692 CXX test/cpp_headers/likely.o 00:46:56.066 CXX test/cpp_headers/log.o 00:46:57.440 
CXX test/cpp_headers/lvol.o 00:46:58.817 CXX test/cpp_headers/memory.o 00:47:00.196 CXX test/cpp_headers/mmio.o 00:47:01.130 CXX test/cpp_headers/nbd.o 00:47:01.130 CXX test/cpp_headers/notify.o 00:47:03.030 CXX test/cpp_headers/nvme.o 00:47:04.949 CXX test/cpp_headers/nvme_intel.o 00:47:06.335 CXX test/cpp_headers/nvme_ocssd.o 00:47:07.707 CXX test/cpp_headers/nvme_ocssd_spec.o 00:47:09.604 CXX test/cpp_headers/nvme_spec.o 00:47:10.977 CXX test/cpp_headers/nvme_zns.o 00:47:12.878 CXX test/cpp_headers/nvmf.o 00:47:14.257 CXX test/cpp_headers/nvmf_cmd.o 00:47:16.158 CXX test/cpp_headers/nvmf_fc_spec.o 00:47:18.057 CXX test/cpp_headers/nvmf_spec.o 00:47:19.432 CXX test/cpp_headers/nvmf_transport.o 00:47:21.338 CXX test/cpp_headers/opal.o 00:47:22.714 CXX test/cpp_headers/opal_spec.o 00:47:24.090 CXX test/cpp_headers/pci_ids.o 00:47:25.024 CXX test/cpp_headers/pipe.o 00:47:26.396 CXX test/cpp_headers/queue.o 00:47:26.654 CXX test/cpp_headers/reduce.o 00:47:28.030 CXX test/cpp_headers/rpc.o 00:47:28.289 CXX test/cpp_headers/scheduler.o 00:47:30.194 CXX test/cpp_headers/scsi.o 00:47:30.454 CC test/dma/test_dma/test_dma.o 00:47:31.391 CXX test/cpp_headers/scsi_spec.o 00:47:32.771 CXX test/cpp_headers/sock.o 00:47:33.031 LINK test_dma 00:47:33.969 CXX test/cpp_headers/stdinc.o 00:47:34.915 CXX test/cpp_headers/string.o 00:47:36.322 CXX test/cpp_headers/thread.o 00:47:37.256 CXX test/cpp_headers/trace.o 00:47:38.189 CXX test/cpp_headers/trace_parser.o 00:47:39.123 CXX test/cpp_headers/tree.o 00:47:39.123 CXX test/cpp_headers/ublk.o 00:47:40.059 CXX test/cpp_headers/util.o 00:47:40.996 CXX test/cpp_headers/uuid.o 00:47:41.930 CXX test/cpp_headers/version.o 00:47:42.188 CXX test/cpp_headers/vfio_user_pci.o 00:47:43.121 CXX test/cpp_headers/vfio_user_spec.o 00:47:44.054 CXX test/cpp_headers/vhost.o 00:47:44.989 CXX test/cpp_headers/vmd.o 00:47:45.923 CXX test/cpp_headers/xor.o 00:47:46.494 CXX test/cpp_headers/zipf.o 00:47:46.753 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:47:47.689 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:47:47.948 LINK nvme_fuzz 00:47:48.206 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:47:48.771 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:47:50.155 LINK vhost_fuzz 00:47:51.089 CC test/env/mem_callbacks/mem_callbacks.o 00:47:51.089 LINK iscsi_fuzz 00:47:53.620 CC test/event/event_perf/event_perf.o 00:47:53.620 LINK mem_callbacks 00:47:54.187 LINK event_perf 00:47:54.187 CC examples/bdev/bdevperf/bdevperf.o 00:47:56.715 LINK bdevperf 00:47:57.649 CC test/env/vtophys/vtophys.o 00:47:58.581 LINK vtophys 00:48:03.886 CC test/lvol/esnap/esnap.o 00:48:05.784 CC app/spdk_top/spdk_top.o 00:48:07.683 LINK spdk_top 00:48:08.250 CC test/nvme/aer/aer.o 00:48:09.185 CC test/event/reactor/reactor.o 00:48:09.185 LINK aer 00:48:09.753 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:48:09.753 LINK reactor 00:48:10.319 LINK env_dpdk_post_init 00:48:13.610 LINK esnap 00:48:16.894 CC test/nvme/reset/reset.o 00:48:16.894 CC test/app/histogram_perf/histogram_perf.o 00:48:17.459 LINK histogram_perf 00:48:18.026 LINK reset 00:48:20.557 CC app/vhost/vhost.o 00:48:21.490 LINK vhost 00:48:22.866 CC test/event/reactor_perf/reactor_perf.o 00:48:23.800 LINK reactor_perf 00:48:24.058 CC examples/blob/cli/blobcli.o 00:48:25.435 LINK blobcli 00:48:33.552 CC test/app/jsoncat/jsoncat.o 00:48:33.811 LINK jsoncat 00:48:36.349 CC test/env/memory/memory_ut.o 00:48:37.283 CC test/env/pci/pci_ut.o 00:48:38.254 LINK pci_ut 00:48:39.628 LINK memory_ut 00:48:39.886 CC test/event/app_repeat/app_repeat.o 00:48:40.821 
LINK app_repeat 00:48:42.214 CC examples/ioat/perf/perf.o 00:48:42.780 LINK ioat_perf 00:48:45.304 CC examples/ioat/verify/verify.o 00:48:45.869 LINK verify 00:48:46.135 CC test/app/stub/stub.o 00:48:46.701 CC examples/nvme/hello_world/hello_world.o 00:48:46.701 CC test/nvme/sgl/sgl.o 00:48:46.961 LINK stub 00:48:47.529 LINK hello_world 00:48:47.787 LINK sgl 00:48:57.756 CC examples/nvme/reconnect/reconnect.o 00:48:59.134 LINK reconnect 00:49:02.416 CC examples/nvme/nvme_manage/nvme_manage.o 00:49:04.942 LINK nvme_manage 00:49:23.024 CC examples/sock/hello_world/hello_sock.o 00:49:23.024 CC test/event/scheduler/scheduler.o 00:49:23.024 LINK hello_sock 00:49:23.024 LINK scheduler 00:49:23.591 CC test/nvme/e2edp/nvme_dp.o 00:49:24.973 LINK nvme_dp 00:49:25.585 CC test/nvme/overhead/overhead.o 00:49:27.488 LINK overhead 00:49:31.672 CC test/rpc_client/rpc_client_test.o 00:49:32.609 LINK rpc_client_test 00:49:39.169 CC test/thread/poller_perf/poller_perf.o 00:49:39.428 LINK poller_perf 00:49:39.996 CC test/thread/lock/spdk_lock.o 00:49:41.900 CC examples/nvme/arbitration/arbitration.o 00:49:43.278 LINK arbitration 00:49:43.538 LINK spdk_lock 00:49:48.805 CC app/spdk_dd/spdk_dd.o 00:49:50.710 LINK spdk_dd 00:49:51.276 CC app/fio/nvme/fio_plugin.o 00:49:53.178 LINK spdk_nvme 00:49:55.076 CC app/fio/bdev/fio_plugin.o 00:49:56.008 CC examples/nvme/hotplug/hotplug.o 00:49:56.574 LINK spdk_bdev 00:49:57.139 LINK hotplug 00:49:58.513 CC test/nvme/err_injection/err_injection.o 00:49:58.514 CC test/nvme/startup/startup.o 00:49:59.448 LINK err_injection 00:49:59.448 LINK startup 00:50:17.564 CC test/nvme/reserve/reserve.o 00:50:18.130 LINK reserve 00:50:18.130 CC test/nvme/simple_copy/simple_copy.o 00:50:19.505 LINK simple_copy 00:50:24.774 CC test/nvme/connect_stress/connect_stress.o 00:50:26.148 LINK connect_stress 00:50:32.708 CC examples/nvme/cmb_copy/cmb_copy.o 00:50:32.708 CC examples/nvme/abort/abort.o 00:50:32.708 LINK cmb_copy 00:50:33.276 LINK abort 00:50:35.177 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:50:36.115 LINK pmr_persistence 00:50:39.402 CC examples/vmd/lsvmd/lsvmd.o 00:50:40.334 LINK lsvmd 00:50:55.204 CC test/nvme/boot_partition/boot_partition.o 00:50:55.204 LINK boot_partition 00:50:55.204 CC test/nvme/compliance/nvme_compliance.o 00:50:56.581 LINK nvme_compliance 00:51:06.579 CC test/nvme/fused_ordering/fused_ordering.o 00:51:06.579 LINK fused_ordering 00:51:10.768 CC test/nvme/doorbell_aers/doorbell_aers.o 00:51:11.026 LINK doorbell_aers 00:51:12.402 CC examples/vmd/led/led.o 00:51:12.661 LINK led 00:51:13.229 CC test/nvme/fdp/fdp.o 00:51:14.166 CC examples/nvmf/nvmf/nvmf.o 00:51:14.732 LINK fdp 00:51:15.666 LINK nvmf 00:51:20.947 CC examples/util/zipf/zipf.o 00:51:21.204 LINK zipf 00:51:26.468 CC test/nvme/cuse/cuse.o 00:51:27.841 CC examples/thread/thread/thread_ex.o 00:51:28.775 LINK thread 00:51:31.308 LINK cuse 00:51:31.875 CC examples/idxd/perf/perf.o 00:51:32.816 LINK idxd_perf 00:51:36.100 CC examples/interrupt_tgt/interrupt_tgt.o 00:51:37.141 LINK interrupt_tgt 00:52:15.855 12:34:12 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:52:15.855 make[1]: Nothing to be done for 'clean'. 
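[Editor's note] The entries that follow stop the CPU-load and vmstat collectors started at the beginning of the autopackage step by reading their pid files and sending SIGTERM from an EXIT trap. A simplified sketch of that pid-file pattern is below; which side actually writes the pid file is not visible in this log, so the wrapper does it here, and all names and paths are illustrative.

    # Illustrative start/stop pattern for the resource monitors seen in this log.
    POWER_DIR=/home/vagrant/spdk_repo/output/power

    start_monitors() {
        for mon in collect-cpu-load collect-vmstat; do
            "./scripts/perf/pm/$mon" -d "$POWER_DIR" -l -p "monitor.autopackage.sh.$(date +%s)" &
            echo $! > "$POWER_DIR/$mon.pid"   # remember the collector's pid
        done
    }

    stop_monitors() {
        for mon in collect-cpu-load collect-vmstat; do
            pidfile="$POWER_DIR/$mon.pid"
            [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
        done
    }

    trap stop_monitors EXIT
    start_monitors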
00:52:19.132 12:34:17 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:52:19.132 12:34:17 -- common/autotest_common.sh@726 -- $ xtrace_disable 00:52:19.132 12:34:17 -- common/autotest_common.sh@10 -- $ set +x 00:52:19.132 12:34:17 -- spdk/autopackage.sh@48 -- $ timing_finish 00:52:19.132 12:34:17 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:52:19.132 12:34:17 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:52:19.132 12:34:17 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:52:19.132 12:34:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:52:19.132 12:34:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:52:19.132 12:34:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:52:19.132 12:34:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:19.132 12:34:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:52:19.132 12:34:17 -- pm/common@44 -- $ pid=184986 00:52:19.132 12:34:17 -- pm/common@50 -- $ kill -TERM 184986 00:52:19.132 12:34:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:52:19.132 12:34:17 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:52:19.132 12:34:17 -- pm/common@44 -- $ pid=184987 00:52:19.132 12:34:17 -- pm/common@50 -- $ kill -TERM 184987 00:52:19.132 + [[ -n 2301 ]] 00:52:19.132 + sudo kill 2301 00:52:19.707 [Pipeline] } 00:52:19.726 [Pipeline] // timeout 00:52:19.731 [Pipeline] } 00:52:19.748 [Pipeline] // stage 00:52:19.753 [Pipeline] } 00:52:19.770 [Pipeline] // catchError 00:52:19.780 [Pipeline] stage 00:52:19.782 [Pipeline] { (Stop VM) 00:52:19.796 [Pipeline] sh 00:52:20.076 + vagrant halt 00:52:23.356 ==> default: Halting domain... 00:52:33.330 [Pipeline] sh 00:52:33.604 + vagrant destroy -f 00:52:36.132 ==> default: Removing domain... 00:52:36.708 [Pipeline] sh 00:52:36.993 + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output 00:52:37.003 [Pipeline] } 00:52:37.020 [Pipeline] // stage 00:52:37.026 [Pipeline] } 00:52:37.042 [Pipeline] // dir 00:52:37.047 [Pipeline] } 00:52:37.062 [Pipeline] // wrap 00:52:37.068 [Pipeline] } 00:52:37.078 [Pipeline] // catchError 00:52:37.085 [Pipeline] stage 00:52:37.087 [Pipeline] { (Epilogue) 00:52:37.095 [Pipeline] sh 00:52:37.377 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:52:55.519 [Pipeline] catchError 00:52:55.521 [Pipeline] { 00:52:55.533 [Pipeline] sh 00:52:55.812 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:52:56.069 Artifacts sizes are good 00:52:56.077 [Pipeline] } 00:52:56.094 [Pipeline] // catchError 00:52:56.104 [Pipeline] archiveArtifacts 00:52:56.110 Archiving artifacts 00:52:56.465 [Pipeline] cleanWs 00:52:56.475 [WS-CLEANUP] Deleting project workspace... 00:52:56.475 [WS-CLEANUP] Deferred wipeout is used... 00:52:56.481 [WS-CLEANUP] done 00:52:56.482 [Pipeline] } 00:52:56.497 [Pipeline] // stage 00:52:56.502 [Pipeline] } 00:52:56.515 [Pipeline] // node 00:52:56.520 [Pipeline] End of Pipeline 00:52:56.617 Finished: SUCCESS